Sample records for parallel engineering optimisation

  1. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1999-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
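The abstract's closing speculation, that complex behaviour can be simulated by the interaction of many simple models, cites cellular automata as an example. A minimal sketch of that idea (ours, not the paper's): a one-dimensional elementary cellular automaton in which every cell updates from purely local state, which is why such models parallelise across processors.

```python
def step(cells, rule):
    """One synchronous update of a 1-D elementary cellular automaton.

    Every cell depends only on its own state and its two neighbours, so
    all cells could be updated in parallel: the intrinsically parallel
    structure the abstract calls for.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit number
        out.append((rule >> index) & 1)               # look up the rule-table bit
    return out

# Rule 110, a famously complex (Turing-complete) elementary CA.
cells = [0] * 31
cells[15] = 1                     # single seed cell
for _ in range(8):
    cells = step(cells, 110)
```

Despite the trivial per-cell rule, the aggregate pattern grows and interacts in ways no single cell "knows about", which is the point of the analogy.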

  2. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    PubMed Central

    2010-01-01

    Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and find that it converges much faster than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for use as part of a global/local hybrid search algorithm; secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
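The island model underlying the piES can be sketched serially: independent subpopulations evolve and periodically exchange their best individuals around a ring. In the paper each island runs on its own node and migration is an asynchronous message; the toy (mu+lambda) version below keeps everything in one process for clarity and uses a sphere function, not the gap gene model.

```python
import random

def island_es(fitness, dim, islands=4, pop=8, gens=30, migrate_every=5, seed=1):
    """Toy island-model (mu+lambda) evolution strategy.

    Each island evolves its own subpopulation; every few generations the
    best individual migrates to the next island (ring topology)."""
    rng = random.Random(seed)
    pops = [[[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
            for _ in range(islands)]
    for g in range(gens):
        for isl in range(islands):
            parents = sorted(pops[isl], key=fitness)[:pop // 2]
            children = [[x + rng.gauss(0, 0.3) for x in rng.choice(parents)]
                        for _ in range(pop - len(parents))]
            pops[isl] = parents + children
        if g % migrate_every == 0:                 # ring migration of each island's best
            bests = [min(p, key=fitness) for p in pops]
            for isl in range(islands):
                pops[(isl + 1) % islands][-1] = bests[isl]
    return min((min(p, key=fitness) for p in pops), key=fitness)

best = island_es(lambda x: sum(v * v for v in x), dim=3)
```

Migration is what distinguishes the island model from independent restarts: good genetic material found on one island eventually seeds all of them.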

  3. Optimisation of a parallel ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
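The "parallelism as a modular option via high-level message-passing routines" approach can be sketched with a toy one-dimensional halo exchange, the communication pattern such domain-decomposed ocean models rely on. This is our sketch, with plain lists standing in for MPI ranks:

```python
def exchange_halos(subdomains):
    """High-level 'message passing' step: copy edge values between
    neighbouring 1-D subdomains (periodic).  In a real model this would
    be an MPI send/receive hidden behind a routine like this one."""
    n = len(subdomains)
    for i, sub in enumerate(subdomains):
        left = subdomains[(i - 1) % n]
        right = subdomains[(i + 1) % n]
        sub[0] = left[-2]        # halo cells mirror neighbour interiors
        sub[-1] = right[1]

def diffuse(sub, alpha=0.25):
    """Explicit diffusion update on interior cells only (Jacobi-style)."""
    interior = [sub[j] + alpha * (sub[j - 1] - 2 * sub[j] + sub[j + 1])
                for j in range(1, len(sub) - 1)]
    sub[1:-1] = interior

# Split a periodic 1-D field into two subdomains, one halo cell per side.
field = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
subs = [[0.0] + field[0:4] + [0.0], [0.0] + field[4:8] + [0.0]]
for _ in range(3):
    exchange_halos(subs)         # communicate, then compute
    for s in subs:
        diffuse(s)
```

Because halos are refreshed before each compute step, the decomposed run reproduces the single-domain result exactly, and the total tracer content is conserved.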

  4. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. 
    With the combined performance and energy improvement, the KNL platform was 37.5 % more power-efficient than the CPU platform. The optimisations also substantially improved parallel scalability on both clusters: the code scaled to 40 CPU nodes and 30 KNL nodes, with parallel efficiencies of 70.4 % and 42.2 %, respectively.
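Optimisation (3), reducing unnecessary memory access, can be illustrated by loop fusion: computing a value in one sweep instead of writing an intermediate array and re-reading it. The toy kernel below (Bolton's saturation vapour pressure formula) is our own example, not GNAQPMS code:

```python
import math

def saturation_vapour_unfused(temps_k):
    """Two sweeps: write an intermediate array, then read it back."""
    celsius = [t - 273.15 for t in temps_k]             # sweep 1 writes memory
    return [6.112 * math.exp(17.67 * c / (c + 243.5))   # sweep 2 re-reads it
            for c in celsius]

def saturation_vapour_fused(temps_k):
    """One sweep: the intermediate stays in a local value, so the array
    is touched once, roughly halving memory traffic for this kernel."""
    return [6.112 * math.exp(17.67 * (t - 273.15) / ((t - 273.15) + 243.5))
            for t in temps_k]
```

Both variants are numerically identical; the fused one simply avoids a round trip through memory, which is exactly the kind of change that improves cache efficiency on KNL and Xeon alike.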

  5. Low-Speed Investigation of Upper-Surface Leading-Edge Blowing on a High-Speed Civil Transport Configuration

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.; Laflin, Brenda E. Gile; Kemmerly, Guy T.; Campbell, Bryan A.

    1999-01-01

  6. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. The NEF is a framework capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was based on a compact digital neural core consisting of 64 neurons instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a high-speed and resource-efficient means of performing massively parallel neuromorphic pattern recognition and classification tasks.
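The time-multiplexing idea, 64 virtual neurons sharing one physical update circuit, can be sketched in software. The leaky integrate-and-fire update below is a toy of our choosing, not the NEF's neuron model:

```python
def physical_neuron_update(state, current, leak=0.9, threshold=1.0):
    """One leaky integrate-and-fire update: the single physical circuit."""
    state = state * leak + current
    if state >= threshold:
        return 0.0, 1            # reset and emit a spike
    return state, 0

def tick(states, currents):
    """One time-multiplexed pass: the one physical update is reused for
    each of the 64 virtual neurons, one per clock slot."""
    spikes = 0
    for i in range(len(states)):
        states[i], s = physical_neuron_update(states[i], currents[i])
        spikes += s
    return spikes

states = [0.0] * 64              # 64 virtual neurons share one physical neuron
total = sum(tick(states, [0.3] * 64) for _ in range(12))
```

The hardware cost is one update circuit plus a small state memory, which is why identical cores can be tiled to scale the network up.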

  7. ATLAS software configuration and build tool optimisation

    NASA Astrophysics Data System (ADS)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    The ATLAS software code base is over 6 million lines of code organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers, and is used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and the project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS.
    The use of parallelism, caching and code optimisation reduced software build time and environment setup time significantly (by several times), increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
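The package-level build parallelism described above (independent packages built concurrently, dependent ones ordered) can be sketched as a level-by-level topological schedule. The package names below are hypothetical, not real ATLAS packages:

```python
from concurrent.futures import ThreadPoolExecutor

def build_levels(deps):
    """Group packages into levels: every package in a level depends only
    on earlier levels, so a level's packages can be built in parallel."""
    remaining = dict(deps)
    levels = []
    while remaining:
        ready = [p for p, d in remaining.items()
                 if all(dep not in remaining for dep in d)]
        if not ready:
            raise ValueError("dependency cycle")
        levels.append(sorted(ready))
        for p in ready:
            del remaining[p]
    return levels

def parallel_build(deps, build_one):
    """Build level by level, dispatching each level's packages together."""
    built = []
    with ThreadPoolExecutor() as pool:
        for level in build_levels(deps):
            built.extend(pool.map(build_one, level))   # independent builds in parallel
    return built

# Hypothetical package graph.
deps = {"Core": [], "Math": ["Core"], "Geo": ["Core"], "Reco": ["Math", "Geo"]}
order = parallel_build(deps, lambda pkg: pkg)
```

A real build tool would also exploit finer-grained parallelism inside each package, as CMT does for dependent applications and libraries.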

  8. Computational aero-acoustics for fan duct propagation and radiation. Current status and application to turbofan liner optimisation

    NASA Astrophysics Data System (ADS)

    Astley, R. J.; Sugimoto, R.; Mustafi, P.

    2011-08-01

    Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of computational aero-acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed, and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state-of-the-art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry-scale problems.

  9. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen much significant progress in the field of application-specific processors. One example is the network security processor (NSP), which performs various cryptographic operations specified by network security protocols and helps to offload the computation-intensive burden from network processors (NPs). This article presents a high-performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.
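The descriptor-based flow (fragment a large packet, distribute the fragments to parallel crypto engines, reassemble in order) can be sketched as follows. The XOR "cipher" is a deliberate toy stand-in for the real IPSec/SSL algorithms, and the fragment size is invented:

```python
from concurrent.futures import ThreadPoolExecutor

FRAGMENT = 16  # bytes handed to one crypto engine per descriptor (toy value)

def toy_encrypt(args):
    """Stand-in for one crypto engine: XOR with a key byte.
    Illustrates only the dispatch pattern, not real cryptography."""
    index, fragment, key = args
    return index, bytes(b ^ key for b in fragment)

def process_packet(packet, key=0x5A, engines=4):
    """Fragment a packet, fan the pieces out to parallel engines,
    then reassemble in packet order."""
    pieces = [(i, packet[i:i + FRAGMENT], key)
              for i in range(0, len(packet), FRAGMENT)]
    with ThreadPoolExecutor(max_workers=engines) as pool:
        done = list(pool.map(toy_encrypt, pieces))
    done.sort(key=lambda item: item[0])        # restore original fragment order
    return b"".join(frag for _, frag in done)

packet = bytes(range(64))
cipher = process_packet(packet)
```

Keeping an index with each fragment is what lets the engines run independently while the output stream stays in order, the property the NSP's descriptors provide in hardware.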

  10. An Optimisation Procedure for the Conceptual Analysis of Different Aerodynamic Configurations

    DTIC Science & Technology

    2000-06-01

    G. Lombardi, G. Mengali, Department of Aerospace Engineering, University of Pisa, Via Diotisalvi 2, 56126 Pisa, Italy; F. Beux, Scuola Normale Superiore. (The abstract text of this record is fragmentary; it refers to engine, gear and system weights and centre-of-gravity positions, to configurations with improved performance, and to design parameters arranged for cruise: payload, velocity, range, cruise height, engine.)

  11. A shrinking hypersphere PSO for engineering optimisation problems

    NASA Astrophysics Data System (ADS)

    Yadav, Anupam; Deep, Kusum

    2016-03-01

    Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach to solving COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the movement of each particle is set to occur under the influence of shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of the SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems, with an exhaustive comparison of the results provided both statistically and graphically. Moreover, three engineering design problems, namely the welded beam, compression spring and pressure vessel design problems, are solved using SHPSO and the results are compared with those of state-of-the-art algorithms.
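A minimal sketch of the shrinking-hypersphere idea, assuming a simplified update rule of our own (the paper's exact SHPSO update differs): each particle's move is clipped to a hypersphere around its current position whose radius contracts every iteration.

```python
import random

def shpso(f, dim, bounds=(-5.0, 5.0), particles=20, iters=60, seed=3):
    """Toy PSO in which every step is clipped to a shrinking hypersphere."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    pbest = [list(x) for x in xs]
    gbest = min(pbest, key=f)
    radius = hi - lo
    for _ in range(iters):
        radius *= 0.95                                 # hypersphere shrinks
        for i, x in enumerate(xs):
            step = [rng.uniform(0, 1) * (pbest[i][d] - x[d]) +
                    rng.uniform(0, 1) * (gbest[d] - x[d]) +
                    rng.gauss(0, 0.1)
                    for d in range(dim)]
            norm = sum(s * s for s in step) ** 0.5
            if norm > radius:                          # clip the move to the sphere
                step = [s * radius / norm for s in step]
            xs[i] = [min(hi, max(lo, x[d] + step[d])) for d in range(dim)]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = min(pbest + [gbest], key=f)
    return gbest

best = shpso(lambda x: sum(v * v for v in x), dim=2)
```

The shrinking radius plays the role usually taken by inertia-weight damping: wide exploration early, fine local search late.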

  12. Optimising the Parallelisation of OpenFOAM Simulations

    DTIC Science & Technology

    2014-06-01

    Shannon Keough, Maritime Division, Defence Science and Technology Organisation, DSTO-TR-2987. ABSTRACT: The OpenFOAM computational fluid dynamics toolbox allows parallel computation of...performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI

  13. Producing a glycosylating Escherichia coli cell factory: The placement of the bacterial oligosaccharyl transferase pglB onto the genome.

    PubMed

    Strutton, Benjamin; Jaffé, Stephen R P; Pandhal, Jagroop; Wright, Phillip C

    2018-01-01

    Although Escherichia coli has been engineered to perform N-glycosylation of recombinant proteins, an optimal glycosylating strain has not been created. By inserting a codon optimised Campylobacter oligosaccharyltransferase onto the E. coli chromosome, we created a glycoprotein platform strain, where the target glycoprotein, sugar synthesis and glycosyltransferase enzymes, can be inserted using expression vectors to produce the desired homogenous glycoform. To assess the functionality and glycoprotein producing capacity of the chromosomally based OST, a combined Western blot and parallel reaction monitoring mass spectrometry approach was applied, with absolute quantification of glycoprotein. We demonstrated that chromosomal oligosaccharyltransferase remained functional and facilitated N-glycosylation. Although the engineered strain produced less total recombinant protein, the glycosylation efficiency increased by 85%, and total glycoprotein production was enhanced by 17%.

  14. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied to two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques, together with optimisations that alleviate NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency on each distinct architecture. We report significant speedups for single-thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
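The Roofline correlation mentioned above reduces to a one-line model: attainable performance is the minimum of the compute peak and the product of arithmetic intensity and memory bandwidth. A sketch with hypothetical machine numbers (not those of the paper's platforms):

```python
def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable GFLOP/s under the Roofline model:
    min(compute peak, arithmetic intensity x memory bandwidth)."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Hypothetical machine: 500 GFLOP/s peak, 100 GB/s memory bandwidth.
mem_bound = roofline(500, 100, 0.25)   # 0.25 flops/byte: memory bound
cpu_bound = roofline(500, 100, 10.0)   # 10 flops/byte: compute bound
```

Plotting a kernel's measured performance against this ceiling shows at a glance whether further SIMD work (compute side) or memory-layout work (bandwidth side) is the profitable next step.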

  15. A supportive architecture for CFD-based design optimisation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind-tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward: a CFD-based design problem typically has high dimensionality and multiple objectives and constraints. An integrated architecture for CFD-based design optimisation is therefore desirable, yet our review of existing work found that very few researchers have studied assistive tools to facilitate it. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or at the data level to fully utilise the capabilities of the different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation.
    To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the architecture and algorithms perform successfully and efficiently on a design optimisation with over 200 design variables.
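The coupling such an architecture provides (an intelligent optimiser driving many expensive CFD evaluations in parallel) can be sketched as below. The "CFD solver" is a cheap stand-in function and the optimiser a generic contracting random search, not the paper's algorithms:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def cfd_stand_in(design):
    """Stand-in for an expensive CFD evaluation (toy drag-like metric).
    In the architecture this call would dispatch a real CFD job."""
    return sum((x - 0.5) ** 2 for x in design)

def optimise(evaluate, dim=5, pop=16, gens=15, seed=11):
    """Contracting random search with each generation's whole population
    evaluated in parallel (threads here; batched solver jobs in practice)."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    centre, spread = [0.0] * dim, 1.0
    with ThreadPoolExecutor() as pool:
        for _ in range(gens):
            cand = [[c + rng.uniform(-spread, spread) for c in centre]
                    for _ in range(pop)]
            costs = list(pool.map(evaluate, cand))     # parallel 'CFD' calls
            gen_best = min(range(pop), key=costs.__getitem__)
            if costs[gen_best] < best_cost:
                best, best_cost = cand[gen_best], costs[gen_best]
                centre = best
            spread *= 0.8                              # contract the search region
    return best, best_cost

design, cost = optimise(cfd_stand_in)
```

The point of the sketch is the interface: the optimiser only ever sees a design-in, cost-out function, which is what lets different CFD toolsets be swapped in at the code or data level.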

  16. Optimisation of insect cell growth in deep-well blocks: development of a high-throughput insect cell expression screen.

    PubMed

    Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian

    2005-01-01

    This report describes a method for culturing insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead-time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.

  17. Improved packing of protein side chains with parallel ant colonies.

    PubMed

    Quan, Lijun; Lü, Qiang; Li, Haiou; Xia, Xiaoyan; Wu, Hongjie

    2014-01-01

    The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, protein design and ligand docking applications. Many existing solutions are modelled as a computational optimisation problem. Besides the design of the search algorithm, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad: even if the search has found the lowest energy, there is no certainty of obtaining the protein structures with correct side chains. We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4, and analysed the results from different perspectives, in terms of protein chains and individual residues. In this comprehensive benchmark testing, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of both CIS-RR and SCWRL4. Finally, we also showed the advantage of using the subrotamer strategy. All results confirm that our parallel approach is competitive with state-of-the-art solutions for packing side chains.
    This parallel approach combines various sources of searching intelligence and energy functions to pack protein side chains. It provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms.
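The core pacoPacker idea (several ant colonies depositing into one shared pheromone matrix while each ant builds one discrete choice per position, analogous to one rotamer per residue) can be sketched on a toy additive cost table. A real force-field energy and genuinely parallel colonies are beyond this sketch:

```python
import random

def aco_pack(costs, colonies=3, ants=10, iters=40, rho=0.1, seed=7):
    """Toy ant-colony search for one choice per position, with all
    colonies reinforcing a single shared pheromone matrix.
    'costs[i][k]' is the (invented) energy of option k at position i."""
    rng = random.Random(seed)
    n, k = len(costs), len(costs[0])
    pher = [[1.0] * k for _ in range(n)]
    best, best_e = None, float("inf")
    for _ in range(iters):
        for _colony in range(colonies):        # serial stand-in for parallel colonies
            for _ant in range(ants):
                sol = [rng.choices(range(k), weights=pher[i])[0] for i in range(n)]
                e = sum(costs[i][sol[i]] for i in range(n))
                if e < best_e:
                    best, best_e = sol, e
        for i in range(n):                     # evaporate, then reinforce the best
            pher[i] = [(1 - rho) * p for p in pher[i]]
            pher[i][best[i]] += 1.0
    return best, best_e

costs = [[3.0, 1.0, 2.0], [0.5, 2.5, 1.5], [2.0, 2.0, 0.1]]
sol, energy = aco_pack(costs)
```

Because the pheromone matrix is the only shared state, parallel colonies need to synchronise just that one structure, which is what makes the single-matrix design attractive.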

  18. A high performance data parallel tensor contraction framework: Application to coupled electro-mechanics

    NASA Astrophysics Data System (ADS)

    Poya, Roman; Gil, Antonio J.; Ortigosa, Rogelio

    2017-07-01

    The paper presents aspects of implementation of a new high performance tensor contraction framework for the numerical analysis of coupled and multi-physics problems on streaming architectures. In addition to explicit SIMD instructions and smart expression templates, the framework introduces domain specific constructs for the tensor cross product and its associated algebra recently rediscovered by Bonet et al. (2015, 2016) in the context of solid mechanics. The two key ingredients of the presented expression template engine are as follows: first, the capability to mathematically transform complex chains of operations into simpler equivalent expressions, potentially avoiding routes with higher computational complexity; and second, the capability to perform a compile-time depth-first or breadth-first search to find the optimal contraction indices of a large tensor network in order to minimise the number of floating point operations. For optimisations of tensor contraction such as loop transformation, loop fusion and data locality optimisations, the framework relies heavily on compile-time technologies rather than source-to-source translation or JIT techniques. Every aspect of the framework is examined through relevant performance benchmarks, including the impact of data parallelism on the performance of isomorphic and nonisomorphic tensor products, the FLOP and memory I/O optimality in the evaluation of tensor networks, the compilation cost and memory footprint of the framework and the performance of tensor cross product kernels. The framework is then applied to finite element analysis of coupled electro-mechanical problems to assess the speed-ups achieved in kernel-based numerical integration of complex electroelastic energy functionals. In this context, domain-aware expression templates combined with SIMD instructions are shown to provide a significant speed-up over classical low-level programming techniques.
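The compile-time search for optimal contraction indices is, in miniature, the classic matrix-chain ordering problem: choose the pairing order that minimises floating-point operations. A small depth-first sketch (ours, in plain Python rather than C++ templates):

```python
from functools import lru_cache

def matrix_chain_flops(dims):
    """Minimum multiply count for a matrix chain, by exhaustive
    depth-first search over split points with memoisation.
    dims = [d0, d1, ..., dn] for matrices A_i of shape (d_{i-1}, d_i)."""
    @lru_cache(maxsize=None)
    def best(i, j):
        if j - i == 1:
            return 0                                   # a single matrix: no work
        return min(best(i, k) + best(k, j) + dims[i] * dims[k] * dims[j]
                   for k in range(i + 1, j))
    return best(0, len(dims) - 1)

# Classic example: (10x30)(30x5)(5x60).
flops = matrix_chain_flops((10, 30, 5, 60))
```

Here (AB)C costs 4500 multiplies while A(BC) costs 27000, a factor-of-six gap from ordering alone; general tensor networks only widen such gaps, which is why the framework spends compile time on this search.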

  19. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.
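The multi-fidelity pattern CAMELOT implements (rank many candidates with a cheap analytical surrogate, then spend accurate computation only on the most promising) can be sketched generically. Both cost functions below are invented placeholders, not CAMELOT's astrodynamics models:

```python
def multi_fidelity_select(options, surrogate, accurate, keep=3):
    """Rank all options with a cheap surrogate, re-evaluate only the
    'keep' most promising with the expensive model, and return the
    accurately evaluated winner plus the costs actually computed."""
    shortlist = sorted(options, key=surrogate)[:keep]
    costs = {opt: accurate(opt) for opt in shortlist}
    return min(costs, key=costs.get), costs

# Toy transfer 'costs': the surrogate is a cheap, slightly biased
# approximation of the accurate model (both invented for illustration).
options = tuple(range(10))
surrogate = lambda o: abs(o - 4) + 0.8   # cheap, biased towards option 4
accurate = lambda o: abs(o - 5) * 1.3    # expensive ground truth
best, evaluated = multi_fidelity_select(options, surrogate, accurate)
```

Note the surrogate alone would pick option 4; the accurate re-evaluation of the shortlist corrects this to option 5 while paying the expensive model only three times instead of ten.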

  20. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

    Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. The model was calibrated over a 1-year period (1997) before being applied to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
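The loop Cyclops automates (run the model, compute a scalar cost, here an RMSE as in the wave-hindcast calibration, and let a derivative-free optimiser propose new parameters) can be sketched with a toy model standing in for a full Cylc suite:

```python
def calibrate(run_model, observations, params, step=0.5, iters=40):
    """Derivative-free coordinate search: a toy stand-in for the role
    Cyclops plays, with run_model replacing a Cylc model suite."""
    def cost(p):
        model = run_model(p)
        return (sum((m - o) ** 2 for m, o in zip(model, observations))
                / len(observations)) ** 0.5            # RMSE cost function
    best = list(params)
    best_c = cost(best)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for delta in (step, -step):
                trial = list(best)
                trial[i] += delta
                c = cost(trial)                        # one 'model run' per trial
                if c < best_c:
                    best, best_c, improved = trial, c, True
        if not improved:
            step *= 0.5                                # refine the search
    return best, best_c

# Hypothetical 'model': y = a * x + b over a few sample points.
xs = [0.0, 1.0, 2.0, 3.0]
obs = [1.0, 3.0, 5.0, 7.0]                             # generated with a=2, b=1
best, rmse = calibrate(lambda p: [p[0] * x + p[1] for x in xs], obs, [0.0, 0.0])
```

In the real suite each `cost` evaluation is an entire forecast run orchestrated by Cylc, which is why minimising the number of evaluations matters far more than the optimiser's own overhead.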

  1. Improved packing of protein side chains with parallel ant colonies

    PubMed Central

    2014-01-01

    Introduction The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, and protein design and ligand docking applications. Many of existing solutions are modelled as a computational optimisation problem. As well as the design of search algorithms, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad. Even if the search has found the lowest energy, there is no certainty of obtaining the protein structures with correct side chains. Methods We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamer by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. Results We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4. We analysed the results from different perspectives, in terms of protein chain and individual residues. In this comprehensive benchmark testing, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamers strategy. All results confirmed that our parallel approach is competitive to state-of-the-art solutions for packing side chains. 
Conclusions This parallel approach combines various sources of search intelligence and energy functions to pack protein side chains. It provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms. PMID:25474164
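The shared-pheromone idea can be illustrated with a minimal sketch: several colonies (simulated sequentially here; in pacoPacker they run in parallel) each pick one rotamer per residue, guided by, and reinforcing, a single pheromone matrix. The toy energy table and all parameter values below are hypothetical, not pacoPacker's actual energy functions.

```python
import random

# Toy energies: ENERGY[residue][rotamer]; illustrative values only.
ENERGY = [[3.0, 1.0, 2.0], [2.5, 2.0, 0.5], [1.5, 3.0, 1.0]]

def run_shared_pheromone_aco(energy, n_colonies=4, n_iters=30, rho=0.1, seed=0):
    rng = random.Random(seed)
    n_res, n_rot = len(energy), len(energy[0])
    pher = [[1.0] * n_rot for _ in range(n_res)]  # one matrix shared by all colonies
    best, best_e = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_colonies):               # each pass stands for one parallel colony
            sol = []
            for r in range(n_res):
                # sampling is biased by shared pheromone and by low energy
                weights = [pher[r][k] / (1.0 + energy[r][k]) for k in range(n_rot)]
                sol.append(rng.choices(range(n_rot), weights=weights)[0])
            e = sum(energy[r][k] for r, k in enumerate(sol))
            if e < best_e:
                best, best_e = sol, e
        for r in range(n_res):                    # evaporate, then reinforce the best solution
            for k in range(n_rot):
                pher[r][k] *= (1.0 - rho)
            pher[r][best[r]] += rho
    return best, best_e
```

Evaporation plus best-solution reinforcement is the standard ant-colony update; sharing one pheromone matrix is what lets otherwise independent colonies pool their search experience.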

  2. Biocatalysis engineering: the big picture.

    PubMed

    Sheldon, Roger A; Pereira, Pedro C

    2017-05-22

    In this tutorial review we describe a holistic approach to the invention, development and optimisation of biotransformations utilising isolated enzymes. Increasing attention to applied biocatalysis is motivated by its numerous economic and environmental benefits. Biocatalysis engineering concerns the development of enzymatic systems as a whole, which entails engineering its different components: substrate engineering, medium engineering, protein (enzyme) engineering, biocatalyst (formulation) engineering, biocatalytic cascade engineering and reactor engineering.

  3. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
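The weighted-residual idea can be sketched on a simple model BVP, u'' = -π² sin(πx) with u(0) = u(1) = 0: a sine-series trial solution (which satisfies the boundary conditions by construction) is substituted into the equation and the squared residual at collocation points is minimised. A plain annealed random search stands in for the particle swarm, water cycle and harmony search optimisers used in the paper; the test problem and all parameters are illustrative assumptions.

```python
import math, random

def wrf(coeffs, n_pts=20):
    # Sum of squared residuals of u'' = -pi^2 sin(pi x) for the trial
    # solution u(x) = sum_k a_k sin(k pi x); the sine basis already meets
    # u(0) = u(1) = 0, so no boundary-condition penalty term is needed.
    total = 0.0
    for i in range(1, n_pts + 1):
        x = i / (n_pts + 1)
        u2 = sum(-a * (k * math.pi) ** 2 * math.sin(k * math.pi * x)
                 for k, a in enumerate(coeffs, start=1))
        total += (u2 + math.pi ** 2 * math.sin(math.pi * x)) ** 2
    return total

def random_search(n_coeffs=3, iters=4000, seed=1):
    # Annealed random search: perturb the best point with a shrinking
    # Gaussian step, keeping only improvements (a stand-in for PSO/WCA/HS).
    rng = random.Random(seed)
    best = [0.0] * n_coeffs
    best_f = wrf(best)
    for t in range(iters):
        step = 0.2 * (0.998 ** t)
        cand = [a + rng.gauss(0.0, step) for a in best]
        f = wrf(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f
```

The exact solution is u(x) = sin(πx), so the search should drive the first Fourier coefficient towards 1 and the residual towards 0.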

  4. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for uncertainty treatment in crash simulations. It is the counterpart of reliability-based design optimisation, which is used in a probabilistic context with statistically defined parameters (variabilities).

  5. Multi-Objective and Multidisciplinary Design Optimisation (MDO) of UAV Systems using Hierarchical Asynchronous Parallel Evolutionary Algorithms

    DTIC Science & Technology

    2007-09-17

    been proposed; these include a combination of variable fidelity models, parallelisation strategies and hybridisation techniques (Coello, Veldhuizen et...Coello et al (Coello, Veldhuizen et al. 2002). 4.4.2 HIERARCHICAL POPULATION TOPOLOGY A hierarchical population topology, when integrated into...to hybrid parallel Multi-Objective Evolutionary Algorithms (pMOEA) (Cantu-Paz 2000; Veldhuizen , Zydallis et al. 2003); it uses a master slave

  6. Thermal buckling optimisation of composite plates using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Kamarian, S.; Shakeri, M.; Yas, M. H.

    2017-07-01

    Composite plates play a very important role in engineering applications, especially in the aerospace industry. The thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work was to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with results reported in previously published works using other algorithms, demonstrating the efficiency of FA in stacking sequence optimisation of laminated composite structures.
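A minimal continuous-domain sketch shows the mechanics of the firefly algorithm: each firefly moves towards brighter (lower-cost) ones with an attractiveness that decays with distance, plus a damped random step. The test function, bounds and parameter values are illustrative; the paper applies FA to a discrete stacking-sequence problem rather than this continuous one.

```python
import math, random

def firefly_minimise(f, dim=2, n=15, iters=60, beta0=1.0, gamma=1.0,
                     alpha0=0.2, seed=2):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in pop]
    best = min(fit)
    for t in range(iters):
        alpha = alpha0 * (0.97 ** t)          # damp the random walk over time
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:           # j is "brighter": attract i towards it
                    r2 = sum((pop[i][d] - pop[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        pop[i][d] += (beta * (pop[j][d] - pop[i][d])
                                      + alpha * (rng.random() - 0.5))
                    fit[i] = f(pop[i])
                    best = min(best, fit[i])
    return best
```

The distance-dependent attractiveness β₀·exp(-γr²) is what distinguishes FA from a plain particle swarm: distant fireflies exert almost no pull, so the population can explore several basins at once.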

  7. Systemic solutions for multi-benefit water and environmental management.

    PubMed

    Everard, Mark; McInnes, Robert

    2013-09-01

    The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques through methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. 
This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Acoustic Resonator Optimisation for Airborne Particle Manipulation

    NASA Astrophysics Data System (ADS)

    Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian

    Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel-plate acoustic resonator system has been investigated for the manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. Obtaining an optimised resonator design requires careful consideration of the effects of layer thickness and material properties. Furthermore, the frequency-dependent acoustic attenuation is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties and sizes, down to as small as 14.8 μm.

  9. Roadmap to the multidisciplinary design analysis and optimisation of wind energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Moreno, S. Sanchez; Zaaijer, M. B.; Bottasso, C. L.

    Here, a research agenda is described to further encourage the application of Multidisciplinary Design Analysis and Optimisation (MDAO) methodologies to wind energy systems. As a group of researchers closely collaborating within the International Energy Agency (IEA) Wind Task 37 for Wind Energy Systems Engineering: Integrated Research, Design and Development, we have identified challenges that will be encountered by users building an MDAO framework. This roadmap comprises 17 research questions and activities recognised to belong to three research directions: model fidelity, system scope and workflow architecture. It is foreseen that sensible answers to all these questions will make it easier to apply MDAO in the wind energy domain. Beyond the agenda, this work also promotes the use of systems engineering to design, analyse and optimise wind turbines and wind farms, to complement existing compartmentalised research and design paradigms.

  10. Roadmap to the multidisciplinary design analysis and optimisation of wind energy systems

    DOE PAGES

    Perez-Moreno, S. Sanchez; Zaaijer, M. B.; Bottasso, C. L.; ...

    2016-10-03

    Here, a research agenda is described to further encourage the application of Multidisciplinary Design Analysis and Optimisation (MDAO) methodologies to wind energy systems. As a group of researchers closely collaborating within the International Energy Agency (IEA) Wind Task 37 for Wind Energy Systems Engineering: Integrated Research, Design and Development, we have identified challenges that will be encountered by users building an MDAO framework. This roadmap comprises 17 research questions and activities recognised to belong to three research directions: model fidelity, system scope and workflow architecture. It is foreseen that sensible answers to all these questions will make it easier to apply MDAO in the wind energy domain. Beyond the agenda, this work also promotes the use of systems engineering to design, analyse and optimise wind turbines and wind farms, to complement existing compartmentalised research and design paradigms.

  11. Integration of PGD-virtual charts into an engineering design process

    NASA Astrophysics Data System (ADS)

    Courard, Amaury; Néron, David; Ladevèze, Pierre; Ballere, Ludovic

    2016-04-01

    This article deals with the efficient construction of approximations of fields and quantities of interest used in the geometric optimisation of complex shapes encountered in engineering structures. The strategy developed herein is based on the construction of virtual charts that, once computed offline, allow the structure to be optimised at negligible online CPU cost. These virtual charts can be used as a powerful numerical decision-support tool during the design of industrial structures. They are built using the proper generalized decomposition (PGD), which offers a very convenient framework for solving parametrised problems. In this paper, particular attention has been paid to the integration of the procedure into a genuine engineering design process. In particular, a dedicated methodology is proposed to interface the PGD approach with commercial software.

  12. Optimising the production of succinate and lactate in Escherichia coli using a hybrid of artificial bee colony algorithm and minimisation of metabolic adjustment.

    PubMed

    Tang, Phooi Wah; Choon, Yee Wen; Mohamad, Mohd Saberi; Deris, Safaai; Napis, Suhaimi

    2015-03-01

    Metabolic engineering is a research field that focuses on the design of models for metabolism and uses computational procedures to suggest genetic manipulations. It aims to improve the yield of particular chemical or biochemical products. Several traditional metabolic engineering methods are commonly used to increase the production of a desired target, but the products always remain far below their theoretical maximums. Using numerical optimisation algorithms to identify gene knockouts may stall at a local minimum of a multivariable function. This paper proposes a hybrid of the artificial bee colony (ABC) algorithm and the minimisation of metabolic adjustment (MOMA) to predict an optimal set of solutions in order to optimise the production rate of succinate and lactate. The dataset used in this work was the iJO1366 Escherichia coli metabolic network. The experimental results include the production rate, growth rate and a list of knockout genes. In a comparative analysis, ABCMOMA produced better results than previous works, showing potential for solving genetic engineering problems. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  13. Global Topology Optimisation

    DTIC Science & Technology

    2016-10-31

    statistical physics. Sec. IV includes several examples of the application of the stochastic method, including matching of a shape to a fixed design, and...an important part of any future application of this method. Second, re-initialization of the level set can lead to small but significant movements of...of engineering design problems [6, 17]. However, many of the relevant applications involve non-convex optimisation problems with multiple locally

  14. A simulation and optimisation procedure to model daily suppression resource transfers during a fire season in Colorado

    Treesearch

    Yu Wei; Erin J. Belval; Matthew P. Thompson; Dave E. Calkin; Crystal S. Stonesifer

    2016-01-01

    Sharing fire engines and crews between fire suppression dispatch zones may help improve the utilisation of fire suppression resources. Using the Resource Ordering and Status System, the Predictive Services’ Fire Potential Outlooks and the Rocky Mountain Region Preparedness Levels from 2010 to 2013, we tested a simulation and optimisation procedure to transfer crews and...

  15. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    NASA Astrophysics Data System (ADS)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) are both difficult and complicated, and balancing radiating and scattering performance while reducing RCS remains an unresolved problem. Therefore, this paper develops a coupled structure and scattering array-factor model of an APAA based on the phase errors of the radiated elements generated by structural distortion and installation error of the array. To obtain the optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiated elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of APAAs.

  16. Parallel auto-correlative statistics with VTK.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
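The parallel design described here rests on decomposable aggregates: each process computes partial sums over its share of the (x_i, x_{i+lag}) pairs, and a single reduction yields the lag-k autocorrelation. The Python sketch below mimics that two-stage pattern sequentially (VTK's actual engines are C++ and distribute the chunks over parallel processes); the chunking scheme and function names are illustrative.

```python
import math

def pair_sums(x, lag, lo, hi):
    # Aggregates over pairs (x[i], x[i + lag]) for i in [lo, hi);
    # in a parallel run, each rank would execute one such call locally.
    s = {"n": 0, "sx": 0.0, "sy": 0.0, "sxx": 0.0, "syy": 0.0, "sxy": 0.0}
    for i in range(lo, hi):
        a, b = x[i], x[i + lag]
        s["n"] += 1; s["sx"] += a; s["sy"] += b
        s["sxx"] += a * a; s["syy"] += b * b; s["sxy"] += a * b
    return s

def combine(parts):
    # Global reduction: all the aggregates are simple sums, so merging
    # partial results is just element-wise addition.
    out = {k: 0 for k in parts[0]}
    for p in parts:
        for k, v in p.items():
            out[k] += v
    return out

def autocorr(x, lag, n_ranks=4):
    # Pearson correlation of the series with its lag-shifted copy.
    m = len(x) - lag
    bounds = [round(m * r / n_ranks) for r in range(n_ranks + 1)]
    s = combine([pair_sums(x, lag, bounds[r], bounds[r + 1])
                 for r in range(n_ranks)])
    n = s["n"]
    cov = s["sxy"] - s["sx"] * s["sy"] / n
    vx = s["sxx"] - s["sx"] ** 2 / n
    vy = s["syy"] - s["sy"] ** 2 / n
    return cov / math.sqrt(vx * vy)
```

Because every aggregate is additive, the reduction is associative and the result is independent of how the pairs are split across ranks, which is the property that makes such engines scale.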

  17. Laser surface texturing of cast iron steel: dramatic edge burr reduction and high speed process optimisation for industrial production using DPSS picosecond lasers

    NASA Astrophysics Data System (ADS)

    Bruneel, David; Kearsley, Andrew; Karnakis, Dimitris

    2015-07-01

    In this work we present picosecond DPSS laser surface texturing optimisation of automotive-grade cast iron steel. This application attracts great interest, particularly in the automotive industry, for reducing friction between moving piston parts in car engines in order to decrease fuel consumption. This is accomplished by partially covering the inner surface of a piston liner with shallow microgrooves, and is currently a production process adopting much longer-pulse (microsecond) DPSS lasers. Lubricated interface conditions of moving parts require the laser process to produce a very strictly controlled surface topography around the laser-formed grooves, whose edge burr height must be lower than 100 nm. To achieve such a strict tolerance, laser machining of cast iron steel was investigated using an infrared DPSS picosecond laser (10 ps duration) with an output power of 16 W and a repetition rate of 200 kHz. The ultrashort laser is believed to provide much better thermal management of the etching process. All studies presented here were performed on flat samples in ambient air, but the process is transferable to cylindrical-geometry engine liners. We show that reducing the edge burr significantly below an acceptable limit for lubricated engine production is possible using such lasers, and, remarkably, that the process window lies at irradiated fluences much higher than the single-pulse ablation threshold. This detailed experimental work highlights the close relationship between the optimised laser irradiation conditions and process strategy and the final size of the undesirable edge burrs. The optimised process conditions are compatible with industrial production and show the potential for removing extra post-processing steps (honing, etc.) of cylinder liners on the manufacturing line, saving time and cost.

  18. Multi-objective optimisation and decision-making of space station logistics strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Yue-he; Luo, Ya-zhong

    2016-10-01

    Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.

  19. Implementation of the multi-channel monolith reactor in an optimisation procedure for heterogeneous oxidation catalysts based on genetic algorithms.

    PubMed

    Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter

    2007-01-01

    A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled out of 49 different metals and deposited on an Al2O3 support at up to nine loading levels. As an efficient tool for high-throughput screening, perfectly matched to the requirements of heterogeneous gas-phase catalysis, especially for applications technically run in honeycomb structures, the multi-channel monolith reactor is implemented to evaluate the catalyst performances. Out of a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted, promising to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.

  20. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelising at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach.
The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
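Event-level parallelism, the first option mentioned in the abstract, is straightforward precisely because events are independent units of work. The sketch below uses a Python thread pool over a toy per-event payload purely to show the scheduling shape; the payload function and event structure are hypothetical, and Geant-V itself is C++ and goes further, scheduling vectors of tracks rather than whole events.

```python
from concurrent.futures import ThreadPoolExecutor

def process_event(event):
    # Stand-in for per-event simulation work (hypothetical toy payload).
    return sum(p * p for p in event["tracks"])

def run_events(events, n_workers=4):
    # Event-level parallelism: whole events are the unit of scheduling, so
    # workers share no mutable state. The cost, as the abstract notes, is
    # that memory grows with the number of events in flight at once.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(process_event, events))
```

`pool.map` preserves input order, so results line up with the submitted events regardless of which worker finished first.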

  1. Genome and epigenome engineering CRISPR toolkit for in vivo modulation of cis-regulatory interactions and gene expression in the chicken embryo.

    PubMed

    Williams, Ruth M; Senanayake, Upeka; Artibani, Mara; Taylor, Gunes; Wells, Daniel; Ahmed, Ahmed Ashour; Sauka-Spengler, Tatjana

    2018-02-23

    CRISPR/Cas9 genome engineering has revolutionised all aspects of biological research, with epigenome engineering transforming gene regulation studies. Here, we present an optimised, adaptable toolkit enabling genome and epigenome engineering in the chicken embryo, and demonstrate its utility by probing gene regulatory interactions mediated by neural crest enhancers. First, we optimise novel efficient guide-RNA mini expression vectors utilising chick U6 promoters, provide a strategy for rapid somatic gene knockout and establish a protocol for evaluation of mutational penetrance by targeted next-generation sequencing. We show that CRISPR/Cas9-mediated disruption of transcription factors causes a reduction in their cognate enhancer-driven reporter activity. Next, we assess endogenous enhancer function using both enhancer deletion and nuclease-deficient Cas9 (dCas9) effector fusions to modulate enhancer chromatin landscape, thus providing the first report of epigenome engineering in a developing embryo. Finally, we use the synergistic activation mediator (SAM) system to activate an endogenous target promoter. The novel genome and epigenome engineering toolkit developed here enables manipulation of endogenous gene expression and enhancer activity in chicken embryos, facilitating high-resolution analysis of gene regulatory interactions in vivo. © 2018. Published by The Company of Biologists Ltd.

  2. Scalable descriptive and correlative statistics with Titan.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David C.; Pebay, Philippe Pierre

    This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; then, this theoretical property is verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.

  3. Optimising a modified free-space permittivity characterisation method for civil engineering applications

    NASA Astrophysics Data System (ADS)

    Muller, Wayne; Scheuermann, Alexander

    2016-04-01

    Measuring the electrical permittivity of civil engineering materials is important for a range of ground penetrating radar (GPR) and pavement moisture measurement applications. Compacted unbound granular (UBG) pavement materials present a number of preparation and measurement challenges using conventional characterisation techniques. As an alternative to these methods, a modified free-space (MFS) characterisation approach has previously been investigated. This paper describes recent work to optimise and validate the MFS technique. The research included finite difference time domain (FDTD) modelling to better understand the nature of wave propagation within material samples and the test apparatus. This research led to improvements in the test approach and optimisation of sample sizes. The influence of antenna spacing and sample thickness on the permittivity results was investigated by a series of experiments separating antennas and measuring samples of nylon and water. Permittivity measurements of samples of nylon and water approximately 100 mm and 170 mm thick were also compared, showing consistent results. These measurements also agreed well with surface probe measurements of the nylon sample and literature values for water. The results indicate permittivity estimates of acceptable accuracy can be obtained using the proposed approach, apparatus and sample sizes.

  4. A novel artificial immune clonal selection classification and rule mining with swarm learning model

    NASA Astrophysics Data System (ADS)

    Al-Sheshtawi, Khaled A.; Abdul-Kader, Hatem M.; Elsisi, Ashraf B.

    2013-06-01

    Metaheuristic optimisation algorithms have become a popular choice for solving complex problems. By integrating the Artificial Immune clonal selection algorithm (CSA) and the particle swarm optimisation (PSO) algorithm, a novel hybrid Clonal Selection Classification and Rule Mining with Swarm Learning Algorithm (CS2) is proposed. The main goal of the approach is to exploit and explore the parallel computation merit of clonal selection and the speed and self-organisation merits of particle swarm by sharing information between the clonal selection population and the particle swarm. Hence, we employed the advantages of PSO to improve the mutation mechanism of the artificial immune CSA and to mine classification rules within datasets. Consequently, our proposed algorithm required less training time and fewer memory cells in comparison to other AIS algorithms. In this paper, classification rule mining has been modelled as a multiobjective optimisation problem with predictive accuracy. The multiobjective approach is intended to allow the PSO algorithm to return an approximation to the accuracy and comprehensibility border, containing solutions that are spread across the border. We compared the classification accuracy of our proposed algorithm CS2 with five commonly used CSAs, namely AIRS1, AIRS2, AIRS-Parallel, CLONALG and CSCA, using eight benchmark datasets. We also compared it with five other methods, namely Naïve Bayes, SVM, MLP, CART and RFB. The results show that the proposed algorithm is comparable to the 10 studied algorithms. Hybridising CSA and PSO thus allows each to contribute its respective merits, compensates for the other's weaknesses, and improves both search quality and speed.

  5. On the dynamic rounding-off in analogue and RF optimal circuit sizing

    NASA Astrophysics Data System (ADS)

    Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena

    2014-04-01

    Frequently used approaches to solving discrete multivariable optimisation problems compute solutions using a continuous optimisation technique and then, using heuristics, round the variables off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily since they have to obey technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible-solution frontier) or to degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome this situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique and show, via some analogue and RF examples, the necessity of implementing such a routine in continuous optimisation algorithms.
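The pitfall described above, naive nearest-value rounding degrading or invalidating a continuous optimum, can be illustrated with a minimal sketch: instead of snapping each variable independently, score every floor/ceil combination of the discrete neighbours and keep the best. This is a simplified stand-in for the rounding-off idea, not the paper's PSO-based DRO; all names and values below are illustrative.

```python
import itertools

def snap_to_grid(x_cont, grids, objective):
    # For each variable, take its nearest discrete values below and above
    # (grids are assumed sorted ascending), then evaluate every combination
    # rather than rounding each variable to its nearest value in isolation.
    def neighbours(v, grid):
        below = max((g for g in grid if g <= v), default=grid[0])
        above = min((g for g in grid if g >= v), default=grid[-1])
        return sorted({below, above})
    candidates = itertools.product(*(neighbours(v, g)
                                     for v, g in zip(x_cont, grids)))
    return min(candidates, key=objective)
```

With n variables this evaluates at most 2ⁿ candidates, which is cheap for small circuit-sizing problems; the dynamic approaches discussed in the paper avoid even that enumeration by rounding inside the optimisation loop.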

  6. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

    In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference vectors using the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated as a series of single-objective optimisation sub-problems, each corresponding to a decomposed preference vector. Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
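
    The boundary intersection idea can be made concrete with the standard penalty-based boundary intersection (PBI) scalarisation used by MOEA/D; the numbers below are hand-picked for illustration, not taken from the paper:

```python
import math

def pbi(fx, z, lam, theta=5.0):
    """Penalty-based boundary intersection scalarisation g(x | lam, z):
    d1 projects F(x)-z onto the preference direction lam, while d2
    penalises deviation away from that direction."""
    norm = math.sqrt(sum(l * l for l in lam))
    diff = [f - zi for f, zi in zip(fx, z)]
    d1 = abs(sum(d * l for d, l in zip(diff, lam))) / norm
    d2 = math.sqrt(sum((d - d1 * l / norm) ** 2
                       for d, l in zip(diff, lam)))
    return d1 + theta * d2

# A point lying on the preference direction incurs no penalty ...
on_axis = pbi([2.0, 0.0], [0.0, 0.0], [1.0, 0.0])   # d1=2, d2=0 -> 2.0
# ... while a point off the direction is penalised by theta.
off_axis = pbi([1.0, 1.0], [0.0, 0.0], [1.0, 0.0])  # d1=1, d2=1 -> 6.0
```

    Minimising `pbi` for each decomposed preference vector yields one single-objective sub-problem per reference direction, which is exactly the reformulation the abstract describes.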

  7. Operational modes, health, and status monitoring

    NASA Astrophysics Data System (ADS)

    Taljaard, Corrie

    2016-08-01

    System Engineers must fully understand the system, its support system and its operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design are intended to influence the design and to develop an optimum maintenance plan for a system. The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.

  8. Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang

    2016-01-01

    The stabilisation problem for a cluster with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Because controller design under the multiple constraints of the multirate switching model is difficult, the model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that the closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller or a single-rate parallel distributed compensation under the same conditions.

  9. PWHATSHAP: efficient haplotyping for future generation sequencing.

    PubMed

    Bracciali, Andrea; Aldinucci, Marco; Patterson, Murray; Marschall, Tobias; Pisanti, Nadia; Merelli, Ivan; Torquati, Massimo

    2016-09-22

    Haplotype phasing is an important problem in the analysis of genomics information. Given a set of DNA fragments of an individual, it consists of determining which one of the possible alleles (alternative forms of a gene) each fragment comes from. Haplotype information is relevant to gene regulation, epigenetics, genome-wide association studies, evolutionary and population studies, and the study of mutations. Haplotyping is currently addressed as an optimisation problem aiming at solutions that minimise, for instance, error correction costs, where costs are a measure of the confidence in the accuracy of the information acquired from DNA sequencing. Solutions typically have exponential computational complexity. WHATSHAP is a recent optimal approach which moves the computational complexity from DNA fragment length to fragment overlap, i.e., coverage, and is hence of particular interest given current trends in sequencing technology, which are producing longer fragments. Given the potential relevance of efficient haplotyping in several analysis pipelines, we have designed and engineered PWHATSHAP, a parallel, high-performance version of WHATSHAP. PWHATSHAP is embedded in a toolkit developed in Python and supports genomics datasets in standard file formats. Building on WHATSHAP, PWHATSHAP exhibits the same complexity, exploring a number of possible solutions which is exponential in the coverage of the dataset. The parallel implementation on multi-core architectures allows for a relevant reduction of the execution time for haplotyping, while the provided results enjoy the same high accuracy as those of WHATSHAP, which increases with coverage. Due to its structure and its management of large datasets, the parallelisation of WHATSHAP posed demanding technical challenges, which have been addressed by exploiting a high-level parallel programming framework. The result, PWHATSHAP, is a freely available toolkit that improves the efficiency of the analysis of genomics information.

  10. Pure random search for ambient sensor distribution optimisation in a smart home environment.

    PubMed

    Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming

    2011-01-01

    Smart homes are living spaces facilitated with technology that allow individuals to remain in their own homes for longer, rather than be institutionalised. Sensors are the fundamental physical layer within any smart home, as the data they generate are used to inform decision support systems, facilitating appropriate actuator actions. The positioning of sensors is therefore a fundamental characteristic of a smart home. Contemporary smart home sensor distribution is aligned to either a) a total coverage approach or b) a human assessment approach. These methods for sensor arrangement are not data-driven strategies; they are unempirical and frequently irrational. This study hypothesised that sensor deployment directed by an optimisation method that uses inhabitants' spatial frequency data as the search space would produce better sensor distributions than the current method of sensor deployment by engineers. Seven human engineers were tasked to create sensor distributions based on perceived utility for 9 deployment scenarios. A Pure Random Search (PRS) algorithm was then tasked to create matched sensor distributions. The PRS method produced superior distributions in 98.4% of test cases (n=64) against human-instructed deployments when the engineers had no access to the spatial frequency data, and in 92.0% of test cases (n=64) when engineers had full access to these data. These results thus confirmed the hypothesis.
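
    A minimal sketch of the PRS idea on an invented spatial-frequency grid (the grid, sensing radius and sensor count are hypothetical, not the study's data): sample random placements and keep the one covering the most inhabitant activity:

```python
import random

random.seed(7)

# Hypothetical inhabitant spatial-frequency grid (visit counts per cell).
FREQ = [
    [0, 1, 0, 0],
    [2, 9, 1, 0],
    [0, 8, 7, 1],
    [0, 0, 1, 3],
]
ROWS, COLS = 4, 4
RADIUS, SENSORS, SAMPLES = 1, 2, 2000

def coverage(placement):
    """Total visit frequency within RADIUS (Chebyshev) of any sensor."""
    covered = set()
    for (r, c) in placement:
        for dr in range(-RADIUS, RADIUS + 1):
            for dc in range(-RADIUS, RADIUS + 1):
                if 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
                    covered.add((r + dr, c + dc))
    return sum(FREQ[r][c] for (r, c) in covered)

def pure_random_search():
    # PRS: repeatedly sample uniformly at random, keep the incumbent best.
    cells = [(r, c) for r in range(ROWS) for c in range(COLS)]
    best, best_cov = None, -1
    for _ in range(SAMPLES):
        cand = random.sample(cells, SENSORS)
        cov = coverage(cand)
        if cov > best_cov:
            best, best_cov = cand, cov
    return best, best_cov

placement, cov = pure_random_search()
```

    On this tiny grid the total activity is 33 and two sensors suffice to cover it all, so PRS finds a full-coverage placement; the study's search space is the inhabitants' measured spatial frequency data rather than a 4x4 toy.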

  11. A general method to eliminate laboratory induced recombinants during massive, parallel sequencing of cDNA library.

    PubMed

    Waugh, Caryll; Cromer, Deborah; Grimm, Andrew; Chopra, Abha; Mallal, Simon; Davenport, Miles; Mak, Johnson

    2015-04-09

    Massive, parallel sequencing is a potent tool for dissecting the regulation of biological processes by revealing the dynamics of the cellular RNA profile under different conditions. Similarly, massive, parallel sequencing can be used to reveal the complexity of the viral quasispecies that are often found in the RNA virus infected host. However, the production of cDNA libraries for next-generation sequencing (NGS) necessitates the reverse transcription of RNA into cDNA and the amplification of the cDNA template using PCR, which may introduce artefacts in the form of phantom nucleic acid species that can bias the composition and interpretation of the original RNA profiles. Using HIV as a model, we have characterised the major sources of error during the conversion of viral RNA to cDNA, namely excess RNA template and the RNaseH activity of the polymerase enzyme, reverse transcriptase. In addition, we have analysed the effect of PCR cycle number on the detection of recombinants and assessed the contribution of transfection of highly similar plasmid DNA to the formation of recombinant species during the production of our control viruses. We have identified RNA template concentration, the RNaseH activity of reverse transcriptase, and PCR conditions as key parameters that must be carefully optimised to minimise chimeric artefacts. Using our optimised RT-PCR conditions, in combination with our modified PCR amplification procedure, we have developed a reliable technique for the accurate determination of RNA species using NGS technology.

  12. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    NASA Astrophysics Data System (ADS)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job-rejection weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into a single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulation experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
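
    As a small illustration of the P1 formulation, the scalarised objective for a candidate solution — a batch assignment plus a rejection set — might be evaluated as follows (the job data and the weight `alpha` are invented, not from the paper):

```python
def evaluate(schedule, rejected, jobs, alpha=0.5):
    """P1-style scalarised objective for parallel batch machines:
    alpha * makespan + (1 - alpha) * total rejection penalty.
    `schedule` maps machine -> list of batches (lists of job ids);
    a batch's processing time is that of the longest job it contains."""
    makespan = max(
        (sum(max(jobs[j]["time"] for j in batch) for batch in batches)
         for batches in schedule.values()),
        default=0,
    )
    rejection = sum(jobs[j]["penalty"] for j in rejected)
    return alpha * makespan + (1 - alpha) * rejection

jobs = {
    1: {"time": 4, "penalty": 6},
    2: {"time": 3, "penalty": 2},
    3: {"time": 5, "penalty": 9},
    4: {"time": 2, "penalty": 1},
}
# Machine A runs one batch {1, 2}; machine B runs one batch {3}; job 4 rejected.
sched = {"A": [[1, 2]], "B": [[3]]}
value = evaluate(sched, [4], jobs)   # 0.5*5 + 0.5*1 = 3.0
```

    LACO searches over such candidate solutions guided by pheromone trails; P2 instead keeps (makespan, rejection cost) as a pair and collects the non-dominated set.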

  13. Method and apparatus of parallel computing with simultaneously operating stream prefetching and list prefetching engines

    DOEpatents

    Boyle, Peter A.; Christ, Norman H.; Gara, Alan; Mawhinney, Robert D.; Ohmacht, Martin; Sugavanam, Krishnan

    2012-12-11

    A prefetch system improves the performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes; a computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine, and it operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine. The prefetch system operates the stream prefetch engine and the list prefetch engine to prefetch data that will be needed in subsequent clock cycles in the processor, in response to the passed command.

  14. Optimising the Design of Land Force C2 Architectures

    DTIC Science & Technology

    2002-06-13

    Kirby, Brendan (DSTO, email: brendan.kirby@dsto.defence.gov.au) and Cropley, David (Systems Engineering & Evaluation Centre, University of South Australia, Mawson Lakes Campus, Mawson Lakes, SA 5095, Australia).

  15. Airfoil Shape Optimization based on Surrogate Model

    NASA Astrophysics Data System (ADS)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure that the design objectives are met subject to various constraints. In most cases, the computational resources and time required per simulation are large. In certain cases, such as sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, this imposes a prohibitive burden on designers. Nowadays approximation models, otherwise called surrogate models (SMs), are widely employed to reduce the computational resources and time needed to analyse various engineering systems. Various approaches such as Kriging, neural networks, polynomials and Gaussian processes are used to construct these approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms, which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
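
    As an illustrative stand-in for the Kriging/variogram-selection workflow, the sketch below fits a Gaussian RBF interpolant (not true Ordinary Kriging) to a toy "simulation" and uses leave-one-out cross-validation — the n=k limit of k-fold — to pick the kernel width, analogous to choosing among variogram models; every function and constant here is an assumption for illustration:

```python
import math

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf(xs, ys, width):
    # Interpolate exactly through the sampled "simulation" results.
    K = [[math.exp(-((xi - xj) / width) ** 2) for xj in xs] for xi in xs]
    w = solve(K, ys)
    return lambda x: sum(wi * math.exp(-((x - xi) / width) ** 2)
                         for wi, xi in zip(w, xs))

def loo_error(xs, ys, width):
    # Leave-one-out cross-validation error for a given kernel width.
    err = 0.0
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        err += (fit_rbf(tx, ty, width)(xs[i]) - ys[i]) ** 2
    return err

# Toy stand-in for an expensive flow simulation.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]
best_w = min([0.3, 0.6, 1.0, 2.0], key=lambda w: loo_error(xs, ys, w))
sm = fit_rbf(xs, ys, best_w)
```

    Once built, the cheap surrogate `sm` stands in for the simulation inside the optimisation loop, exactly the coupling the abstract describes.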

  16. Performance assessment and optimisation of a large information system by combined customer relationship management and resilience engineering: a mathematical programming approach

    NASA Astrophysics Data System (ADS)

    Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.

    2017-10-01

    ISs and ITs play a critical role in large, complex gas corporations. Many factors, such as human, organisational and environmental factors, affect ISs in an organisation; investigating IS success is therefore a complex problem. Moreover, because of the competitive business environment and the high amount of information flow in organisations, new issues like resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS will provide sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. The enhancement of performance can help ISs to perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis, selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study of performance assessment and optimisation of a large IS by combined RE and CRM.

  17. Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels

    NASA Astrophysics Data System (ADS)

    Vandersickel, A.; Wright, Y. M.; Boulouchos, K.

    2013-12-01

    Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines. In particular, for full boiling range fuels exhibiting two-stage ignition, a tremendous demand exists in the engine development community. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model constant optimisation procedure, allowing the model to capture not only ignition delays, but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions by means of various detailed n-heptane reaction mechanisms from the literature is presented, both for perfectly homogeneous and for non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing, e.g., up to 46.7% cyclo-alkanes, 20% naphthalenes or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases, as well as the successful parameterisation for each of the six fuels, indicates that the model could, in principle, be applied to any hydrocarbon fuel, provided suitable adjustments to the model parameters are carried out. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real fuel kinetics in full-scale HCCI engine simulations.

  18. Support for non-locking parallel reception of packets belonging to a single memory reception FIFO

    DOEpatents

    Chen, Dong [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Salapura, Valentina [Yorktown Heights, NY; Senger, Robert M [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugawara, Yutaka [Yorktown Heights, NY

    2011-01-27

    A method and apparatus for distributed parallel messaging in a parallel computing system. A plurality of DMA engine units are configured in a multiprocessor system to operate in parallel, one DMA engine unit for transferring a current packet received at a network reception queue to a memory location in a memory FIFO (rmFIFO) region of a memory. A control unit implements logic to determine whether any prior received packet destined for that rmFIFO is still in a process of being stored in the associated memory by another DMA engine unit of the plurality, and prevent the one DMA engine unit from indicating completion of storing the current received packet in the reception memory FIFO (rmFIFO) until all prior received packets destined for that rmFIFO are completely stored by the other DMA engine units. Thus, there is provided non-locking support so that multiple packets destined for a single rmFIFO are transferred and stored in parallel to predetermined locations in a memory.

  19. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    NASA Astrophysics Data System (ADS)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields, from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations are considered, whereby optimal fluid mixing, in the form of vorticity maximisation, is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving the desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and to produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.

  20. Photonic simulation of entanglement growth and engineering after a spin chain quench.

    PubMed

    Pitsios, Ioannis; Banchi, Leonardo; Rab, Adil S; Bentivegna, Marco; Caprara, Debora; Crespi, Andrea; Spagnolo, Nicolò; Bose, Sougato; Mataloni, Paolo; Osellame, Roberto; Sciarrino, Fabio

    2017-11-17

    The time evolution of quantum many-body systems is one of the most important processes for benchmarking quantum simulators. The most curious feature of such dynamics is the growth of quantum entanglement to an amount proportional to the system size (volume law) even when interactions are local. This phenomenon has great ramifications for fundamental aspects, while its optimisation clearly has an impact on technology (e.g., for on-chip quantum networking). Here we use an integrated photonic chip with a circuit-based approach to simulate the dynamics of a spin chain and maximise the entanglement generation. The resulting entanglement is certified by constructing a second chip, which measures the entanglement between multiple distant pairs of simulated spins, as well as the block entanglement entropy. This is the first photonic simulation and optimisation of the extensive growth of entanglement in a spin chain, and opens up the use of photonic circuits for optimising quantum devices.

  1. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    NASA Astrophysics Data System (ADS)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited to problems with linear dynamic equations, and therefore fits the case of spacecraft flying in close relative motion perfectly. If the solution of the optimisation is approximated as a polynomial in the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the technique are explained and some applications related to spacecraft flying in close relative motion are shown.

  2. A domain specific language for performance portable molecular dynamics algorithms

    NASA Astrophysics Data System (ADS)

    Saunders, William Robert; Grant, James; Müller, Eike Hermann

    2018-03-01

    Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
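
    The "separation of concerns" idea can be caricatured in a few lines: the domain scientist supplies only a per-pair kernel, and the framework owns the loop that a real backend would translate to parallel (OpenMP/MPI/GPU) code. The function names and the Lennard-Jones example below are illustrative, not the paper's actual DSL:

```python
import math

def pairwise_loop(positions, kernel, cutoff=None):
    """Framework-side executor: applies a user-supplied per-pair kernel to
    every unordered particle pair. A real code-generation backend would
    emit optimised parallel code for this loop instead of running it
    serially in Python."""
    acc = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            if cutoff is None or r < cutoff:
                acc += kernel(r)
    return acc

# User-side "domain" code: a Lennard-Jones pair potential, written with no
# knowledge of how or where the loop is executed.
def lj(r, eps=1.0, sigma=1.0):
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

pos = [(0.0, 0.0, 0.0), (1.12, 0.0, 0.0), (0.0, 1.12, 0.0)]
energy = pairwise_loop(pos, lj, cutoff=2.5)
```

    Structure-analysis kernels (e.g. counting neighbours to classify the local crystalline environment) slot into the same abstraction by swapping the kernel, which is the ease-of-expression point the abstract makes.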

  3. Distributed parallel messaging for multiprocessor systems

    DOEpatents

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple packet reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling the writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  4. The Use of Mathematical Modelling for Improving the Tissue Engineering of Organs and Stem Cell Therapy.

    PubMed

    Lemon, Greg; Sjoqvist, Sebastian; Lim, Mei Ling; Feliu, Neus; Firsova, Alexandra B; Amin, Risul; Gustafsson, Ylva; Stuewer, Annika; Gubareva, Elena; Haag, Johannes; Jungebluth, Philipp; Macchiarini, Paolo

    2016-01-01

    Regenerative medicine is a multidisciplinary field where continued progress relies on the incorporation of a diverse set of technologies from a wide range of disciplines within medicine, science and engineering. This review describes how one such technique, mathematical modelling, can be utilised to improve the tissue engineering of organs and stem cell therapy. Several case studies, taken from research carried out by our group, ACTREM, demonstrate the utility of mechanistic mathematical models to help aid the design and optimisation of protocols in regenerative medicine.

  5. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  6. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in technicians' workload research is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability and complements the factory information obtained. The information used emerged from technicians' productivity and earned values within a multi-objective modelling approach. Since technicians are expected to carry out both routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and the experiential knowledge of technicians on workload management was considered. These workloads were combined with the maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.
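
    A toy goal-programming sketch of the idea — all goals, weights, response curves and the workload constraint below are invented for illustration: workloads act as constraints, and weighted underachievement deviations from the reliability, productivity and earned-value goals are minimised:

```python
from itertools import product

# Hypothetical goals and weights (reliability weighted heavily).
GOALS = {"productivity": 40.0, "earned_value": 30.0, "reliability": 0.9}
WEIGHTS = {"productivity": 1.0, "earned_value": 1.0, "reliability": 50.0}
WORKLOAD = 14   # routine + stochastic maintenance hours that must be covered

def outcomes(hours):
    """Invented response model for a two-technician team: linear output,
    with fatigue degrading reliability beyond 8 hours."""
    prod = sum(3.0 * h for h in hours)
    ev = sum(2.5 * h for h in hours)
    rel = min(1.0 - 0.01 * max(0, h - 8) for h in hours)
    return {"productivity": prod, "earned_value": ev, "reliability": rel}

def goal_deviation(hours):
    """Weighted underachievement deviations (the d- variables of goal
    programming); overachieving a goal is not penalised here."""
    out = outcomes(hours)
    return sum(WEIGHTS[k] * max(0.0, GOALS[k] - out[k]) for k in GOALS)

# Exhaustive search over feasible discrete workload allocations.
best = min(
    (h for h in product(range(4, 13), repeat=2) if sum(h) >= WORKLOAD),
    key=goal_deviation,
)
```

    A real model of this kind would solve the goal program with a mathematical-programming solver rather than enumeration; the sketch only shows how goals, deviations and workload constraints fit together.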

  7. The Mercury System: Embedding Computation into Disk Drives

    DTIC Science & Technology

    2004-08-20

    enabling technologies to build extremely fast data search engines. We do this by moving the search closer to the data, and performing it in hardware... engine searches in parallel across a disk or disk surface. 2. System Parallelism: Searching is off-loaded to search engines and the main processor can

  8. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  9. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing these data involves multiple steps requiring diverse software, using different algorithms and data formats. The speed and performance of mass spectral search engines are continuously improving, although not necessarily at the pace needed to face the challenges posed by the acquired big data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing the identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing the resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power, and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database search as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
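
    The decompose/search/recompose pattern can be sketched independently of mzXML/pepXML — the "search engine" below is a toy dictionary lookup standing in for X!Tandem or SpectraST. The key property the abstract describes is preserved: the engine is untouched and parallelism is wrapped around it:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for a spectral database (hypothetical).
DATABASE = {"PEPTIDE": 7, "PROTEIN": 7, "SEQUENCE": 8}

def search_engine(spectra):
    """The unmodified 'engine': identifies a chunk of spectra serially."""
    return [(s, DATABASE.get(s, None)) for s in spectra]

def decompose(spectra, n_chunks):
    # Split the input into roughly equal chunks (mzXML splitting analogue).
    return [spectra[i::n_chunks] for i in range(n_chunks)]

def recompose(chunked_results):
    # Merge per-chunk results back into one canonical result set
    # (pepXML recomposition analogue).
    merged = []
    for res in chunked_results:
        merged.extend(res)
    return sorted(merged)

def parallel_search(spectra, workers=4):
    chunks = decompose(spectra, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(search_engine, chunks))
    return recompose(results)

spectra = ["PEPTIDE", "PROTEIN", "SEQUENCE", "UNKNOWN"] * 3
identified = parallel_search(spectra)
```

    Because the engine is wrapped rather than modified, swapping in a database-search or library-search engine changes only `search_engine`, which is why the approach generalises across tools.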

  10. A centre-free approach for resource allocation with lower bounds

    NASA Astrophysics Data System (ADS)

    Obando, Germán; Quijano, Nicanor; Rakoto-Ravalontsalama, Naly

    2017-09-01

    Since the complexity and scale of systems are continuously increasing, there is growing interest in developing distributed algorithms capable of handling information constraints, especially for solving optimisation and decision-making problems. In this paper, we propose a novel method for solving distributed resource allocation problems that include lower bound constraints. The optimisation process is carried out by a set of agents that use a communication network to coordinate their decisions. Convergence and optimality of the method are guaranteed under some mild assumptions related to the convexity of the problem and the connectivity of the underlying graph. Finally, we compare our approach with other techniques reported in the literature, and we present some engineering applications.
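
    A centre-free allocation scheme of the kind described can be sketched with pairwise exchanges along graph edges: symmetric transfers keep the total resource invariant, and truncating a transfer at the donor's lower bound keeps every iterate feasible. This is a simplified illustration assuming convex differentiable costs, not the authors' algorithm; all names are invented.

    ```python
    def centre_free_allocate(grad, x0, lb, edges, eta=0.1, iters=500):
        """Each agent i holds x[i]; along every communication edge (i, j)
        resource flows toward the agent with the smaller marginal cost
        grad[i](x[i]). Pairwise transfers keep sum(x) invariant, and a
        transfer is truncated so the donor never drops below its lower
        bound -- a simple way to honour constraints without a centre."""
        x = list(x0)
        for _ in range(iters):
            for i, j in edges:
                # positive delta: i's marginal cost is higher, so i gives to j
                delta = eta * (grad[i](x[i]) - grad[j](x[j]))
                if delta > 0:
                    delta = min(delta, x[i] - lb[i])
                else:
                    delta = max(delta, -(x[j] - lb[j]))
                x[i] -= delta
                x[j] += delta
        return x
    ```

    With quadratic costs the iteration equalises marginal costs wherever the lower bounds are inactive, which is the familiar optimality condition for resource allocation.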

  11. Dual compile strategy for parallel heterogeneous execution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Tyler Barratt; Perry, James Thomas

    2012-06-01

    The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel instead of both applications (and engines) performing the same work at the same time.

  12. Practical Application of Finite Element Analysis to Aircraft Structural Design

    DTIC Science & Technology

    1986-08-01

    Aéroélasticité et optimisation en avant-projet (Aeroelasticity and optimisation at the preliminary design stage), Petiau, C.; Boutin, D., Avions Marcel Dassault-Breguet Aviation, Saint... Interscience, 1981, p. 431-443, 13 refs. In English. The design complexity and size of convectively-cooled engine and airframe

  13. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    Three-dimensional modelling of the Earth is one of the most significant tools in many engineering projects and has numerous applications in Geospatial Information Systems (GIS), e.g. the creation of Digital Terrain Models (DTMs). DTMs are widely used in science, engineering, design and project administration. A key step in DTM creation is the interpolation of elevation to produce a continuous surface. Several interpolation methods exist, and their results depend on environmental conditions and input data. In this study, the usual interpolation methods, consisting of polynomials and the Inverse Distance Weighting (IDW) method, were optimised with Genetic Algorithms (GA). Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) were applied to the samples to optimise the interpolation methods and the production of Digital Elevation Models (DEMs), with the aim of evaluating the accuracy of the interpolation methods. Universal interpolation over an entire neighbouring region can be suggested for larger regions, which can be divided into smaller ones. The results obtained by applying GA and ANN individually were compared with the typical interpolation method for the creation of elevations. The results showed that AI methods have high potential for elevation interpolation: using artificial neural network algorithms for interpolation, and optimising the IDW method with GA, yielded highly precise elevation estimates.
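
    As a concrete (and much simplified) illustration of the combination described above, the sketch below implements basic IDW interpolation and a toy GA that tunes only the IDW power exponent against held-out elevations. The real study optimises richer parameter sets; every name and number here is hypothetical.

    ```python
    import random

    def idw(known, query, power=2.0):
        """Inverse Distance Weighting: estimate elevation at `query` (x, y)
        from `known` sample points [(x, y, z), ...]."""
        num = den = 0.0
        for x, y, z in known:
            d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
            if d2 == 0:
                return z  # query coincides with a sample point
            w = 1.0 / d2 ** (power / 2.0)
            num += w * z
            den += w
        return num / den

    def ga_tune_power(train, test, generations=30, pop_size=12, seed=0):
        """Toy GA: evolve the IDW power exponent to minimise RMS error on
        held-out points. A realistic encoding would also evolve the search
        radius, neighbour count, etc."""
        rng = random.Random(seed)

        def rmse(p):
            err = [(idw(train, (x, y), p) - z) ** 2 for x, y, z in test]
            return (sum(err) / len(err)) ** 0.5

        pop = [rng.uniform(0.5, 6.0) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=rmse)                      # best exponents first
            parents = pop[: pop_size // 2]
            children = [min(6.0, max(0.5, rng.choice(parents) + rng.gauss(0, 0.3)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=rmse)
    ```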

  14. Stably engineered nanobubbles and ultrasound - An effective platform for enhanced macromolecular delivery to representative cells of the retina.

    PubMed

    Thakur, Sachin S; Ward, Micheal S; Popat, Amirali; Flemming, Nicole B; Parat, Marie-Odile; Barnett, Nigel L; Parekh, Harendra S

    2017-01-01

    Herein we showcase the potential of ultrasound-responsive nanobubbles in enhancing macromolecular permeation through layers of the retina, ultimately leading to significant and direct intracellular delivery; this being effectively demonstrated across three relevant and distinct retinal cell lines. Stably engineered nanobubbles of a highly homogeneous and echogenic nature were fully characterised using dynamic light scattering, B-scan ultrasound and transmission electron microscopy (TEM). The nanobubbles appeared as spherical liposome-like structures under TEM, accompanied by an opaque luminal core and darkened corona around their periphery, with both features indicative of efficient gas entrapment and adsorption, respectively. A nanobubble +/- ultrasound sweeping study was conducted next, which determined the maximum tolerated dose for each cell line. Detection of underlying cellular stress was verified using the biomarker heat shock protein 70, measured before and after treatment with optimised ultrasound. Next, with safety to nanobubbles and optimised ultrasound demonstrated, each human or mouse-derived cell population was incubated with biotinylated rabbit-IgG in the presence and absence of ultrasound +/- nanobubbles. Intracellular delivery of antibody in each cell type was then quantified using Cy3-streptavidin. Nanobubbles and optimised ultrasound were found to be negligibly toxic across all cell lines tested. Macromolecular internalisation was achieved to significant, yet varying degrees in all three cell lines. The results of this study pave the way towards better understanding mechanisms underlying cellular responsiveness to ultrasound-triggered drug delivery in future ex vivo and in vivo models of the posterior eye.

  15. Stably engineered nanobubbles and ultrasound - An effective platform for enhanced macromolecular delivery to representative cells of the retina

    PubMed Central

    Thakur, Sachin S.; Ward, Micheal S.; Popat, Amirali; Flemming, Nicole B.; Parat, Marie-Odile; Barnett, Nigel L.

    2017-01-01

    Herein we showcase the potential of ultrasound-responsive nanobubbles in enhancing macromolecular permeation through layers of the retina, ultimately leading to significant and direct intracellular delivery; this being effectively demonstrated across three relevant and distinct retinal cell lines. Stably engineered nanobubbles of a highly homogeneous and echogenic nature were fully characterised using dynamic light scattering, B-scan ultrasound and transmission electron microscopy (TEM). The nanobubbles appeared as spherical liposome-like structures under TEM, accompanied by an opaque luminal core and darkened corona around their periphery, with both features indicative of efficient gas entrapment and adsorption, respectively. A nanobubble +/- ultrasound sweeping study was conducted next, which determined the maximum tolerated dose for each cell line. Detection of underlying cellular stress was verified using the biomarker heat shock protein 70, measured before and after treatment with optimised ultrasound. Next, with safety to nanobubbles and optimised ultrasound demonstrated, each human or mouse-derived cell population was incubated with biotinylated rabbit-IgG in the presence and absence of ultrasound +/- nanobubbles. Intracellular delivery of antibody in each cell type was then quantified using Cy3-streptavidin. Nanobubbles and optimised ultrasound were found to be negligibly toxic across all cell lines tested. Macromolecular internalisation was achieved to significant, yet varying degrees in all three cell lines. The results of this study pave the way towards better understanding mechanisms underlying cellular responsiveness to ultrasound-triggered drug delivery in future ex vivo and in vivo models of the posterior eye. PMID:28542473

  16. Efficient characterisation of large deviations using population dynamics

    NASA Astrophysics Data System (ADS)

    Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.

    2018-05-01

    We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
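
    The cloning algorithm itself is simple to sketch. The toy below estimates the scaled cumulant generating function psi(s) for the time average of a two-state Markov chain rather than the exclusion process; the reweight-then-resample loop at fixed population size is the essential structure, and every parameter choice is illustrative.

    ```python
    import math
    import random

    def cloning_scgf(s, n_clones=200, n_steps=300, seed=1):
        """Minimal population-dynamics (cloning) estimate of psi(s) for the
        time-averaged state of a two-state Markov chain (states 0/1,
        symmetric flip probability 0.3). Each clone is reweighted by
        exp(s * state) per step, then the population is resampled back to
        its fixed size; psi(s) is the mean log growth of total weight."""
        rng = random.Random(seed)
        clones = [0] * n_clones
        log_growth = 0.0
        for _ in range(n_steps):
            # evolve every clone one step
            clones = [c ^ 1 if rng.random() < 0.3 else c for c in clones]
            # reweight by the biased ensemble factor
            weights = [math.exp(s * c) for c in clones]
            mean_w = sum(weights) / n_clones
            log_growth += math.log(mean_w)
            # cloning step: resample proportionally to weight
            clones = rng.choices(clones, weights=weights, k=n_clones)
        return log_growth / n_steps
    ```

    Convergence with respect to `n_clones` and `n_steps` is exactly the algorithmic-parameter question the abstract investigates; near a dynamical phase transition, much larger populations are needed.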

  17. A Pilot Study of the Epistemological Beliefs of Students in Industrial-Technical Fields

    ERIC Educational Resources Information Center

    Zinn, Bernd

    2012-01-01

    An investigation of the epistemological beliefs of apprentices in the commercial engineering sector is of interest for vocational training, both from the point of view of optimising vocational didactic processes as well as in terms of communicating suitable knowledge based beliefs about principles and performance in the commercial engineering…

  18. Numerical Optimisation in Non Reacting Conditions of the Injector Geometry for a Continuous Detonation Wave Rocket Engine

    NASA Astrophysics Data System (ADS)

    Gaillard, T.; Davidenko, D.; Dupoirieux, F.

    2015-06-01

    The paper presents the methodology and the results of a numerical study, which is aimed at the investigation and optimisation of different means of fuel and oxidizer injection adapted to rocket engines operating in the rotating detonation mode. As the simulations are performed at the local scale of a single injection element, only one periodic pattern of the whole geometry is calculated, so the travelling detonation waves and the associated chemical reactions cannot be taken into account. Separate injection of fuel and oxidizer is considered here, because premixed injection carries the risk of upstream propagation of the detonation wave. Different combinations of geometrical periodicity and symmetry are investigated for the injection elements distributed over the injector head. To analyse the injection and mixing processes, a nonreacting 3D flow is simulated using the LES approach. Performance of the studied configurations is analysed using the results on instantaneous and mean flowfields, as well as by comparing the mixing efficiency and the total pressure recovery evaluated for each configuration.

  19. Weight optimization of plane truss using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Neeraja, D.; Kamireddy, Thejesh; Santosh Kumar, Potnuru; Simha Reddy, Vijay

    2017-11-01

    Optimising a structure on the basis of weight has practical benefits in every engineering field, since efficiency is closely related to weight. In civil engineering in particular, weight-optimised structural elements are more economical and easier to transport to site. In this study, a genetic optimisation algorithm for the weight optimisation of steel trusses, considering shape, size and topology, has been developed in MATLAB. Material strength and buckling stability requirements have been adopted from IS 800-2007, the Indian code for construction in steel. The constraints considered in the present study are fabrication, basic nodes, displacements, and compatibility. A genetic algorithm is a search technique modelled on natural selection that combines good solutions over many generations to improve the results; solutions are generated randomly and each is represented by a binary string, akin to a natural chromosome. The outcome of the study is a MATLAB program which can optimise a steel truss and display the optimised topology along with element shapes, deflections, and stress results.
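
    The binary-string encoding and penalty handling described above can be shown on a deliberately tiny problem: a hypothetical three-member truss whose section areas are picked from a four-entry catalogue (two bits per member), with a single allowable-stress check standing in for the IS 800-2007 provisions. Everything below, numbers included, is a toy sketch, not the paper's MATLAB program.

    ```python
    import random

    # Hypothetical 3-member toy truss (all numbers invented for illustration):
    LENGTHS = [2.0, 2.0, 2.8]        # member lengths, m
    FORCES = [50.0, 50.0, 70.0]      # member axial forces, kN
    AREAS = [5.0, 10.0, 15.0, 20.0]  # discrete section catalogue, cm^2 (2 bits/member)
    ALLOW_STRESS = 6.0               # allowable stress, kN/cm^2 (stand-in for code checks)
    DENSITY = 7.85e-3                # toy density constant; exact units immaterial here

    def decode(bits):
        """Two bits per member select an index into the area catalogue."""
        return [AREAS[2 * bits[i] + bits[i + 1]] for i in range(0, len(bits), 2)]

    def fitness(bits):
        """Truss weight plus a large penalty for every over-stressed member."""
        areas = decode(bits)
        weight = sum(DENSITY * a * l * 100 for a, l in zip(areas, LENGTHS))
        overstress = sum(max(0.0, f / a - ALLOW_STRESS) for f, a in zip(FORCES, areas))
        return weight + 1e3 * overstress

    def ga_truss(pop_size=20, generations=60, seed=3):
        """Elitist binary-string GA: keep the best half, refill with
        one-point crossover plus occasional bit-flip mutation."""
        rng = random.Random(seed)
        n_bits = 2 * len(LENGTHS)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            elite = pop[: pop_size // 2]
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
                if rng.random() < 0.1:
                    child[rng.randrange(n_bits)] ^= 1
                children.append(child)
            pop = elite + children
        return min(pop, key=fitness)
    ```

    The penalty term is what lets an unconstrained GA respect stress limits: infeasible strings survive early generations but are steadily outcompeted by feasible, lighter ones.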

  20. Optimisation of warpage on plastic injection moulding part using response surface methodology (RSM) and genetic algorithm method (GA)

    NASA Astrophysics Data System (ADS)

    Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    In this study, Computer Aided Engineering was used to simulate injection moulding. A Design of Experiments (DOE) approach based on a Latin square orthogonal array was employed, and the relationships between the injection moulding parameters and warpage were identified from the resulting experimental data. Response Surface Methodology (RSM) was used to validate model accuracy. The RSM and GA methods were then combined to find the optimum injection moulding process parameters. The proposed combination of RSM and GA substantially improves the optimisation of injection moulding, increasing accuracy and reliability while minimising the warpage that occurs.

  1. Evidence of the factors that influence the utilisation of Kangaroo Mother Care by parents with low-birth-weight infants in low- and middle-income countries (LMICs): a scoping review protocol.

    PubMed

    Mathias, Christina T; Mianda, Solange; Ginindza, Themba G

    2018-04-05

    The Sustainable Development Goal (SDG) 3 emphasises reducing neonatal deaths caused by low birth weight (LBW) complications through the implementation and utilisation of Kangaroo Mother Care (KMC) in low- and middle-income countries (LMICs). Despite empirical evidence that KMC improves low-birth-weight infants' (LBWIs') survival, its recognised advantages, and the number of LMICs implementing the service, studies have shown that LBW infant deaths occurring in LMICs contribute substantially to global child mortality. The aim of this scoping review is to map out the literature on barriers, challenges and facilitators of KMC utilisation by parents with LBWIs. This scoping review will use Endnote X7 reference management software to manage articles. The review search strategy will use the SCIELO and LILACS databases. Other databases will be accessed via the EBSCOHost search engine: Academic search complete, CINAHL with full text, Education source, Health source: Nursing/Academic Edition, Medline with full text and Medline. We will also use Google Scholar, JSTOR, Open grey search engines and reference lists. A two-phase search mapping out process will be done. In phase 1, one reviewer will perform the title screening and removal of duplicates. Two reviewers will do a parallel abstract screening according to eligibility criteria. Phase 2 will involve the reading of full articles and exclusion of articles, in accordance with the eligibility criteria. Data extraction from the articles will be done by two reviewers independently, in parallel, using the data extraction form. The data quality assessment of the eligible studies will be done using the Mixed Method Appraisal Tool (MMAT). The extraction of the synthesised results and thematic content analysis of the studies will be done by NVIVO version 10. We expect to find studies on barriers, challenges and facilitating factors of KMC utilisation by parents with LBWIs in LMICs. 
The review outcomes will guide future research and practice and inform policy. The findings will be disseminated in print, electronic and conference presentations related to maternal child and neonatal health.

  2. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
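
    The claimed control flow can be paraphrased as a tiny simulation: stream fixed-size memory-FIFO chunks while the RTS acknowledgement is outstanding, then ship the remainder with one direct put. The `ack_after` parameter (the ACK arrives after that many chunks have gone out) and all names below are invented for illustration; real DMA engines poll hardware state, not a counter.

    ```python
    def transfer(data, chunk, ack_after):
        """Sketch of the patent's flow: the origin DMA engine sends an RTS,
        streams `chunk`-sized memory-FIFO portions while the ACK is still
        outstanding, and once the ACK arrives sends whatever remains as a
        single direct put. Returns the ordered list of (operation, bytes)."""
        ops, sent = [("rts", 0)], 0
        while sent < len(data):
            if len(ops) - 1 >= ack_after:        # ACK has arrived
                ops.append(("direct_put", len(data) - sent))
                sent = len(data)
            else:                                 # still waiting: keep FIFO-ing
                n = min(chunk, len(data) - sent)
                ops.append(("memory_fifo", n))
                sent += n
        return ops
    ```

    The design point is latency hiding: useful data moves during the round-trip wait for the ACK, and the cheap bulk path (direct put) carries the rest.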

  3. The Tungsten Inert GAS (TIG) Process of Welding Aluminium in Microgravity: Technical and Economic Considerations

    NASA Astrophysics Data System (ADS)

    Ferretti, S.; Amadori, K.; Boccalatte, A.; Alessandrini, M.; Freddi, A.; Persiani, F.; Poli, G.

    2002-01-01

    The UNIBO team, composed of students and professors of the University of Bologna along with technicians and engineers from Alenia Space Division and Siad Italargon Division, took part in the 3rd Student Parabolic Flight Campaign of the European Space Agency in 2000. It won the student competition and went on to take part in the Professional Parabolic Flight Campaign of May 2001. The experiment focused on "dendritic growth in aluminium alloy weldings", and investigated topics related to the welding process of aluminium in microgravity. The purpose of the research is to optimise the process and to define the areas of interest that could be improved by new conceptual designs. The team performed accurate tests in microgravity to determine which phenomena have the greatest impact on the quality of the weldings with respect to penetration, surface roughness and the microstructures that are formed during solidification. Various parameters were considered in the economic-technical optimisation, such as the type of electrode and its tip angle. Ground and space tests have determined the optimum chemical composition of the electrodes to offer the longest life while maintaining the shape of the point. Additionally, the power consumption has been optimised; this offers opportunities for promoting the product to the customer as well as being environmentally friendly. Tests performed on the Al-Li alloys showed a significant influence of some physical phenomena, such as the Marangoni effect and thermal diffusion; predictions have been made on the basis of observations of the thermal flux seen in the stereophotos. Space transportation today is a key element in the construction of space stations and future planetary bases, because the volumes available for launch to space are directly related to the payload capacity of rockets or the Space Shuttle. 
The research performed gives engineers the opportunity to consider completely new concepts for designing structures for space applications. In fact, once the optimised parameters are defined for welding in space, it could be possible to weld different parts directly in orbit to obtain much larger sizes and volumes, for example for space tourism habitation modules. The second relevant aspect is technology transfer obtained by the optimisation of the TIG process on aluminium which is often used in the automotive industry as well as in mass production markets.

  4. The engine design engine. A clustered computer platform for the aerodynamic inverse design and analysis of a full engine

    NASA Technical Reports Server (NTRS)

    Sanz, J.; Pischel, K.; Hubler, D.

    1992-01-01

    An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing layer in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled seamlessly by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps the most interesting feature of the system is its versatility, which permits the use of whichever available computational resources are experiencing less load at a given point in time.

  5. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    NASA Astrophysics Data System (ADS)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.

  6. On the conversion of infrared radiation from fission reactor-based photon engine into parallel beam

    NASA Astrophysics Data System (ADS)

    Gulevich, Andrey V.; Levchenko, Vladislav E.; Loginov, Nicolay I.; Kukharchuk, Oleg F.; Evtodiev, Denis A.; Zrodnikov, Anatoly V.

    2002-01-01

    The efficiency of converting infrared radiation from a photon engine based on a fission reactor into a parallel photon beam is discussed. Two different approaches are considered. One is to use a parabolic mirror to convert the infrared radiation into a parallel photon beam. The other is based on the use of a special lattice consisting of numerous light conductors. The experimental facility and some results are described.

  7. Programmable DNA switches and their applications.

    PubMed

    Harroun, Scott G; Prévost-Tremblay, Carl; Lauzon, Dominic; Desrosiers, Arnaud; Wang, Xiaomeng; Pedro, Liliana; Vallée-Bélisle, Alexis

    2018-03-08

    DNA switches are ideally suited for numerous nanotechnological applications, and increasing efforts are being directed toward their engineering. In this review, we discuss how to engineer these switches starting from the selection of a specific DNA-based recognition element, to its adaptation and optimisation into a switch, with applications ranging from sensing to drug delivery, smart materials, molecular transporters, logic gates and others. We provide many examples showcasing their high programmability and recent advances towards their real life applications. We conclude with a short perspective on this exciting emerging field.

  8. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    PubMed

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved by pre-processing the spectra to remove unwanted variance. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gives models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) was achieved using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) over PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to achieve higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
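
    The simultaneous search over pre-processing steps and hyper-parameters can be sketched as a GA over a mixed chromosome. The sketch below keeps the classifier behind an `evaluate` callback (in the paper's setting this would be cross-validated SVM accuracy), so the search logic stays engine-agnostic. All names, the pre-processing list, and the encoding are illustrative, not the GENOPT-SVM implementation.

    ```python
    import random

    # Hypothetical menu of spectral pre-processing steps
    PREPROC = ["none", "mean_center", "snv", "first_derivative"]

    def random_chromosome(rng):
        """One chromosome: three ordered pre-processing choices plus the two
        SVM hyper-parameters, encoded as log2(C) and log2(gamma)."""
        return {"steps": [rng.randrange(len(PREPROC)) for _ in range(3)],
                "log2C": rng.uniform(-5, 15),
                "log2g": rng.uniform(-15, 3)}

    def ga_optimise(evaluate, pop_size=16, generations=25, seed=7):
        """Generic elitist GA loop: `evaluate(chromosome)` returns a score to
        maximise (e.g. cross-validated accuracy) supplied by the caller."""
        rng = random.Random(seed)
        pop = [random_chromosome(rng) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=evaluate, reverse=True)
            elite = pop[: pop_size // 2]
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                children.append({
                    # uniform crossover on the discrete genes
                    "steps": [rng.choice(pair) for pair in zip(a["steps"], b["steps"])],
                    # blend crossover plus Gaussian mutation on the continuous genes
                    "log2C": (a["log2C"] + b["log2C"]) / 2 + rng.gauss(0, 0.5),
                    "log2g": (a["log2g"] + b["log2g"]) / 2 + rng.gauss(0, 0.5)})
            pop = elite + children
        return max(pop, key=evaluate)
    ```

    Optimising the discrete pre-processing genes and the continuous SVM genes in the same chromosome is what makes the search "simultaneous" rather than a two-stage grid search.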

  9. Parallel Hybrid Gas-Electric Geared Turbofan Engine Conceptual Design and Benefits Analysis

    NASA Technical Reports Server (NTRS)

    Lents, Charles; Hardin, Larry; Rheaume, Jonathan; Kohlman, Lee

    2016-01-01

    The conceptual design of a parallel gas-electric hybrid propulsion system for a conventional single aisle twin engine tube and wing vehicle has been developed. The study baseline vehicle and engine technology are discussed, followed by results of the hybrid propulsion system sizing and performance analysis. The weights analysis for the electric energy storage & conversion system and thermal management system is described. Finally, the potential system benefits are assessed.

  10. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs or supercomputers, potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design, and implementation in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework, which must offer understandable code to domain scientists while remaining runtime-efficient, poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicit knowledge of the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. 
We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using the new Fern library (https://github.com/geoneric/fern/), an independent generic raster processing library. Fern is a highly generic software library and its algorithms can be configured according to the configuration of a modelling framework. With manageable programming effort (e.g. matching data types between programming and domain language) we created a binding between Fern and PCRaster. The resulting PCRaster Python multicore module can be used to execute existing PCRaster models without having to make any changes to the model code. We show initial results on synthetic and geoscientific models indicating significant runtime improvements provided by parallel local and focal operations. We further outline challenges in improving remaining algorithms such as flow operations over digital elevation maps and further potential improvements like enhancing disk I/O.
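
    The appeal of local operations for parallelisation, noted above, is easy to show in miniature: a cell-by-cell operation needs no data from neighbouring cells, so rasters can be cut into row blocks and processed independently. The sketch below is a generic illustration using plain Python lists and threads, not Fern or PCRaster code.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def local_add(block_pair):
        """Local operation: cell-by-cell sum of two raster blocks."""
        a, b = block_pair
        return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

    def parallel_local_op(raster_a, raster_b, n_tasks=4):
        """Split two rasters into row blocks, apply the local operation to
        each block concurrently, and stitch the results back in order.
        Local operations need no halo exchange, which is why they
        parallelise so cleanly; focal and flow operations would need ghost
        rows or a dependency-aware schedule."""
        rows = len(raster_a)
        step = max(1, (rows + n_tasks - 1) // n_tasks)
        pairs = [(raster_a[i:i + step], raster_b[i:i + step])
                 for i in range(0, rows, step)]
        with ThreadPoolExecutor(max_workers=n_tasks) as pool:
            blocks = pool.map(local_add, pairs)
        out = []
        for block in blocks:
            out.extend(block)
        return out
    ```

    This block-and-stitch pattern is the same whether the workers are threads, processes, or cluster nodes; only the dispatch layer changes.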

  11. Self-management in palliative medicine.

    PubMed

    Davidson, Isobel; Whyte, Fiona; Richardson, Rosemary

    2012-12-01

    Self-management in the palliative care domain means equipping patients and carers to manage the medical aspects of illness, manage life roles, and adapt to the changing dynamics brought on by illness and its progression, as well as dealing with the psychological consequences of living with a life-threatening illness, with the aim of optimising living. This review considers the rationale for developing and adopting self-management as a model of care. Health policy currently advocates de-investment in traditional approaches to patient management, paralleled with a re-engineering of services towards the approaches required to underpin self-management care. However, the literature suggests that patients lack fundamental knowledge and, more importantly, an understanding of the progression of their illness or of what palliative or hospice care is. As a first step, this issue must be addressed in any self-management intervention. In terms of outcomes, evidence continues to emerge that, compared with usual care, self-management imparts sustainable understanding in targeted areas and has the potential to create a preventive-spend environment. The role of self-management in palliative care requires further elucidation, yet based on the evidence, which is gleaned predominantly from long-term conditions, it would seem sensible, if not ethical, to educate patients and carers to be actively involved in decision making.

  12. High-End Concept Based on Hypersonic Two-Stage Rocket and Electro-Magnetic Railgun to Launch Micro-Satellites Into Low-Earth Orbit

    NASA Astrophysics Data System (ADS)

    Bozic, O.; Longo, J. M.; Giese, P.; Behren, J.

    2005-02-01

    The electromagnetic railgun technology appears to be an interesting alternative for launching small payloads into Low Earth Orbit (LEO), as it may lower launch costs. A high-end solution, based upon present state-of-the-art technology, has been investigated to derive the technical boundary conditions for the application of such a new system. This paper presents the main concept and the design aspects of such a propelled projectile, with special emphasis on flight mechanics, aero-/thermodynamics, materials and propulsion characteristics. Launch angle and trajectory optimisation analyses are carried out by means of three-degree-of-freedom (3DOF) simulations. The aerodynamic form of the projectile is optimised for minimum drag and low heat loads. The surface temperature distribution for critical zones is calculated with the DLR-developed Navier-Stokes codes TAU and HOTSOSE, whereas the engineering tool HF3T is used for time-dependent calculations of heat loads and temperatures on the projectile surface and inner structures. Furthermore, competing propulsion systems are considered for the rocket engines of both stages. The structural mass is analysed mostly on the basis of carbon-fibre-reinforced materials as well as classical aerospace metallic materials. Finally, this paper gives a critical overview of the technical feasibility and cost of small rockets for such missions. Key words: micro-satellite, two-stage rocket, railgun, rocket engines, aero/thermodynamics, mass optimisation

  13. Engineering design skills coverage in K-12 engineering program curriculum materials in the USA

    NASA Astrophysics Data System (ADS)

    Chabalengula, Vivien M.; Mumba, Frackson

    2017-11-01

    The current K-12 Science Education framework and Next Generation Science Standards (NGSS) in the United States emphasise the integration of engineering design in science instruction to promote scientific literacy and engineering design skills among students. As such, many engineering education programmes have developed curriculum materials that are being used in K-12 settings. However, little is known about the nature and extent to which the engineering design skills outlined in NGSS are addressed in these K-12 engineering education programme curriculum materials. We analysed nine K-12 engineering education programmes for the nature and extent of engineering design skills coverage. Results show that developing possible solutions and actual designing of prototypes were the most highly covered engineering design skills; specification of clear goals, criteria, and constraints received medium coverage; and defining and identifying an engineering problem, optimising the design solution, demonstrating how a prototype works, and making iterations to improve designs received low coverage. These trends were similar across grade levels and across discipline-specific curriculum materials. These results have implications for engineering design-integrated science teaching and learning in K-12 settings.

  14. Flow of a Gas Turbine Engine Low-Pressure Subsystem Simulated

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1997-01-01

    The NASA Lewis Research Center is managing a task to numerically simulate overnight, on a parallel computing testbed, the aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The model solves the three-dimensional Navier-Stokes flow equations through all the components within the LPS, as well as the external flow around the engine nacelle. The LPS modeling task is being performed by Allison Engine Company under the Small Engine Technology contract. The large computer simulation was evaluated on networked computer systems using 8, 16, and 32 processors, with the parallel computing efficiency reaching 75 percent when 16 processors were used.

  15. An intelligent factory-wide optimal operation system for continuous production process

    NASA Astrophysics Data System (ADS)

    Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping

    2016-03-01

    In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.

  16. Microscale bioprocess optimisation.

    PubMed

    Micheletti, Martina; Lye, Gary J

    2006-12-01

    Microscale processing techniques offer the potential to speed up the delivery of new drugs to the market, reducing development costs and increasing patient benefit. These techniques have application across both the chemical and biopharmaceutical sectors. The approach involves the study of individual bioprocess operations at the microlitre scale using either microwell or microfluidic formats. In both cases the aim is to generate quantitative bioprocess information early on, so as to inform bioprocess design and speed translation to the manufacturing scale. Automation can enhance experimental throughput and will facilitate the parallel evaluation of competing biocatalyst and process options.

  17. Advanced propulsion system concept for hybrid vehicles

    NASA Technical Reports Server (NTRS)

    Bhate, S.; Chen, H.; Dochat, G.

    1980-01-01

    A series hybrid system, utilizing a free-piston Stirling engine with a linear alternator, and a parallel hybrid system, incorporating a kinematic Stirling engine, are analyzed for various specified reference missions/vehicles ranging from a small two-passenger commuter vehicle to a van. Parametric studies for each configuration, detailed trade-off studies to determine engine, battery and system definition, short-term energy storage evaluation, and detailed life-cycle cost studies were performed. Results indicate that the selection of a parallel Stirling-engine/electric hybrid propulsion system can reduce petroleum consumption by 70 percent compared with present conventional vehicles.

  18. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. 
It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  19. Awakening sleeping beauty: production of propionic acid in Escherichia coli through the sbm operon requires the activity of a methylmalonyl-CoA epimerase.

    PubMed

    Gonzalez-Garcia, Ricardo Axayacatl; McCubbin, Tim; Wille, Annalena; Plan, Manuel; Nielsen, Lars Keld; Marcellin, Esteban

    2017-07-17

    Propionic acid is used primarily as a food preservative, with smaller applications as a chemical building block for the production of many products including fabrics, cosmetics, drugs, and plastics. Biological production using propionibacteria would be competitive against chemical production through hydrocarboxylation of ethylene if native producers could be engineered to reach near-theoretical yield and good productivity. Unfortunately, engineering propionibacteria has proven very challenging. It has been suggested that activation of the sleeping beauty operon in Escherichia coli is sufficient to achieve propionic acid production. Optimising E. coli production should be much easier than engineering propionibacteria if tolerance issues can be addressed. Propionic acid is produced in E. coli via the sleeping beauty mutase operon under anaerobic conditions in rich medium via amino acid degradation. We observed that the sbm operon enhances amino acid degradation to propionic acid and allows E. coli to degrade isoleucine. However, we show here that the operon lacks an epimerase reaction that enables propionic acid production in minimal medium containing glucose as the sole carbon source. Production from glucose can be restored by engineering the system with a methylmalonyl-CoA epimerase from Propionibacterium acidipropionici (0.23 ± 0.02 mM). 1-Propanol production was also detected from the promiscuous activity of the native alcohol dehydrogenase (AdhE). We also show that aerobic conditions are favourable for propionic acid production. Finally, we increase titre 65 times using a combination of promoter engineering and process optimisation. The native sbm operon encodes an incomplete pathway. Production of propionic acid from glucose as sole carbon source is possible when the pathway is complemented with a methylmalonyl-CoA epimerase. 
Although propionic acid via the restored succinate dissimilation pathway is considered a fermentative process, the engineered pathway was shown to be functional under anaerobic and aerobic conditions.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
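    The trade-off described above can be sketched in miniature. The following Python toy illustrates weight-window splitting with a cap on the number of daughters; the function name, the capping rule, and all parameters are illustrative assumptions, not CCFE's actual MCNP modification:

```python
import random

def apply_weight_window(weight, w_low, w_high, max_split=20):
    """Toy weight-window check (illustrative; not MCNP's actual scheme).

    Returns the list of particle weights after splitting or roulette.
    The max_split cap mimics the 'long history' mitigation: instead of
    letting a heavily overweight particle split thousands of times, the
    window is effectively relaxed so at most max_split daughters appear,
    trading some variance reduction for bounded history length.
    """
    w_target = 0.5 * (w_low + w_high)
    if weight > w_high:
        n = int(weight / w_target)   # ideal number of splits
        n = min(n, max_split)        # dynamic cap = 'de-optimised' window
        return [weight / n] * n      # total weight is conserved
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_target
        if random.random() < weight / w_target:
            return [w_target]
        return []
    return [weight]
```

    Note that the splitting branch conserves total weight exactly, which is why capping the split count changes efficiency but not the expected result.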

  1. Parallel Algorithms for Groebner-Basis Reduction

    DTIC Science & Technology

    1987-09-25

    Technical report: Productivity Engineering in the UNIX Environment. Parallel Algorithms for Groebner-Basis Reduction.

  2. Using CLIPS in the domain of knowledge-based massively parallel programming

    NASA Technical Reports Server (NTRS)

    Dvorak, Jiri J.

    1994-01-01

    The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency with respect to parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about the application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with the C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering is discussed.

  3. Lévy flight artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She

    2016-08-01

    Artificial bee colony (ABC) optimisation algorithm is a relatively simple and recent population-based probabilistic approach for global optimisation. The solution search equation of ABC is significantly influenced by a random quantity which helps in exploration at the cost of exploitation of the search space. In the ABC, there is a high chance of skipping the true solution due to its large step sizes. In order to balance diversity and convergence in the ABC, a Lévy flight inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), has both local and global search capability simultaneously, achieved by tuning the Lévy flight parameters and thus automatically tuning the step sizes. In the LFABC, new solutions are generated around the best solution, which helps to enhance the exploitation capability of ABC. Furthermore, to improve the exploration capability, the number of scout bees is increased. The experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent variants of ABC, namely, Gbest-guided ABC, best-so-far ABC and modified ABC in most of the experiments.
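    A Lévy-distributed step of the kind that drives such a search can be generated with Mantegna's algorithm, a standard construction for approximate Lévy-stable steps. The sketch below is illustrative only: the paper's exact search operator may differ, and the `scale` factor and move rule are assumptions:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm.

    Most draws are small (local search), but occasional heavy-tailed
    jumps provide global exploration.
    """
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def lfabc_move(x, best, beta=1.5, scale=0.01):
    """Generate a candidate around the best solution with a Lévy step,
    mirroring the LFABC idea of searching near the current best
    (hypothetical operator; parameters are illustrative)."""
    return [xi + scale * levy_step(beta) * (bi - xi)
            for xi, bi in zip(x, best)]
```

    Because the step is multiplied by the distance to the best solution, step sizes shrink automatically as a candidate approaches the best-so-far point.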

  4. Evaluation of French and English MeSH Indexing Systems with a Parallel Corpus

    PubMed Central

    Névéol, Aurélie; Mork, James G.; Aronson, Alan R.; Darmoni, Stefan J.

    2005-01-01

    Objective This paper presents the evaluation of two MeSH® indexing systems for French and English on a parallel corpus. Material and methods We describe two automatic MeSH indexing systems - MTI for English, and MAIF for French. The French version of the evaluation resources has been manually indexed with MeSH keyword/qualifier pairs. This professional indexing is used as our gold standard in the evaluation of both systems on keyword retrieval. Results The English system (MTI) obtains significantly better precision and recall (78% precision and 21% recall at rank 1, vs. 37% precision and 6% recall for MAIF). Moreover, the performance of both systems can be optimised by the breakage function used by the French system (MAIF), which selects an adaptive number of descriptors for each resource indexed. Conclusion MTI achieves better performance. However, both systems have features that can benefit each other. PMID:16779103

  5. Mining nutrigenetics patterns related to obesity: use of parallel multifactor dimensionality reduction.

    PubMed

    Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K

    2015-01-01

    This paper aims to elucidate the complex etiology underlying obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data were analysed using artificial neural network methods, which identified optimised subsets of factors to predict one's obesity status. These methods did not reveal, though, how the selected factors interact with each other in the obtained predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.

  6. Using social media to facilitate knowledge transfer in complex engineering environments: a primer for educators

    NASA Astrophysics Data System (ADS)

    Murphy, Glen; Salomone, Sonia

    2013-03-01

    While highly cohesive groups are potentially advantageous they are also often correlated with the emergence of knowledge and information silos based around those same functional or occupational clusters. Consequently, an essential challenge for engineering organisations wishing to overcome informational silos is to implement mechanisms that facilitate, encourage and sustain interactions between otherwise disconnected groups. This paper acts as a primer for those seeking to gain an understanding of the design, functionality and utility of a suite of software tools generically termed social media technologies in the context of optimising the management of tacit engineering knowledge. Underpinned by knowledge management theory and using detailed case examples, this paper explores how social media technologies achieve such goals, allowing for the transfer of knowledge by tapping into the tacit and explicit knowledge of disparate groups in complex engineering environments.

  7. High-level ab initio studies of NO(X2Π)-O2(X3Σg -) van der Waals complexes in quartet states

    NASA Astrophysics Data System (ADS)

    Grein, Friedrich

    2018-05-01

    Geometry optimisations were performed on nine different structures of NO(X2Π)-O2(X3Σg-) van der Waals complexes in their quartet states, using the explicitly correlated RCCSD(T)-F12b method with basis sets up to the cc-pVQZ-F12 level. For the most stable configurations, counterpoise-corrected optimisations as well as extrapolations to the complete basis set (CBS) were performed. The X structure in the 4A‧ state was found to be most stable, with a CBS binding energy of -157 cm-1. The slipped tilted structures with N closer to O2 (Slipt-N), as well as the slipped parallel structure with O of NO closer to O2 (Slipp-O) in 4A″ states have binding energies of about -130 cm-1. C2v and linear complexes are less stable. According to calculated harmonic frequencies, the X isomer is bound. Isotropic hyperfine coupling constants of the complex are compared with those of the monomers.

  8. PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin Mohammed; Som, Sibendu

    This code consists of a Fortran 90 implementation of the parallel perturbation model to compute cyclic variability in spark ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and an overall reduction in the reliability of the engine. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code and a 10-fold speedup was demonstrated over conventional multi-cycle LES in predicting the CCV for a motored engine. Recently, the model is also being applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
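    The parallel-perturbation idea, many short simulations launched from slightly perturbed conditions instead of one long multi-cycle run, can be sketched as follows. The parameter names, the Gaussian perturbation form, and the magnitude are illustrative assumptions, not the PPM4CCV interface:

```python
import random

def perturbed_cases(base_conditions, n_cases, rel_sigma=0.02, seed=1):
    """Generate n_cases perturbed parameter sets from a nominal set,
    one per parallel simulation (a sketch of the parallel-perturbation
    strategy; not PPM4CCV's actual pre-processing).

    Each parameter gets a small multiplicative Gaussian perturbation
    with relative standard deviation rel_sigma, so every short parallel
    run samples a slightly different initial/boundary condition.
    """
    rng = random.Random(seed)  # fixed seed keeps the case set reproducible
    cases = []
    for _ in range(n_cases):
        cases.append({k: v * (1 + rng.gauss(0, rel_sigma))
                      for k, v in base_conditions.items()})
    return cases
```

    Each returned dictionary would be handed to a separate CFD run; the spread of the resulting per-case outputs stands in for the cycle-to-cycle spread of a single long simulation.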

  9. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.

  10. Engine-start Control Strategy of P2 Parallel Hybrid Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Xiangyang, Xu; Siqi, Zhao; Peng, Dong

    2017-12-01

    A smooth and fast engine-start process is important to parallel hybrid electric vehicles with an electric motor mounted in front of the transmission. However, there are some challenges during engine-start control. Firstly, the electric motor must simultaneously provide a stable driving torque to ensure drivability and a compensating torque to drag the engine before ignition. Secondly, engine-start time is a trade-off control objective because both fast start and smooth start have to be considered. To solve these problems, this paper first analyzed the resistance of the engine-start process and established a physics model in MATLAB/Simulink. Then a model-based coordinated control strategy among engine, motor and clutch was developed. Two basic control strategies, for the fast-start and smooth-start processes, were studied. Simulation results showed that the control objectives were realized by applying the given control strategies, which can meet different requirements from the driver.

  11. A Divergence Statistics Extension to VTK for Performance Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]), where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.

  12. Recent advances in characterisation of subsonic axisymmetric nozzles

    NASA Astrophysics Data System (ADS)

    Tesař, Václav

    2018-06-01

    Nozzles are devices generating jets. They are widely used in fluidics and also in active control of flows past bodies. Since a nozzle is practically always a component of a larger system, design and optimisation of that system requires characterisation of nozzle properties by an invariant quantity. Perhaps surprisingly, no suitable invariant has so far been introduced. This article surveys approaches to characterisation quantities and presents several examples of their typical use in systems such as parallel operation of two nozzles, matching a nozzle to its fluid supply source, apparent resistance increase in flows with pulsation, and the secondary invariants of a family of quasi-similar nozzles.

  13. Implementation and Assessment of a Virtual Laboratory of Parallel Robots Developed for Engineering Students

    ERIC Educational Resources Information Center

    Gil, Arturo; Peidró, Adrián; Reinoso, Óscar; Marín, José María

    2017-01-01

    This paper presents a tool, LABEL, oriented to the teaching of parallel robotics. The application, organized as a set of tools developed using Easy Java Simulations, enables the study of the kinematics of parallel robotics. A set of classical parallel structures was implemented such that LABEL can solve the inverse and direct kinematic problem of…

  14. Two-dimensional numerical simulation of a Stirling engine heat exchanger

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir; Tew, Roy C.; Dudenhoefer, James E.

    1989-01-01

    The first phase of an effort to develop multidimensional models of Stirling engine components is described. The ultimate goal is to model an entire engine working space. Parallel plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model have been solved using the Crank-Nicolson finite-difference scheme. Model predictions are compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement is obtained for flow both in circular tubes and between parallel plates. The computational heat transfer results are in good agreement with the analytical heat transfer results for parallel plates.
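    The Crank-Nicolson scheme can be illustrated on the simplest related problem: one-dimensional unsteady diffusion with fixed boundary values. This is a sketch only; the paper's two-dimensional oscillating-flow solver is more involved:

```python
def crank_nicolson_step(u, alpha, dx, dt):
    """Advance u one time step for u_t = alpha * u_xx with fixed
    (Dirichlet) boundary values, using the Crank-Nicolson scheme:
    the spatial operator is averaged between the old and new levels,
    giving second-order accuracy in both dx and dt.

    u holds boundary and interior node values on a uniform grid.
    """
    n = len(u)
    r = alpha * dt / (2 * dx * dx)
    # Right-hand side: the explicit half of the scheme (interior nodes).
    d = [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1]) for i in range(1, n - 1)]
    d[0] += r * u[0]      # boundary contributions from the implicit half
    d[-1] += r * u[-1]
    # Thomas algorithm for the tridiagonal system with (1 + 2r) on the
    # diagonal and -r on both off-diagonals.
    a, b, c = -r, 1 + 2 * r, -r
    m = len(d)
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, m):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (d[i] - a * dp[i - 1]) / denom
    x = [0.0] * m
    x[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return [u[0]] + x + [u[-1]]
```

    A quick sanity check of such a solver: a linear temperature profile has zero second derivative, so it should be a fixed point of the step.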

  15. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    PubMed

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to Github, (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using 1 core, and 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
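    Object-level parallelization of this kind maps naturally onto a process pool: each tumor is processed independently, so the cohort can be farmed out one object per worker. A minimal sketch, in which the feature computation is a hypothetical stand-in for a real feature stage (QIFE itself is MATLAB-based):

```python
from multiprocessing import Pool

def extract_features(obj):
    """Per-object feature computation (hypothetical stand-in: simple
    summary statistics over an object's voxel intensities)."""
    vals = obj["voxels"]
    return {"id": obj["id"], "mean": sum(vals) / len(vals), "max": max(vals)}

def run_pipeline(objects, workers=4):
    """Object-level parallelism: objects share no state, so a process
    pool maps cleanly over the cohort; results keep input order."""
    with Pool(workers) as pool:
        return pool.map(extract_features, objects)
```

    As the abstract notes, the speedup is bounded by memory: each worker holds its own object in memory, so the best worker count depends on object size as well as core count.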

  16. Design of an integrated team project as bachelor thesis in bioscience engineering

    NASA Astrophysics Data System (ADS)

    Peeters, Marie-Christine; Londers, Elsje; Van der Hoeven, Wouter

    2014-11-01

    Following the decision at the KU Leuven to implement the educational concept of guided independent learning and to encourage students to participate in scientific research, the Faculty of Bioscience Engineering decided to introduce a bachelor thesis. Competencies, such as communication, scientific research and teamwork, need to be present in the design of this thesis. Because of the high number of students and the multidisciplinary nature of the graduates, all research divisions of the faculty are asked to participate. The yearly surveys and hearings were used for further optimisation. The actual design of this bachelor thesis is presented and discussed in this paper.

  17. Energy and wear optimisation of train longitudinal dynamics and of traction and braking systems

    NASA Astrophysics Data System (ADS)

    Conti, R.; Galardi, E.; Meli, E.; Nocciolini, D.; Pugi, L.; Rindi, A.

    2015-05-01

    Traction and braking systems deeply affect longitudinal train dynamics, especially when an extensive blending phase among different pneumatic, electric and magnetic devices is required. The energy and wear optimisation of longitudinal vehicle dynamics has a crucial economic impact and involves several engineering problems such as wear of braking friction components, energy efficiency, thermal load on components, and the level of safety under degraded adhesion conditions (often constrained by the regulations in force on signalling or other safety-related subsystems). In fact, the application of energy storage systems can lead to an efficiency improvement of at least 10%, while, as regards wear reduction, the improvement due to distributed traction systems and to optimised traction devices can be quantified at about 50%. In this work, an innovative integrated procedure is proposed by the authors to optimise longitudinal train dynamics and traction and braking manoeuvres in terms of both energy and wear. The new approach has been applied to existing test cases and validated against experimental data provided by Breda; for some components and their homologation process, the experimental results derive from cooperation with relevant industrial partners such as Trenitalia and Italcertifer. In particular, simulation results refer to tests performed on a high-speed train (AnsaldoBreda EMU V250) and on a tram (AnsaldoBreda Sirio Tram). The proposed approach is based on a modular simulation platform in which the sub-models corresponding to different subsystems can be easily customised, depending on the considered application, on the availability of technical data and on the homologation process of different components.

  18. FPGA implementation of sparse matrix algorithm for information retrieval

    NASA Astrophysics Data System (ADS)

    Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio

    2005-06-01

    Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper, a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing accommodates frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units; the degree of parallelism can be tuned to the data. In this work we implemented the standard BLAS (basic linear algebra subprograms) sparse matrix format named Compressed Sparse Row (CSR), which has been shown to be more efficient in terms of storage space requirements and query-processing time than other sparse matrix formats for information retrieval applications. Although the inverted index has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining attention. This approach performs query processing using sparse matrix-vector multiplication and, thanks to parallelisation, achieves a substantial efficiency gain over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target a Virtex-II Field Programmable Gate Array (FPGA) board from Xilinx; the use of FPGAs to achieve high performance is a recent development in scientific applications. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
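
    The CSR storage scheme and the sparse matrix-vector product at the heart of this query-processing kernel can be sketched in plain software. This is only an illustration of the data structure, not the FPGA implementation; the tiny term-document matrix is invented for the example.

```python
# Sketch of CSR (Compressed Sparse Row) storage and the sparse
# matrix-vector product used in query processing. Software analogue
# of the hardware kernel described above; data are illustrative.

def csr_from_dense(rows):
    """Convert a dense matrix (list of lists) to CSR arrays."""
    data, col_idx, row_ptr = [], [], [0]
    for row in rows:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                col_idx.append(j)
        row_ptr.append(len(data))
    return data, col_idx, row_ptr

def csr_matvec(data, col_idx, row_ptr, x):
    """y = A @ x using CSR arrays; each row's work is independent,
    which is the property a parallel design can exploit."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += data[k] * x[col_idx[k]]
        y.append(s)
    return y

# Term-document matrix: rows = documents, columns = terms.
A = [[1, 0, 2],
     [0, 0, 3],
     [4, 5, 0]]
query = [1, 0, 1]  # query vector over the term vocabulary
data, col_idx, row_ptr = csr_from_dense(A)
scores = csr_matvec(data, col_idx, row_ptr, query)
print(scores)  # per-document scores for ranking
```

    Only the non-zero entries are stored, which is where the storage saving over a dense index comes from.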

  19. Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Wang, Chengen

    2012-08-01

    Computer-aided design of pipe routing is of fundamental importance for the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of aeroengine integrated design engineering. Unlike traditional methods that connect pipe terminals sequentially, this article presents a new branch pipe routing algorithm based on Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem using particle swarm optimisation (PSO), and then extends the method to surface cases by using geodesics to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, an adaptive region strategy and the basic visibility graph method are adopted to increase computational efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.

  20. Optimised to Fail: Card Readers for Online Banking

    NASA Astrophysics Data System (ADS)

    Drimer, Saar; Murdoch, Steven J.; Anderson, Ross

    The Chip Authentication Programme (CAP) has been introduced by banks in Europe to deal with the soaring losses due to online banking fraud. A handheld reader is used together with the customer’s debit card to generate one-time codes for both login and transaction authentication. The CAP protocol is not public, and was rolled out without any public scrutiny. We reverse engineered the UK variant of card readers and smart cards and here provide the first public description of the protocol. We found numerous weaknesses that are due to design errors such as reusing authentication tokens, overloading data semantics, and failing to ensure freshness of responses. The overall strategic error was excessive optimisation. There are also policy implications. The move from signature to PIN for authorising point-of-sale transactions shifted liability from banks to customers; CAP introduces the same problem for online banking. It may also expose customers to physical harm.

  1. 3D Reconstruction of human bones based on dictionary learning.

    PubMed

    Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin

    2017-11-01

    An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix indicates the connectivity information. For better reconstruction performance, we propose a balance coefficient between the approximation and regularisation terms and a method for its optimisation. Moreover, we apply a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges, so that a reconstructed mesh is obtained with high accuracy and regularity. The experimental results show that the proposed method has the potential to obtain high-precision, high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  2. Kinetics in the real world: linking molecules, processes, and systems.

    PubMed

    Kohse-Höinghaus, Katharina; Troe, Jürgen; Grabow, Jens-Uwe; Olzmann, Matthias; Friedrichs, Gernot; Hungenberg, Klaus-Dieter

    2018-04-25

    Unravelling elementary steps, reaction pathways, and kinetic mechanisms is key to understanding the behaviour of many real-world chemical systems that span from the troposphere or even interstellar media to engines and process reactors. Recent work in chemical kinetics provides detailed information on the reactive changes occurring in chemical systems, often on the atomic or molecular scale. The optimisation of practical processes, for instance in combustion, catalysis, battery technology, polymerisation, and nanoparticle production, can profit from a sound knowledge of the underlying fundamental chemical kinetics. Reaction mechanisms can combine information gained from theory and experiments to enable the predictive simulation and optimisation of the crucial process variables and influences on the system's behaviour that may be exploited for both monitoring and control. Chemical kinetics, as one of the pillars of Physical Chemistry, thus contributes importantly to understanding and describing natural environments and technical processes and is becoming increasingly relevant for interactions in and with the real world.

  3. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
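
    The module itself is written in C, but the fork-join pattern it provides to applications can be sketched with a process pool. This is an illustrative Python analogue of the idea, not the NASA module's actual interface; all names here are invented.

```python
# Minimal fork-join sketch of the parallelism idea described above:
# split an embarrassingly parallel loop across worker processes and
# join the partial results. Illustrative only -- the abstract's
# module does this with lightweight processes in C.
from multiprocessing import Pool

def kernel(chunk):
    # stand-in for a per-element engineering computation
    return [x * x for x in chunk]

def parallel_map(data, nworkers=4):
    size = (len(data) + nworkers - 1) // nworkers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(nworkers) as pool:
        parts = pool.map(kernel, chunks)       # fork: one task per chunk
    return [y for part in parts for y in part]  # join: flatten results

if __name__ == "__main__":
    print(parallel_map(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

    As in the abstract, the interface the caller sees is a single map-like call; the process management is hidden inside it.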

  4. Engineering Play: Exploring Associations with Executive Function, Mathematical Ability, and Spatial Ability in Preschool

    ERIC Educational Resources Information Center

    Gold, Zachary Samuel

    2017-01-01

    Engineering play is a new perspective on preschool education that views constructive play as an engineering design process that parallels the way engineers think and work when they develop engineered solutions to human problems (Bairaktarova, Evangelou, Bagiati, & Brophy, 2011). Early research from this perspective supports its use in framing…

  5. Advanced Reciprocating Engine Systems (ARES): Raising the Bar on Engine Technology with Increased Efficiency and Reduced Emissions, at Attractive Costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This is a fact sheet on the U.S. Department of Energy's (DOE) Advanced Reciprocating Engine Systems program (ARES), which is designed to promote separate, but parallel engine development between the major stationary, gaseous fueled engine manufacturers in the United States.

  6. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
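
    The Jacobi iteration is the textbook example of such a parallel linear stationary iterative method: every component update uses only the previous iterate, so all updates are independent. A minimal serial sketch (the parallelism lies in the independence of the inner loop; the system is an invented finite-difference example):

```python
# Jacobi iteration for A x = b: each component update depends only on
# the previous iterate x, so all n updates could run simultaneously --
# the property array-architecture algorithms exploit.
def jacobi(A, b, iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):  # independent across i
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

# Tridiagonal system of the kind a 1-D finite-difference grid produces
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
print(jacobi(A, b))  # converges to [1.0, 1.0, 1.0]
```

    Convergence is guaranteed here because the matrix is diagonally dominant in the weak sense typical of such discretizations.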

  7. Performance of parallel computation using CUDA for solving the one-dimensional elasticity equations

    NASA Astrophysics Data System (ADS)

    Darmawan, J. B. B.; Mungkasi, S.

    2017-01-01

    In this paper, we investigate the performance of parallel computation in solving the one-dimensional elasticity equations. Elasticity equations are widely used in engineering science, and solving them quickly and efficiently is desirable; we therefore propose the use of parallel computation, using NVIDIA's CUDA. Our research results show that parallel computation using CUDA has a great advantage and is powerful when the computation is of large scale.

  8. Algorithme intelligent d'optimisation d'un design structurel de grande envergure

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automate a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, and particularly to design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the closest solutions to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates among the known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialise the population of an island of the genetic algorithm, which optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem; then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm specifically created during this thesis to solve optimisation problems from the field of mechanical device structural design.
The algorithm is named GATE, and is essentially a real-number genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search as necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE; this operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor disc. These results are compared to those obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalised pattern search method, and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor disc problem. One drawback of GATE is its lower efficiency on highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing the cost of industrial preliminary design processes significantly.
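
    The territorial idea can be sketched as a real-number evolutionary loop that rejects offspring falling too close to any previously evaluated point, with an exclusion radius that shrinks over time. Everything below (operators, parameters, the 1-D test function) is an invented illustration of that one mechanism, not the actual GATE algorithm.

```python
# Sketch of the territorial restriction described above: offspring
# too close to any previously evaluated point are rejected, and the
# exclusion radius shrinks over the run to shift from global to
# local search. Illustration only -- not the thesis's GATE code.
import random

def evolve(f, bounds, pop=20, gens=40, radius0=0.5, seed=1):
    random.seed(seed)
    lo, hi = bounds
    evaluated = []  # archive of every visited point

    def too_close(x, r):
        return any(abs(x - e) < r for e in evaluated)

    swarm = [random.uniform(lo, hi) for _ in range(pop)]
    evaluated.extend(swarm)
    best = min(swarm, key=f)
    for g in range(gens):
        r = radius0 * (1 - g / gens)  # territory shrinks over time
        children = []
        for _ in range(pop):
            a, b = random.sample(swarm, 2)
            child = (a + b) / 2 + random.gauss(0, 0.1)  # crossover + mutation
            child = min(max(child, lo), hi)
            if not too_close(child, r):  # territorial rejection
                children.append(child)
                evaluated.append(child)
        if children:
            swarm = sorted(swarm + children, key=f)[:pop]  # elitist survival
            best = min(best, swarm[0], key=f)
    return best

# minimise a simple quadratic on [-5, 5]; optimum at x = 2
result = evolve(lambda x: (x - 2) ** 2, (-5.0, 5.0))
print(result)
```

    The rejection step is what forces early generations to spread out instead of resampling near known solutions.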

  9. A bi-population based scheme for an explicit exploration/exploitation trade-off in dynamic environments

    NASA Astrophysics Data System (ADS)

    Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique

    2017-05-01

    Optimisation in changing environments is a challenging research topic, since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches to dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, due to the difficulties associated with controlling and measuring such behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first performs exploration while the second is responsible for exploitation. These tasks are alternated from one generation to the next in a regular pattern, so as to obtain a balanced search engine. Besides, we reinforce the ability of our algorithm to adapt quickly after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, to broaden the search area, and to speed up the optimisation. We show through computational experiments, based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
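
    The explicit bi-population scheme can be sketched as two populations that swap exploration and exploitation roles each generation. This is a toy illustration under invented operators and parameters, not the authors' algorithm.

```python
# Sketch of an explicit bi-population exploration/exploitation split:
# one population makes large random moves (exploration), the other
# makes small improving steps around its best member (exploitation),
# and the roles alternate each generation. Illustration only.
import random

def search(f, bounds, size=10, gens=60, seed=3):
    random.seed(seed)
    lo, hi = bounds
    pops = [[random.uniform(lo, hi) for _ in range(size)] for _ in (0, 1)]
    best = min(pops[0] + pops[1], key=f)
    for g in range(gens):
        explorer, exploiter = pops[g % 2], pops[(g + 1) % 2]  # swap roles
        # exploration: resample anywhere in the domain
        for i in range(size):
            explorer[i] = random.uniform(lo, hi)
        # exploitation: small Gaussian steps around the population best
        anchor = min(exploiter, key=f)
        for i in range(size):
            cand = min(max(anchor + random.gauss(0, 0.05), lo), hi)
            if f(cand) < f(exploiter[i]):
                cand, exploiter[i] = exploiter[i], cand
        best = min([best] + pops[0] + pops[1], key=f)
    return best

# minimise a quadratic on [-4, 4]; optimum at x = -1
best = search(lambda x: (x + 1) ** 2, (-4.0, 4.0))
print(best)
```

    In the paper this alternation is combined with a memory of past solutions for re-adaptation after a change; the memory is omitted here for brevity.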

  10. Two-dimensional numerical simulation of a Stirling engine heat exchanger

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Tew, Roy C.; Dudenhoefer, James E.

    1989-01-01

    The first phase of an effort to develop multidimensional models of Stirling engine components is described; the ultimate goal is to model an entire engine working space. More specifically, parallel-plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties; in addition, a constant axial temperature gradient is imposed. The governing equations describing the model were solved using the Crank-Nicolson finite-difference scheme. Model predictions were compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement was obtained between the model predictions and the analytical solutions available for both flow in circular tubes and flow between parallel plates. The heat transfer computational results are also in good agreement with the analytical heat transfer results for parallel plates.
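
    A Crank-Nicolson scheme of the kind named here can be sketched for the 1-D heat equation with fixed-temperature walls. The parameters and grid are illustrative, not the paper's model.

```python
# Crank-Nicolson sketch for u_t = alpha * u_xx on a channel with
# fixed-temperature walls, the scheme family the abstract uses.
# The implicit half needs a tridiagonal solve (Thomas algorithm).
def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diag a, diag b, super-diag c, rhs d."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson(u, alpha, dx, dt, steps):
    n = len(u)
    r = alpha * dt / (2 * dx * dx)
    for _ in range(steps):
        # explicit half of the scheme goes into the right-hand side
        d = ([u[0]] +
             [r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1]
              for i in range(1, n - 1)] +
             [u[-1]])
        # implicit half: tridiagonal system with Dirichlet boundary rows
        a = [0.0] + [-r] * (n - 2) + [0.0]
        b = [1.0] + [1.0 + 2 * r] * (n - 2) + [1.0]
        c = [0.0] + [-r] * (n - 2) + [0.0]
        u = thomas(a, b, c, d)
    return u

# walls held at 0; initial hot spot in the middle of the channel
u0 = [0.0] * 5 + [1.0] + [0.0] * 5
u = crank_nicolson(u0, alpha=1.0, dx=0.1, dt=0.001, steps=200)
print([round(v, 3) for v in u])  # heat spreads and decays symmetrically
```

    Crank-Nicolson averages the explicit and implicit updates, giving second-order accuracy in time and unconditional stability, which is why it suits oscillating-flow problems.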

  11. Design, Optimization and Characterisation of Polymeric Microneedle Arrays Prepared by a Novel Laser-Based Micromoulding Technique

    PubMed Central

    Donnelly, Ryan F.; Majithiya, Rita; Singh, Thakur Raghu Raj; Morrow, Desmond I. J.; Garland, Martin J.; Demir, Yusuf K.; Migalska, Katarzyna; Ryan, Elizabeth; Gillen, David; Scott, Christopher J.; Woolfson, A. David

    2010-01-01

    Purpose Design and evaluation of a novel laser-based method for micromoulding of microneedle arrays from polymeric materials under ambient conditions. The aim of this study was to optimise the polymeric composition and assess the performance of microneedle devices that possess different geometries. Methods A range of microneedle geometries was engineered into silicone micromoulds, and their physicochemical features were subsequently characterised. Results Microneedles micromoulded from 20% w/w aqueous blends of the mucoadhesive copolymer Gantrez® AN-139 were surprisingly found to possess greater physical strength than those produced from commonly used pharmaceutical polymers. Gantrez® AN-139 microneedles, 600 μm and 900 μm in height, penetrated neonatal porcine skin with low application forces (>0.03 N per microneedle). When theophylline was loaded into 600 μm microneedles, 83% of the incorporated drug was delivered across neonatal porcine skin over 24 h. Optical coherence tomography (OCT) showed that drug-free 600 μm Gantrez® AN-139 microneedles punctured the stratum corneum barrier of human skin in vivo and extended approximately 460 μm into the skin; however, the entire microneedle length was not inserted. Conclusion In this study, we have shown that a novel laser engineering method can be used in the micromoulding of polymeric microneedle arrays. We are currently carrying out an extensive OCT-informed study investigating the influence of microneedle array geometry on skin penetration depth, with a view to enhanced transdermal drug delivery from optimised laser-engineered Gantrez® AN-139 microneedles. PMID:20490627

  12. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.

  13. A parallel implementation of the network identification by multiple regression (NIR) algorithm to reverse-engineer regulatory gene networks.

    PubMed

    Gregoretti, Francesco; Belcastro, Vincenzo; di Bernardo, Diego; Oliva, Gennaro

    2010-04-21

    The reverse engineering of gene regulatory networks from gene expression profile data has become crucial to gain novel biological knowledge. Advances in microarray technologies are producing large amounts of data that need to be analysed, and using current reverse engineering algorithms on such large data sets can be very computationally intensive. These emerging computational requirements can be met using parallel computing techniques. It has been shown that the Network Identification by multiple Regression (NIR) algorithm performs better than other ready-to-use reverse engineering software; however, it cannot be used on large networks with thousands of nodes, as is the case in biological networks, due to its high time and space complexity. In this work we overcome this limitation by designing and developing a parallel version of the NIR algorithm. The new implementation of the algorithm reaches very good accuracy even for large gene networks, improving our understanding of the gene regulatory networks that are crucial for a wide range of biomedical applications.

  14. 78 FR 56612 - Seagoing Barges

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ..., measured parallel to the centerline. \\2\\ Subchapters E (Load Lines), F (Marine Engineering), J (Electrical Engineering), N (Dangerous Cargoes), S (Subdivision and Stability), and W (Lifesaving Appliances and...

  15. Numerical Prediction of CCV in a PFI Engine using a Parallel LES Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M; Mirzaeian, Mohsen; Millo, Federico

    Cycle-to-cycle variability (CCV) is detrimental to IC engine operation and can lead to partial burn, misfire, and knock. Predicting CCV numerically is extremely challenging for two key reasons. Firstly, high-fidelity methods such as large eddy simulation (LES) are required to accurately resolve the in-cylinder turbulent flowfield both spatially and temporally. Secondly, CCV is experienced over long timescales, so the simulations need to be performed for hundreds of consecutive cycles. Ameen et al. (Int. J. Eng. Res., 2017) developed a parallel perturbation model (PPM) approach to dissociate this long-timescale problem into several shorter-timescale problems. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the initial velocity field based on the intensity of the in-cylinder turbulence. This strategy was demonstrated for a motored engine, and it was shown that the mean and variance of the in-cylinder flowfield were captured reasonably well by this approach. In the present study, the PPM approach is extended to simulate the CCV in a fired port-fuel-injected (PFI) SI engine. Two operating conditions are considered: a medium-CCV case corresponding to 2500 rpm and 16 bar BMEP, and a low-CCV case corresponding to 4000 rpm and 12 bar BMEP. The predictions from this approach are shown to be similar to those from consecutive LES cycles. Both the consecutive and PPM LES cycles are observed to under-predict the variability in the early stage of combustion, and the parallel approach slightly under-predicts the cyclic variability at all stages of combustion compared with the consecutive LES cycles. However, it is shown that the parallel approach is able to predict the coefficient of variation (COV) of the in-cylinder pressure and burn-rate-related parameters with sufficient accuracy, and is also able to predict the qualitative trends in CCV with changing operating conditions.
The convergence of the statistics predicted by the PPM approach with respect to the number of consecutive cycles required for each parallel simulation is also investigated. It is shown that this new approach is able to give accurate predictions of the CCV in fired engines in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.

  16. Parallel 3D Multi-Stage Simulation of a Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.; Topp, David A.

    1998-01-01

    A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits the use of parallel computations by using two levels of parallelism. Each blade row is run in parallel and each blade row grid is decomposed into several domains and run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit K-E turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scaleable with the number of blade rows. Enough flips are run (between 50 and 200) so the solution in the entire machine is not changing. The K-E equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee the parallelization was done correctly. The domain decomposition is done only in the axial direction since the number of points axially is much larger than the other two directions. This code uses MPI for message passing. 
The parallel speed-up of the solver portion (no I/O or body force calculation) is reported for a grid which has 227 points axially.

  17. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    NASA Astrophysics Data System (ADS)

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory over the multi-dimensional search space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped in sub-optimal solutions, or suffer from swarm explosion or premature convergence; thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customising PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions, and many others.
Additionally, hydroPSO implements recent PSO variants such as Improved Particle Swarm Optimisation (IPSO), Fully Informed Particle Swarm (FIPS), and weighted FIPS (wFIPS). Finally, an advanced sensitivity analysis using the Latin Hypercube One-At-a-Time (LH-OAT) method and user-friendly plotting summaries facilitate the interpretation and assessment of the calibration/optimisation results. We validate hydroPSO against the standard PSO algorithm (SPSO-2007) employing five test functions commonly used to assess the performance of optimisation algorithms, and we illustrate how the performance of the optimisation/calibration engine is boosted by using several of the fine-tuning options included in hydroPSO. Finally, we show how to interface SWAT-2005 with hydroPSO to calibrate a semi-distributed hydrological model for the Ega River basin in Spain, and how to interface MODFLOW-2000 with hydroPSO to calibrate a groundwater flow model for the regional aquifer of the Pampa del Tamarugal in Chile. We limit the applications of hydroPSO to case studies dealing with surface water and groundwater models, as these two are the authors' areas of expertise. However, given the flexibility of hydroPSO, we believe this package can be applied to any model code requiring some form of parameter estimation.
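
    For reference, the canonical global-best PSO that such packages extend can be sketched in a few lines. The parameter values below are conventional defaults, not hydroPSO's (hydroPSO itself is an R package; this sketch uses Python for illustration).

```python
# Canonical ("global best") PSO with an inertia weight -- the baseline
# that hydroPSO-style packages extend with topologies, time-variant
# coefficients, boundary handling, etc. Parameters are conventional
# defaults; the one-parameter "model calibration" target is invented.
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=7):
    random.seed(seed)
    lo, hi = bounds
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                # best-known personal positions
    gbest = min(pbest, key=f)   # best-known swarm position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            v[i] = (w * v[i]
                    + c1 * r1 * (pbest[i] - x[i])   # cognitive pull
                    + c2 * r2 * (gbest - x[i]))     # social pull
            x[i] = min(max(x[i] + v[i], lo), hi)    # clamp to bounds
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest

# calibrate a one-parameter "model": minimise squared error against 3.2
best = pso(lambda p: (p - 3.2) ** 2, (0.0, 10.0))
print(best)
```

    Swapping the single `gbest` for a neighbourhood-best per particle is the topology generalisation the abstract refers to.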

  18. A Comparison of Propulsion Concepts for SSTO Reusable Launchers

    NASA Astrophysics Data System (ADS)

    Varvill, R.; Bond, A.

    This paper discusses the relevant selection criteria for a single-stage-to-orbit (SSTO) propulsion system and then reviews the characteristics of the typical engine types proposed for this role against these criteria. The engine types considered include Hydrogen/Oxygen (H2/O2) rockets, Scramjets, Turbojets, Turborockets and Liquid Air Cycle Engines. In the authors' opinion, none of the above engines is able to meet all the necessary criteria for an SSTO propulsion system simultaneously. However, by selecting appropriate features from each, it is possible to synthesise a new class of engines which are specifically optimised for the SSTO role. The resulting engines employ precooling of the airstream and a high internal pressure ratio to enable a relatively conventional high-pressure rocket combustion chamber to be utilised in both airbreathing and rocket modes. This results in a significant mass saving with installation advantages, and by careful design of the cycle thermodynamics it enables the full potential of airbreathing to be realised. The SABRE engine which powers the SKYLON launch vehicle is an example of one of these so-called `precooled hybrid airbreathing rocket engines', and the conceptual reasoning which leads to its main design parameters is described in the paper.

  19. Reducing emissions by using special air filters for internal combustion engines

    NASA Astrophysics Data System (ADS)

    Birtok-Băneasă, C.; Raţiu, S. A.; Alexa, V.; Crăciun, A. L.; Josan, A.; Budiul-Berghian, A.

    2017-05-01

    This paper presents the experimental methodology used to carry out functional performance tests on an air filter with a particular housing design, generically named Super absorbing YXV „Air by Corneliu”, patented and homologated by the Romanian Automotive Registry, and awarded numerous prizes and medals at national and international innovation salons. The tests were carried out in the Internal Combustion Engines Laboratory of the specialisation “Road vehicles” at the Faculty of Engineering Hunedoara, part of the Politehnica University of Timisoara. The aim of the study is to optimise the air intake into the engine cylinders by reducing the gas-dynamic resistance caused by the air filter and, therefore, to achieve higher energy efficiency, i.e. reduced fuel consumption and increased engine performance. We present comparative values of various operating parameters of the engine fitted, in the first measuring session, with the original filter, and then with the studied filter. The data collected show a reduction in fuel consumption with this type of filter, which leads to lower emissions.

  20. Lead-free bearing alloys for engine applications

    NASA Astrophysics Data System (ADS)

    Ratke, Lorenz; Ågren, John; Ludwig, Andreas; Tonn, Babette; Gránásy, László; Mathiesen, Ragnvald; Arnberg, Lars; Anger, Gerd; Reifenhäuser, Bernd; Lauer, Michael; Garen, Rune; Gust, Edgar

    2005-10-01

    Recent developments to reduce the fuel consumption, emissions, air pollution, size and weight of engines for automotive, truck, ship propulsion and electrical power generation lead to temperature and load conditions within the engines that cannot be borne by conventional bearings. Presently, only costly multilayer bearings with electroplated or sputtered surface coatings can cope with the required load/speed combinations. Ecological considerations in recent years led to a ban by the European Commission on the use of lead in cars, a problem for the standard bronze-lead bearing material. This MAP project is therefore developing an aluminium-based lead-free bearing material with sufficient hardness, wear and friction properties and good corrosion resistance. Only alloys made of components immiscible in the molten state can meet the demanding requirements. Space experimentation plays a crucial role in optimising the cast microstructure for such applications.

  1. Virtual Engine a Tool for Truck Reliability Increase

    NASA Astrophysics Data System (ADS)

    Stodola, Jiri; Novotny, Pavel

    2017-06-01

    The internal combustion engine development process requires CAD models which deliver results for the concept phase at a very early stage and which can be further detailed on the same program platform as the development process progresses. The vibratory and acoustic behaviour of the powertrain is highly complex: it consists of many components that are subject to loads varying greatly in magnitude, and it operates over a wide range of speeds. The interaction of the crank and crankcase is a major problem for powertrain designers when optimising the vibration and noise characteristics of the powertrain. The Finite Element Method (FEM) and Multi-Body Systems (MBS) are suitable for the creation of 3-D calculation models. Non-contact measurements make it possible to verify complex calculation models. All numerical simulations and measurements are performed on a six-cylinder in-line Diesel engine.

  2. Wideband and flat-gain amplifier based on high concentration erbium-doped fibres in parallel double-pass configuration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamida, B A; Cheng, X S; Harun, S W

    A wideband and flat-gain erbium-doped fibre amplifier (EDFA) is demonstrated using a hybrid gain medium of a zirconia-based erbium-doped fibre (Zr-EDF) and a high concentration erbium-doped fibre (EDF). The amplifier has two stages, comprising a 2-m-long Zr-EDF and a 9-m-long EDF optimised for C- and L-band operations, respectively, in a double-pass parallel configuration. A chirped fibre Bragg grating (CFBG) is used in both stages to ensure double propagation of the signal and thus to increase the attainable gain in both C- and L-band regions. At an input signal power of 0 dBm, a flat gain of 15 dB is achieved with a gain variation of less than 0.5 dB within a wide wavelength range from 1530 to 1605 nm. The corresponding noise figure varies from 6.2 to 10.8 dB within this wavelength region.

  3. An integrated control strategy for the composite braking system of an electric vehicle with independently driven axles

    NASA Astrophysics Data System (ADS)

    Sun, Fengchun; Liu, Wei; He, Hongwen; Guo, Hongqiang

    2016-08-01

    For an electric vehicle with independently driven axles, an integrated braking control strategy was proposed to coordinate the regenerative braking and the hydraulic braking. The integrated strategy includes three modes, namely the hybrid composite mode, the parallel composite mode and the pure hydraulic mode. For the hybrid and parallel composite modes, the coefficients distributing the braking force between the hydraulic braking and the two motors' regenerative braking were optimised offline, and response surfaces related to the driving-state parameters were established. Meanwhile, the six-sigma method was applied to deal with uncertainty problems for reliability. Additionally, the pure hydraulic mode is activated to ensure braking safety and stability when predictive failure of the response surfaces occurs. Experimental results under given braking conditions showed that the braking requirements were well met with high braking stability and energy regeneration rate, and that the reliability of the braking strategy was guaranteed under general braking conditions.
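
    As a toy illustration of the distribution idea above (not the paper's strategy), the split between regenerative and hydraulic braking can be sketched as a coefficient-weighted share capped by motor capability. In the paper the coefficient would be read from the offline-optimised response surfaces as a function of the driving state; here it is a plain number, and all values are invented:

```python
# Toy sketch of composite braking-force distribution. The coefficient would,
# in the paper's strategy, come from offline-optimised response surfaces;
# here it is a hypothetical constant.

def split_braking(total_force, coeff, regen_limit):
    """Split the demanded braking force into (regenerative, hydraulic) shares.

    The regenerative share is coeff * total_force, capped by the motors'
    capability regen_limit; the hydraulic brakes absorb the remainder.
    """
    regen = min(coeff * total_force, regen_limit)
    hydraulic = total_force - regen
    return regen, hydraulic
```

If the demanded force exceeds what the coefficient allots to the motors, the hydraulic circuit simply absorbs the remainder, which is the essence of a parallel composite mode.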

  4. Concurrent Software Engineering Project

    ERIC Educational Resources Information Center

    Stankovic, Nenad; Tillo, Tammam

    2009-01-01

    Concurrent engineering or overlapping activities is a business strategy for schedule compression on large development projects. Design parameters and tasks from every aspect of a product's development process and their interdependencies are overlapped and worked on in parallel. Concurrent engineering suffers from negative effects such as excessive…

  5. Optimising import risk mitigation: anticipating the unintended consequences and competing risks of informal trade.

    PubMed

    Hueston, W; Travis, D; van Klink, E

    2011-04-01

    The effectiveness of risk mitigation may be compromised by informal trade, including illegal activities, parallel markets and extra-legal activities. While no regulatory system is 100% effective in eliminating the risk of disease transmission through animal and animal product trade, extreme risk aversion in formal import health regulations may increase informal trade, with the unintended consequence of creating additional risks outside regulatory purview. Optimal risk mitigation on a national scale requires scientifically sound yet flexible mitigation strategies that can address the competing risks of formal and informal trade. More robust risk analysis and creative engagement of nontraditional partners provide avenues for addressing informal trade.

  6. Optimal bioprocess design through a gene regulatory network - growth kinetic hybrid model: Towards Replacing Monod kinetics.

    PubMed

    Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios

    2018-05-02

    Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics expression patterns of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability, and its potential as a systematic tool for optimal bioprocess design, were demonstrated by its effective prediction of bioprocess performance, in agreement with experimental values, whereas four commonly used models deviated significantly from them. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhanced pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.
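
    For context, the empirical Monod kinetics that the GRN-GK framework is designed to replace can be written in a few lines; the parameter values (mu_max, Ks, Y) below are invented for illustration and are not taken from the paper:

```python
# Illustrative Monod growth kinetics, the classic unstructured model the
# paper argues should be replaced by GRN-informed kinetics.
# All parameter values are hypothetical.

def monod_step(X, S, dt, mu_max=0.5, Ks=0.2, Y=0.4):
    """Advance biomass X and substrate S by one Euler step of Monod kinetics."""
    mu = mu_max * S / (Ks + S)        # specific growth rate
    dX = mu * X * dt                  # biomass growth
    dS = -(mu * X / Y) * dt           # substrate consumption with yield Y
    return X + dX, max(S + dS, 0.0)

def simulate(X0=0.05, S0=5.0, dt=0.01, t_end=24.0):
    """Integrate a toy batch culture from (X0, S0) to time t_end."""
    X, S = X0, S0
    t = 0.0
    while t < t_end:
        X, S = monod_step(X, S, dt)
        t += dt
    return X, S
```

The sketch makes the limitation visible: growth depends on a single lumped substrate term, with no representation of mixed-substrate regulation or promoter dynamics, which is precisely what the GRN model supplies.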

  7. Chaining direct memory access data transfer operations for compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2010-09-28

    Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node in an origin injection FIFO buffer for the origin DMA engine, a RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.
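
    The control flow described above can be pictured with a highly simplified software model: an RGET descriptor, when executed, injects further descriptors back into an injection FIFO, so a single injection triggers a whole chain of transfers with no processor intervention. This sketch models only the chaining logic, not the actual DMA hardware, packet formats, or origin/target split:

```python
# Simplified software model of chained DMA descriptors. "xfer" stands in for
# a data-transfer descriptor; "rget" stands in for an RGET descriptor whose
# payload is a list of further descriptors injected back into the FIFO.
from collections import deque

def run_chain(fifo, log):
    """Drain an injection FIFO, executing transfer and RGET descriptors."""
    while fifo:
        kind, payload = fifo.popleft()
        if kind == "xfer":
            log.append(payload)        # stand-in for moving one data block
        elif kind == "rget":
            fifo.extend(payload)       # the remote engine injects more work
```

One initial injection thus drives an arbitrarily long sequence of transfers, which is the point of the chaining technique.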

  8. Creating a Parallel Version of VisIt for Microsoft Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlock, B J; Biagas, K S; Rawson, P L

    2011-12-07

    VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt comprises a set of cooperating programs. All programs can be run locally or in client/server mode, in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.

  9. High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.

    1996-01-01

    This research program dealt with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in January 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled three-component problem were developed during 1994 and 1995. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. For the global steady-state axisymmetric analysis of a complete engine we decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed. During 1995 and 1996 we developed the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames. Benchmark results were presented at the 1996 Computational Aeroscience meeting.

  10. Engineering and improvement of the efficiency of a chimeric [P450cam-RhFRed reductase domain] enzyme.

    PubMed

    Robin, Aélig; Roberts, Gareth A; Kisch, Johannes; Sabbadin, Federico; Grogan, Gideon; Bruce, Neil; Turner, Nicholas J; Flitsch, Sabine L

    2009-05-14

    A chimeric oxygenase, in which the P450cam domain was fused to the reductase host domains of a P450RhF from Rhodococcus sp. strain NCIMB 9784 was optimised to allow for a biotransformation at 30 mM substrate in 80% overall yield, with the linker region between P450 and FMN domain proving to be important for the effective biotransformation of (+)-camphor to 5-exo-hydroxycamphor.

  11. P-HARP: A parallel dynamic spectral partitioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Biswas, R.; Simon, H.D.

    1997-05-01

    Partitioning unstructured graphs is central to the parallel solution of problems in computational science and engineering. The authors have introduced earlier the sequential version of an inertial spectral partitioner called HARP which maintains the quality of recursive spectral bisection (RSB) while forming the partitions an order of magnitude faster than RSB. The serial HARP is known to be the fastest spectral partitioner to date, three to four times faster than similar partitioners on a variety of meshes. This paper presents a parallel version of HARP, called P-HARP. Two types of parallelism have been exploited: loop-level parallelism and recursive parallelism. P-HARP has been implemented in MPI on the SGI/Cray T3E and the IBM SP2. Experimental results demonstrate that P-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.25 seconds on a 64-processor T3E. Experimental results further show that P-HARP can give nearly a 20-fold speedup on 64 processors. These results indicate that graph partitioning is no longer a major bottleneck that hinders the advancement of computational science and engineering for dynamically-changing real-world applications.
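
    For readers unfamiliar with spectral partitioning, the bisection step that RSB (and hence HARP) builds on splits a graph at the median entry of the Fiedler vector of its Laplacian. The dense-eigensolver sketch below illustrates that one step only; it is not the HARP inertial-spectral algorithm or its parallelisation:

```python
import numpy as np

def spectral_bisect(adj):
    """Split a graph into two halves using the Fiedler vector of its Laplacian.

    adj: symmetric 0/1 adjacency matrix (NumPy array).
    Returns a boolean mask marking one of the two parts. Dense eigensolve,
    so suitable only for small illustrative graphs.
    """
    degrees = adj.sum(axis=1)
    L = np.diag(degrees) - adj            # graph Laplacian
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    fiedler = vecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= np.median(fiedler)  # split at the median entry

# Example: a path graph 0-1-2-3-4-5 splits into its two halves.
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1
part = spectral_bisect(adj)
```

Recursive spectral bisection applies this split recursively to each half; HARP's contribution was to reach comparable partition quality far faster, and P-HARP parallelises that computation.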

  12. Learning from the Parallel Pathways of Makers to Broaden Pathways to Engineering

    ERIC Educational Resources Information Center

    Foster, Christina; Wigner, Aubrey; Lande, Micah; Jordan, Shawn S.

    2018-01-01

    Background: Makers are a growing community of STEM-minded people who bridge technical and non-technical backgrounds to imagine, build and fabricate engineering systems. Some have engineering training, some do not. This paper presents a study to explore the educational pathways of adult Makers and how they intersect with engineering. This research…

  13. The natural history of the sleep and respiratory engineering track at EMBC 1988 to 2010.

    PubMed

    Leder, Ron S; Schlotthauer, Gaston; Penzel, Thomas; Jane, Raimon

    2010-01-01

    Sleep science and respiratory engineering as medical subspecialties and research areas grew up side by side with biomedical engineering. The formation of EMBS in the 1950s and the discovery of REM sleep in the 1950s led to parallel development and interaction of sleep science and biomedical engineering in diagnostics and therapeutics.

  14. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.

    1994-01-01

    Applications are described of high-performance parallel computation to the analysis of complete jet engines, treated as a multidisciplinary coupled problem. The coupled problem involves the interaction of structures with gas dynamics, heat conduction and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems with emphasis on coupling phenomena; effect of partitioning strategies, augmentation and temporal solution procedures; sensitivity of response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and mapping to hardware-driven representation; and tradeoff studies between partitioning schemes and fully coupled treatment.

  15. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.
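
    The lumped-volume idea underlying the stage-by-stage approach can be sketched as follows: each inter-stage volume integrates the imbalance between the mass flow delivered by the upstream stage and that drawn by the downstream stage. The stage characteristic and constants below are invented placeholders, not the paper's Variable Cycle Engine compressor maps:

```python
# Toy lumped-volume, stage-by-stage dynamics sketch. Each entry of
# `pressures` is an inter-stage volume; boundary reservoirs supply the inlet
# and outlet pressures. The stage map and constants are hypothetical.

def mass_flow(p_up, p_down, k=1e-4):
    """Toy stage characteristic: flow driven by a fixed 1.3 stage pressure ratio."""
    return k * (p_up * 1.3 - p_down)

def step(pressures, p_inlet, p_outlet, dt, RT_over_V=1e5):
    """Advance inter-stage volume pressures by one Euler step of
    dp/dt = (RT/V) * (m_in - m_out)."""
    bounds = [p_inlet] + pressures + [p_outlet]
    new = []
    for i in range(len(pressures)):
        m_in = mass_flow(bounds[i], bounds[i + 1])       # upstream stage
        m_out = mass_flow(bounds[i + 1], bounds[i + 2])  # downstream stage
        new.append(pressures[i] + dt * RT_over_V * (m_in - m_out))
    return new
```

With the outlet held at the product of the stage pressure ratios, the volumes relax toward the per-stage pressure build-up, which is the behaviour a lumped-volume stage model is meant to capture dynamically.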

  16. Impacts of Technological Changes in the Cyber Environment on Software/Systems Engineering Workforce Development

    DTIC Science & Technology

    2010-04-01

    …for decoupled parallel development (Ref: Barry Boehm); Pressman, R.S., Software Engineering: A Practitioner's Approach.

  17. Living biointerfaces based on non-pathogenic bacteria support stem cell differentiation

    NASA Astrophysics Data System (ADS)

    Hay, Jake J.; Rodrigo-Navarro, Aleixandre; Hassi, Karoliina; Moulisova, Vladimira; Dalby, Matthew J.; Salmeron-Sanchez, Manuel

    2016-02-01

    Lactococcus lactis, a non-pathogenic bacterium, has been genetically engineered to express the III7-10 fragment of human fibronectin as a membrane protein. The engineered L. lactis is able to develop biofilms on different surfaces (such as glass and synthetic polymers) and serves as a long-term substrate for mammalian cell culture, specifically human mesenchymal stem cells (hMSCs). This system constitutes a living interface between biomaterials and stem cells. The engineered biofilms remain stable and viable for up to 28 days while the expressed fibronectin fragment induces hMSC adhesion. We have optimised conditions to allow long-term mammalian cell culture, and found that the biofilm is functionally equivalent to a fibronectin-coated surface in terms of osteoblastic differentiation using bone morphogenetic protein 2 (BMP-2) added to the medium. This living bacterial interface holds promise as a dynamic substrate for stem cell differentiation that can be further engineered to express other biochemical cues to control hMSC differentiation.

  18. Fast Whole-Engine Stirling Analysis

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2006-01-01

    This presentation discusses the whole-engine simulation approach: physical consistency, REV regenerator modeling, grid layering for smoothness and quality, conjugate heat transfer method adjustment, a high-speed low-cost parallel cluster, and debugging.

  19. Clinical image processing engine

    NASA Astrophysics Data System (ADS)

    Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald

    2009-02-01

    Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we designed a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of the applications, and collects their results. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format specifies the image requirements of each application. A MySQL database stores and manages the incoming DICOM images and application results. The engine achieves two important goals: it reduces the time and manpower required to process medical images, and it reduces turnaround time. We tested the engine on three different applications with 12 datasets and demonstrated that it improved efficiency dramatically.
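
    The router/job-manager combination described above can be sketched as a filter-then-dispatch loop. The application names and metadata filters below are hypothetical stand-ins for the XML template filters, and a thread pool plays the role of the job manager:

```python
# Sketch of routing studies to applications and executing them on a thread
# pool. Application names, filters, and metadata keys are hypothetical.
from concurrent.futures import ThreadPoolExecutor

APP_FILTERS = {
    "colon_segmentation": lambda meta: meta.get("body_part") == "ABDOMEN",
    "lung_nodule_cad":    lambda meta: meta.get("body_part") == "CHEST",
}

def route(meta):
    """Return the applications whose filter accepts this study's metadata."""
    return [app for app, accept in APP_FILTERS.items() if accept(meta)]

def run_app(app, meta):
    """Stand-in for invoking a real image processing application."""
    return (app, meta["series_uid"], "done")

def process(studies, workers=4):
    """Route each study and run all matched applications concurrently."""
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_app, app, meta)
                   for meta in studies for app in route(meta)]
        for f in futures:
            results.append(f.result())
    return results
```

A real deployment would add the listener (receiving DICOM pushes) in front of the router and a data manager persisting inputs and results to the database.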

  20. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Chen, P.-S.; Gumaste, U.; Lesoinne, M.; Stern, P.

    1995-01-01

    This research program deals with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a by-pass jet engine. The fluid mesh generation, domain decomposition and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled 3-component problem were developed in 1994. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers. For the global steady-state axisymmetric analysis of a complete engine we have decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 has been developed. It is planned to use the steady-state global solution provided by ENG10 as input to a localized three-dimensional FSI analysis for engine regions where aeroelastic effects may be important.

  1. The role of nuclear sensors and positrons for engineering nano and microtechnologies

    NASA Astrophysics Data System (ADS)

    Smith, Suzanne V.

    2011-01-01

    A sustainable nano-manufacturing future relies on optimisation of the design and synthetic approach, detailed understanding of structure/property relationships and the ability to measure a product's impact in the environment. This article outlines how bench-top positron annihilation lifetime spectroscopy (PALS) and nuclear techniques can be used in the routine analysis of a wide range of nanomaterials. Traditionally used in the semiconductor industry, PALS has proven useful not only in measuring porosity in polymeric materials but also in monitoring the milling processes used to produce natural fibre powders. Nuclear sensors (radiotracers), designed to probe the charge, size and hydrophilicity of nanomaterials, are used to evaluate the connectivity (availability) of these pores for interaction with media. Together they provide valuable information on the structure/property relationships of nanomaterials and insight into how the design of a material can be optimised. Furthermore, the highly sensitive nuclear sensors can be adapted for monitoring the impact of nanomaterials in vivo and in the environment.

  2. Bringing a military approach to teaching.

    PubMed

    Baillie, Jonathan

    2015-03-01

    Despite the company having been established only nine years ago, the founders of Kidderminster-based Avensys Medical believe it now offers not only one of the UK's most comprehensive maintenance, repair, consultancy, and equipment audit services for medical and dental equipment, but also one of the most tailored training portfolios for electro-biomedical (EBME) engineers working in healthcare settings, enabling them to get the best out of such equipment, improve patient safety, optimise service life, and save both the NHS and the private sector money. As HEJ editor Jonathan Baillie discovered on meeting one of the two co-founders, ex-Royal Electrical and Mechanical Engineers (REME) artificer sergeant-major (ASM) and MoD engineering trainer Robert Strange, many of the company's key trainers have a strong military background, and it is the rigorous and disciplined approach this enables them to bring to their training that he believes singles the company out.

  3. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information: by inverting the seismic data, useful reservoir parameters can be obtained to guide exploration effectively. Pre-stack data are voluminous and information-rich, and their inversion yields abundant information about the reservoir parameters. Owing to the sheer volume of pre-stack seismic data, single-machine environments can no longer meet the computational needs, so a method that solves the pre-stack inversion problem with high efficiency and speed is urgently needed. Optimisation of the elastic parameters with a genetic algorithm easily falls into a local optimum, which weakens the inversion result, especially for the density. An intelligent optimisation algorithm is therefore proposed in this paper and used for the elastic-parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula and improves the genetic operators; in a model test with logging data, the improved algorithm obtains better inversion results. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to address the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
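
    The MapReduce decomposition of such an inversion can be pictured schematically: mappers score candidate parameter sets against one shard of the data each, and a reducer sums the partial misfits per candidate. The sketch below is a single-process illustration of that dataflow, not the paper's implementation; the forward model and data are toy stand-ins:

```python
# Schematic MapReduce-style fitness evaluation for parameter inversion.
# `forward` and the data are hypothetical toy stand-ins.
from collections import defaultdict

def map_phase(shard, candidates, forward, observed):
    """Map: emit (candidate_id, partial_misfit) pairs for one data shard."""
    for cid, params in enumerate(candidates):
        misfit = sum((forward(params, t) - observed[t]) ** 2 for t in shard)
        yield cid, misfit

def reduce_phase(pairs):
    """Reduce: sum partial misfits per candidate."""
    totals = defaultdict(float)
    for cid, misfit in pairs:
        totals[cid] += misfit
    return dict(totals)

def evaluate(candidates, traces, forward, observed, n_shards=4):
    """Shard the traces, run the map phase per shard, then reduce."""
    shards = [traces[i::n_shards] for i in range(n_shards)]
    pairs = [p for shard in shards
               for p in map_phase(shard, candidates, forward, observed)]
    return reduce_phase(pairs)
```

In a real cluster run the map calls execute on different nodes over different gathers; the genetic algorithm then selects and varies candidates based on the reduced totals.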

  4. Information engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, D.N.

    1997-02-01

    The Information Engineering thrust area develops information technology to support the programmatic needs of Lawrence Livermore National Laboratory's Engineering Directorate. Progress in five programmatic areas is described in separate reports contained herein. These are entitled Three-dimensional Object Creation, Manipulation, and Transport; Zephyr: A Secure Internet-Based Process to Streamline Engineering Procurements; Subcarrier Multiplexing: Optical Network Demonstrations; Parallel Optical Interconnect Technology Demonstration; and Intelligent Automation Architecture.

  5. Flow of GE90 Turbofan Engine Simulated

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1999-01-01

    The objective of this task was to create and validate a three-dimensional model of the GE90 turbofan engine (General Electric) using the APNASA (average passage) flow code. This was a joint effort between GE Aircraft Engines and the NASA Lewis Research Center. The goal was to perform an aerodynamic analysis of the engine primary flow path, in under 24 hours of CPU time, on a parallel distributed workstation system. Enhancements were made to the APNASA Navier-Stokes code to make it faster and more robust and to allow for the analysis of more arbitrary geometry. The resulting simulation exploited two levels of parallelism, with extremely high efficiency. The primary flow path of the GE90 turbofan consists of a nacelle and inlet, 49 blade rows of turbomachinery, and an exhaust nozzle. Secondary flows entering and exiting the primary flow path, such as bleed, purge, and cooling flows, were modeled macroscopically as source terms to accurately simulate the engine. The information on these source terms came from detailed descriptions of the cooling flow and from thermodynamic cycle system simulations. These provided boundary condition data to the three-dimensional analysis. A simplified combustor was used to feed boundary conditions to the turbomachinery. Flow simulations of the fan, high-pressure compressor, and high- and low-pressure turbines were completed with the APNASA code.

  6. High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.

    1997-01-01

    Applications are described of high-performance computing methods to the numerical simulation of complete jet engines. The methodology focuses on the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field elements. New partitioned analysis procedures to treat this coupled three-component problem were developed. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. The NASA-sponsored ENG10 program was used for the global steady-state analysis of the whole engine. This program uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed, as well as the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames.

  7. Pooled effect of injection pressure and turbulence inducer piston on performance, combustion, and emission characteristics of a DI diesel engine powered with biodiesel blend.

    PubMed

    Isaac JoshuaRamesh Lalvani, J; Parthasarathy, M; Dhinesh, B; Annamalai, K

    2016-12-01

    In this study, the effect of injection pressure on the combustion, performance, and emission characteristics of a diesel engine fitted with a turbulence inducer piston was investigated. Engine tests were executed using conventional diesel and a 20% blend of adelfa biodiesel [A20]. The results acquired with the renewable fuel A20 in the conventional engine showed a reduction in brake thermal efficiency, the result of poor air-fuel mixing and the higher viscosity of the tested fuel. This prompted further research aimed at improving turbulence for better air-fuel mixing by means of a novel turbulence inducer piston [TIP]. The investigation was carried out to study the combined effect of injection pressure and the turbulence inducer piston. Considerable improvement in emission characteristics such as hydrocarbon, carbon monoxide, and smoke was achieved as a result of the optimised injection pressure. Nevertheless, the nitrogen oxide emissions were slightly higher than those of the conventional unmodified engine. The engine with the turbulence inducer piston shows scope for reducing the major pollutants and thus ensures environmental safety. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    PubMed

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elucidate the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts, depending on the algorithm. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing focused on only a few rudimentary algorithms, were not well optimized, and often did not provide a user-friendly programming interface that fits into existing workflows; there is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings of up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
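
    The toolbox's compact-operation peak detector runs one comparison per sample in parallel on the GPU. The same data-parallel predicate can be sketched in vectorised NumPy; the function name and the simple threshold-plus-local-maximum rule below are illustrative stand-ins, not the toolbox's actual API:

```python
import numpy as np

def detect_peaks(signal, threshold):
    """Data-parallel peak detection: a sample is flagged when it exceeds
    the threshold and is a strict local maximum of its two neighbours.
    Every comparison is independent of the others, which is what lets the
    operation map naturally onto one GPU thread per sample."""
    s = np.asarray(signal, dtype=float)
    mid = s[1:-1]
    is_peak = (mid > threshold) & (mid > s[:-2]) & (mid > s[2:])
    return np.flatnonzero(is_peak) + 1  # +1: indices refer to the full signal

x = np.array([0.0, 1.0, 5.0, 1.0, 0.0, 4.0, 6.0, 2.0])
print(detect_peaks(x, threshold=3.0))  # [2 6]
```

A GPU implementation would follow the flagging step with a compaction (stream-compact the flagged indices), which `np.flatnonzero` plays the role of here.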

  9. Pacing a data transfer operation between compute nodes on a parallel computer

    DOEpatents

    Blocksome, Michael A [Rochester, MN

    2011-09-13

    Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
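
    The claimed pacing loop amounts to a flow-control protocol: send a chunk, issue a pacing request, and inject the next chunk only after the pacing response arrives. A minimal sketch follows, with queues and a thread standing in for the DMA engines and the remote-get mechanics abstracted away; all names are hypothetical:

```python
from queue import Queue
from threading import Thread

def target_node(requests, responses, received):
    """Target side: store incoming chunks and answer pacing requests.
    In the patented scheme the response comes from the target's DMA
    engine via a remote-get operation; a thread stands in for it here."""
    while True:
        kind, payload = requests.get()
        if kind == "chunk":
            received.append(payload)
        elif kind == "pace":
            responses.put("ack")        # pacing response
        elif kind == "done":
            return

def origin_send(message, chunk_size, requests, responses):
    """Origin side: after each chunk, issue a pacing request and wait
    for the response before injecting the next chunk into the network."""
    for i in range(0, len(message), chunk_size):
        requests.put(("chunk", message[i:i + chunk_size]))
        requests.put(("pace", None))
        assert responses.get() == "ack"  # block until the target has caught up
    requests.put(("done", None))

requests, responses, received = Queue(), Queue(), []
t = Thread(target=target_node, args=(requests, responses, received))
t.start()
origin_send(b"abcdefghij", 4, requests, responses)
t.join()
print(b"".join(received))  # b'abcdefghij'
```

The point of the pacing exchange is that the origin never has more than one unacknowledged chunk outstanding, so the target's injection FIFO cannot be overrun.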

  10. Focal ratio degradation: a new perspective

    NASA Astrophysics Data System (ADS)

    Haynes, Dionne M.; Withford, Michael J.; Dawes, Judith M.; Haynes, Roger; Bland-Hawthorn, Joss

    2008-07-01

    We have developed an alternative empirical model of focal ratio degradation (FRD) for the parallel laser beam technique which can accommodate contributions from both scattering and modal diffusion. It is consistent with scattering inducing a Lorentzian contribution and modal diffusion inducing a Gaussian contribution. The convolution of these two functions produces a Voigt function, which is shown to better simulate the observed behavior of the FRD distribution and provides a greatly improved fit over the standard Gaussian fitting approach. The Voigt model can also be used to quantify the amount of energy displaced by FRD, therefore allowing astronomical instrument scientists to identify, quantify, and potentially minimize the various sources of FRD and optimise the fiber and instrument performance.
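
    The Voigt model described above is, by definition, the convolution of the Gaussian (modal diffusion) and Lorentzian (scattering) contributions. A minimal numerical sketch of that construction, with illustrative widths rather than fitted fiber data:

```python
import numpy as np

def voigt_profile(x, sigma, gamma):
    """Voigt profile built as an explicit numerical convolution of a
    Gaussian (modal diffusion, standard deviation sigma) with a
    Lorentzian (scattering, half-width gamma), normalised to unit area
    on the sampling grid."""
    dx = x[1] - x[0]
    gauss = np.exp(-x**2 / (2.0 * sigma**2))
    lorentz = gamma / (x**2 + gamma**2)
    v = np.convolve(gauss, lorentz, mode="same")
    return v / (v.sum() * dx)  # unit-area normalisation

x = np.linspace(-20.0, 20.0, 2001)
v = voigt_profile(x, sigma=1.0, gamma=0.5)
print(bool(np.isclose(v.sum() * (x[1] - x[0]), 1.0)))  # True: unit area
```

Because the Lorentzian decays only as 1/x², the Voigt wings carry far more energy than a pure Gaussian of the same core width, which is exactly the displaced-energy effect the abstract quantifies.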

  11. Can generic knee joint models improve the measurement of osteoarthritic knee kinematics during squatting activity?

    PubMed

    Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A

    2017-01-01

    Knee joint kinematics derived from multi-body optimisation (MBO) still requires evaluation. The objective of this study was to corroborate model-derived kinematics of osteoarthritic knees obtained using four generic knee joint models used in musculoskeletal modelling (spherical, hinge, degree-of-freedom coupling curves, and parallel mechanism) against reference knee kinematics measured by stereo-radiography. Root mean square errors ranged from 0.7° to 23.4° for knee rotations and from 0.6 to 9.0 mm for knee displacements. Model-derived knee kinematics computed from the generic knee joint models was inaccurate. Future developments and experiments should improve the reliability of osteoarthritic knee models in MBO and musculoskeletal modelling.

  12. Resonant tunneling based graphene quantum dot memristors.

    PubMed

    Pan, Xuan; Skafidas, Efstratios

    2016-12-08

    In this paper, we model two-terminal all graphene quantum dot (GQD) based resistor-type memory devices (memristors). The resistive switching is achieved by resonant electron tunneling. We show that parallel GQDs can be used to create multi-state memory circuits. The number of states can be optimised with additional voltage sources, whilst the noise margin for each state can be controlled by appropriately choosing the branch resistance. A three-terminal GQD device configuration is also studied. The addition of an isolated gate terminal can be used to add further or modify the states of the memory device. The proposed devices provide a promising route towards volatile memory devices utilizing only atomically thin two-dimensional graphene.

  13. Study of Background Rejection Systems for the IXO Mission.

    NASA Astrophysics Data System (ADS)

    Laurent, Philippe; Limousin, O.; Tatischeff, V.

    2009-01-01

    The scientific performance of the IXO mission will necessitate a very low detector background level. This will imply thorough background simulations and efficient background rejection systems. It also necessitates a very good knowledge of the detectors to be shielded. At APC, Paris, and CEA, Saclay, we gained experience in these activities by conceiving and optimising in parallel the high-energy detector and the active and passive background rejection system of the Simbol-X mission. Considering that this work may naturally be extended to other X-ray missions, we have initiated with CNES an R&D project on the study of background rejection systems, mainly in view of the IXO project. We will detail this activity in the poster.

  14. DEPOSITION DISTRIBUTION AMONG THE PARALLEL PATHWAYS IN THE HUMAN LUNG CONDUCTING AIRWAY STRUCTURE.

    EPA Science Inventory

    DEPOSITION DISTRIBUTION AMONG THE PARALLEL PATHWAYS IN THE HUMAN LUNG CONDUCTING AIRWAY STRUCTURE. Chong S. Kim*, USEPA National Health and Environmental Effects Research Lab. RTP, NC 27711; Z. Zhang and C. Kleinstreuer, Department of Mechanical and Aerospace Engineering, North C...

  15. Mathematics for Physicists and Engineers.

    ERIC Educational Resources Information Center

    Organisation for Economic Cooperation and Development, Paris (France).

    The text is a report of the OEEC Seminar on "The Mathematical Knowledge Required by the Physicist and Engineer" held in Paris, 1961. There are twelve major papers presented: (1) An American Parallel (describes the work of the Panel on Physical Sciences and Engineering of the Committee on the Undergraduate Program in Mathematics of the Mathematical…

  16. Surface Modification Engineered Assembly of Novel Quantum Dot Architectures for Advanced Applications

    DTIC Science & Technology

    2008-02-09

    Campbell, S. Ogata, and F. Shimojo, “Multimillion atom simulations of nanosystems on parallel computers,” in Proceedings of the International...nanomesas: multimillion-atom molecular dynamics simulations on parallel computers,” J. Appl. Phys. 94, 6762 (2003). 21. P. Vashishta, R. K. Kalia...and A. Nakano, “Multimillion atom molecular dynamics simulations of nanoparticles on parallel computers,” Journal of Nanoparticle Research 5, 119-135

  17. Partitioning and packing mathematical simulation models for calculation on parallel computers

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.; Milner, E. J.

    1986-01-01

    The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
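
    The paper's packing algorithm itself is not reproduced in the abstract. A common greedy stand-in for packing work items into a minimum number of processors is first-fit-decreasing bin packing of per-equation costs under an update-frame budget; the costs and budget below are invented for illustration:

```python
def pack_equations(costs, frame_time):
    """First-fit-decreasing packing: assign each equation (with an
    estimated per-step cost) to the first processor whose accumulated
    load still fits within the update-frame budget, opening a new
    processor only when none fits. A greedy stand-in, not the paper's
    exact algorithm."""
    processors = []  # each entry: [load, [equation indices]]
    order = sorted(range(len(costs)), key=lambda i: -costs[i])
    for i in order:
        for proc in processors:
            if proc[0] + costs[i] <= frame_time:
                proc[0] += costs[i]
                proc[1].append(i)
                break
        else:
            processors.append([costs[i], [i]])
    return processors

# Six equation groups with per-step costs, 10 ms frame budget:
result = pack_equations([6, 5, 4, 3, 2, 2], frame_time=10)
print(len(result))  # 3 processors suffice for this budget
```

Real partitioning must additionally respect the coupling structure the abstract describes, since equations exchanging current variable values are cheaper to co-locate; the sketch ignores communication cost entirely.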

  18. Method of vibration isolating an aircraft engine

    NASA Technical Reports Server (NTRS)

    Bender, Stanley I. (Inventor); Butler, Lawrence (Inventor); Dawes, Peter W. (Inventor)

    1991-01-01

    A method for coupling an engine to a support frame for mounting to a fuselage of an aircraft using a three point vibration isolating mounting system in which the load reactive forces at each mounting point are statically and dynamically determined. A first vibration isolating mount pivotably couples a first end of an elongated support beam to a stator portion of an engine with the pivoting action of the vibration mount being oriented such that it is pivotable about a line parallel to a center line of the engine. An aft end of the supporting frame is coupled to the engine through an additional pair of vibration isolating mounts with the mounts being oriented such that they are pivotable about a circumference of the engine. The aft mounts are symmetrically spaced to each side of the supporting frame by 45 degrees. The relative orientation between the front mount and the pair of rear mounts is such that only the rear mounts provide load reactive forces parallel to the engine center line, in support of the engine to the aircraft against thrust forces. The forward mount is oriented so as to provide only radial forces to the engine and some lifting forces to maintain the engine in position adjacent a fuselage. Since each mount is connected to provide specific forces to support the engine, forces required of each mount are statically and dynamically determinable.

  19. Development of Supersonic Vehicle for Demonstration of a Precooled Turbojet Engine

    NASA Astrophysics Data System (ADS)

    Sawai, Shujiro; Fujita, Kazuhisa; Kobayashi, Hiroaki; Sakai, Shin'ichiro; Bando, Nobutaka; Kadooka, Shouhei; Tsuboi, Nobuyuki; Miyaji, Koji; Uchiyama, Taku; Hashimoto, Tatsuaki

    JAXA is developing Mach 5 hypersonic turbojet engine technology that can be applied in a future hypersonic transport. The Jet Engine Technology Research Center of JAXA is currently conducting experimental studies using a 1/10-scale model engine. In parallel with the engine development activities, a new supersonic flight-testing vehicle for the hypersonic turbojet engine has been under development since 2004. In this paper, the system configuration of the flight-testing vehicle is outlined and the development status is reported.

  20. Decision tables and rule engines in organ allocation systems for optimal transparency and flexibility.

    PubMed

    Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J

    2011-05-01

    Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine simultaneously in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule engine input for matches optimizes the flexibility, simplicity, and clarity of the whole process, from specification to the performed matches; in addition, this new method allows well-controlled simulations. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.
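
    A decision table can be executed directly as rule-engine input: each row lists condition values (with "don't care" entries) and an action, and the engine fires the first fully matching row. The fragment below is a hypothetical illustration of that mechanism, not the actual ETKAS rules:

```python
def match(row_conditions, facts):
    """A condition matches when the fact equals the required value;
    None acts as a 'don't care' entry, as in a classical decision table."""
    return all(v is None or facts.get(k) == v
               for k, v in row_conditions.items())

def run_table(table, facts):
    """Return the action of the first row whose conditions all hold."""
    for conditions, action in table:
        if match(conditions, facts):
            return action
    return None

# Hypothetical allocation fragment (invented, not real ETKAS rules):
table = [
    ({"blood_group_compatible": True,  "urgent": True},  "offer"),
    ({"blood_group_compatible": True,  "urgent": None},  "rank_by_points"),
    ({"blood_group_compatible": False, "urgent": None},  "skip"),
]
print(run_table(table, {"blood_group_compatible": True, "urgent": False}))
# rank_by_points
```

Completeness and consistency checks, which the abstract highlights, reduce to simple table inspections in this representation: every combination of condition values should match some row, and no two rows should match the same facts with conflicting actions.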

  1. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  2. Teachers' Mathematics as Mathematics-at-Work

    ERIC Educational Resources Information Center

    Bednarz, Nadine; Proulx, Jérôme

    2017-01-01

    Through recognising mathematics teachers as professionals who use mathematics in their workplace, this article traces a parallel between the mathematics enacted by teachers in their practice and the mathematics used in workplaces found in studies of professionals (e.g. nurses, engineers, bankers). This parallel is developed through the five…

  3. On the Development of an Efficient Parallel Hybrid Solver with Application to Acoustically Treated Aero-Engine Nacelles

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj

    2006-01-01

    A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.

  4. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    PubMed Central

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced that incorporates nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718

  5. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    PubMed

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced that incorporates nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
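
    The guidance change described above can be sketched as follows. This is a deliberately simplified illustration (one-dimensional problem, no personal-best term, fixed coefficients), not the authors' implementation; the core idea it shows is that every swarm draws its guide from a shared archive of nondominated solutions rather than from another swarm's best:

```python
import random

def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly
    better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def vepso(objectives, dim, n_particles=15, iters=30, lo=-5.0, hi=5.0):
    """One swarm per objective; all swarms are guided by a shared
    archive of nondominated solutions (the 'improved VEPSO' idea)."""
    rng = random.Random(0)
    swarms = [[[rng.uniform(lo, hi) for _ in range(dim)]
               for _ in range(n_particles)] for _ in objectives]
    vels = [[[0.0] * dim for _ in range(n_particles)] for _ in objectives]
    archive = []  # list of (position, fitness-tuple), pairwise nondominated

    def update_archive(x):
        fx = tuple(f(x) for f in objectives)
        if any(dominates(fa, fx) for _, fa in archive):
            return
        archive[:] = [(a, fa) for a, fa in archive if not dominates(fx, fa)]
        archive.append((list(x), fx))

    for swarm in swarms:
        for p in swarm:
            update_archive(p)

    for _ in range(iters):
        for s, swarm in enumerate(swarms):
            for i, p in enumerate(swarm):
                guide = rng.choice(archive)[0]  # nondominated guidance
                for d in range(dim):
                    vels[s][i][d] = (0.7 * vels[s][i][d]
                                     + 1.5 * rng.random() * (guide[d] - p[d]))
                    p[d] = min(hi, max(lo, p[d] + vels[s][i][d]))
                update_archive(p)
    return archive

# Two convex objectives with optima at x=0 and x=2 (Schaffer-like test):
front = vepso([lambda x: x[0] ** 2, lambda x: (x[0] - 2.0) ** 2], dim=1)
print(len(front))
```

The archive doubles as the algorithm's output approximation of the Pareto front, which is why the abstract's performance measures (nondominated count, generational distance, spread, hypervolume) are all computed on it.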

  6. Power-balancing instantaneous optimization energy management for a novel series-parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao

    2012-11-01

    Energy management (EM) is a core technique for a hybrid electric bus (HEB), advancing fuel economy optimisation, and is unique to the corresponding configuration. Existing control strategy algorithms seldom take battery power management into account alongside internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel consumption to constitute the objective function, which minimizes fuel consumption at each sampled time and coordinates the power distribution in real time between the engine and battery. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as the controller for hardware-in-the-loop bench testing. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating point in the peak-efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research ensures that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits complicated configurations well.
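
    The instantaneous optimisation step can be illustrated with a small sketch: at each sample, the engine/battery power split is chosen to minimise engine fuel plus an SOC-weighted equivalent fuel cost for battery power. The fuel map, equivalence factor, and power limits below are invented for illustration; they are not the paper's calibrated models:

```python
def engine_fuel_rate(p_engine):
    """Illustrative engine fuel map (arbitrary fuel units per second):
    an affine baseline plus a convex penalty away from a notional
    peak-efficiency point at 40 kW. Purely made up for the sketch."""
    return 0.05 + 0.06 * p_engine + 0.002 * (p_engine - 40.0) ** 2

def equivalent_battery_fuel(p_batt, soc, s0=0.05, soc_ref=0.6, k=0.08):
    """Equivalent-fuel model of the battery: electrical power is priced
    in fuel units, with the equivalence factor biased by the SOC
    deviation so the strategy is pushed back toward the reference SOC."""
    s = s0 * (1.0 + k * (soc_ref - soc))
    return s * p_batt

def split_power(p_demand, soc, p_engine_max=80.0, step=1.0):
    """Instantaneous optimisation: grid-search the engine/battery split
    that minimises total (real + equivalent) fuel at this sample."""
    best = None
    p = 0.0
    while p <= min(p_demand + 20.0, p_engine_max):
        p_batt = p_demand - p   # battery covers the remainder
        cost = engine_fuel_rate(p) + equivalent_battery_fuel(p_batt, soc)
        if best is None or cost < best[0]:
            best = (cost, p, p_batt)
        p += step
    return best[1], best[2]

p_eng, p_batt = split_power(p_demand=50.0, soc=0.4)
print(p_eng + p_batt == 50.0)  # the chosen split always meets the demand
```

With the battery below its reference SOC (0.4 versus 0.6), the equivalence factor rises, so the search naturally shifts load toward the engine, which is the self-balancing behaviour the abstract describes.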

  7. Construction and characterization of VL-VH tail-parallel genetically engineered antibodies against staphylococcal enterotoxins.

    PubMed

    He, Xianzhi; Zhang, Lei; Liu, Pengchong; Liu, Li; Deng, Hui; Huang, Jinhai

    2015-03-01

    Staphylococcal enterotoxins (SEs) produced by Staphylococcus aureus have raised increasing concerns for human health and food safety. Genetically engineered small molecular antibodies are a useful tool in immuno-detection and in the treatment of clinical illness caused by SEs. In this study, we constructed V(L)-V(H) tail-parallel genetically engineered antibodies against SEs using the repertoire of rearranged germ-line immunoglobulin variable region genes. Total RNA was extracted from six hybridoma cell lines that stably express anti-SEs antibodies. The variable region genes of the light chain (V(L)) and heavy chain (V(H)) were cloned by reverse transcription PCR, and their classical murine antibody structure and functional V(D)J gene rearrangement were analyzed. To construct the eukaryotic V(H)-V(L) tail-parallel co-expression vectors based on the "5'-V(H)-ivs-IRES-V(L)-3'" mode, the ivs-IRES fragment and V(L) genes were spliced by two-step overlap extension PCR; the recombined gene fragment and V(H) genes were then inserted sequentially into the pcDNA3.1(+) expression vector. The constructed eukaryotic expression clones, termed p2C2HILO and p5C12HILO, were transfected into the baby hamster kidney 21 cell line. Two clonal cell lines stably expressing V(L)-V(H) tail-parallel antibodies against SEs were obtained, and the antibodies expressed in the cytoplasm were evaluated by enzyme-linked immunosorbent assay, immunofluorescence assay, and flow cytometry. SEs can stimulate the expression of some chemokines and chemokine receptors in porcine IPEC-J2 cells; the mRNA transcription levels of four chemokines and chemokine receptors were blocked by the recombinant SE antibody prepared in this study. Our results showed that it is possible to obtain functional V(L)-V(H) tail-parallel genetically engineered antibodies in the same vector using a eukaryotic expression system.

  8. A Parallel Relational Database Management System Approach to Relevance Feedback in Information Retrieval.

    ERIC Educational Resources Information Center

    Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David

    1999-01-01

    Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…

  9. 46 CFR 111.12-7 - Voltage regulation and parallel operation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Voltage regulation and parallel operation. 111.12-7 Section 111.12-7 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Generator Construction and Circuits § 111.12-7 Voltage regulation and...

  10. 46 CFR 111.12-7 - Voltage regulation and parallel operation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Voltage regulation and parallel operation. 111.12-7 Section 111.12-7 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Generator Construction and Circuits § 111.12-7 Voltage regulation and...

  11. 46 CFR 111.12-7 - Voltage regulation and parallel operation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Voltage regulation and parallel operation. 111.12-7 Section 111.12-7 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Generator Construction and Circuits § 111.12-7 Voltage regulation and...

  12. 46 CFR 111.12-7 - Voltage regulation and parallel operation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Voltage regulation and parallel operation. 111.12-7 Section 111.12-7 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Generator Construction and Circuits § 111.12-7 Voltage regulation and...

  13. Knee Kinematics Estimation Using Multi-Body Optimisation Embedding a Knee Joint Stiffness Matrix: A Feasibility Study.

    PubMed

    Richard, Vincent; Lamberto, Giuliano; Lu, Tung-Wu; Cappozzo, Aurelio; Dumas, Raphaël

    2016-01-01

    The use of multi-body optimisation (MBO) to estimate joint kinematics from stereophotogrammetric data while compensating for soft tissue artefact is still open to debate. Presently used joint models embedded in MBO, such as mechanical linkages, constitute a considerable simplification of joint function, preventing a detailed understanding of it. The present study proposes a knee joint model where femur and tibia are represented as rigid bodies connected through an elastic element the behaviour of which is described by a single stiffness matrix. The deformation energy, computed from the stiffness matrix and joint angles and displacements, is minimised within the MBO. Implemented as a "soft" constraint using a penalty-based method, this elastic joint description challenges the strictness of "hard" constraints. In this study, estimates of knee kinematics obtained using MBO embedding four different knee joint models (i.e., no constraints, spherical joint, parallel mechanism, and elastic joint) were compared against reference kinematics measured using bi-planar fluoroscopy on two healthy subjects ascending stairs. Bland-Altman analysis and sensitivity analysis investigating the influence of variations in the stiffness matrix terms on the estimated kinematics substantiate the conclusions. The difference between the reference knee joint angles and displacements and the corresponding estimates obtained using MBO embedding the stiffness matrix showed an average bias and standard deviation for kinematics of 0.9±3.2° and 1.6±2.3 mm. These values were lower than when no joint constraints (1.1±3.8°, 2.4±4.1 mm) or a parallel mechanism (7.7±3.6°, 1.6±1.7 mm) were used and were comparable to the values obtained with a spherical joint (1.0±3.2°, 1.3±1.9 mm). The study demonstrated the feasibility of substituting an elastic joint for more classic joint constraints in MBO.

  14. Knee Kinematics Estimation Using Multi-Body Optimisation Embedding a Knee Joint Stiffness Matrix: A Feasibility Study

    PubMed Central

    Richard, Vincent; Lamberto, Giuliano; Lu, Tung-Wu; Cappozzo, Aurelio; Dumas, Raphaël

    2016-01-01

    The use of multi-body optimisation (MBO) to estimate joint kinematics from stereophotogrammetric data while compensating for soft tissue artefact is still open to debate. Presently used joint models embedded in MBO, such as mechanical linkages, constitute a considerable simplification of joint function, preventing a detailed understanding of it. The present study proposes a knee joint model where femur and tibia are represented as rigid bodies connected through an elastic element the behaviour of which is described by a single stiffness matrix. The deformation energy, computed from the stiffness matrix and joint angles and displacements, is minimised within the MBO. Implemented as a “soft” constraint using a penalty-based method, this elastic joint description challenges the strictness of “hard” constraints. In this study, estimates of knee kinematics obtained using MBO embedding four different knee joint models (i.e., no constraints, spherical joint, parallel mechanism, and elastic joint) were compared against reference kinematics measured using bi-planar fluoroscopy on two healthy subjects ascending stairs. Bland-Altman analysis and sensitivity analysis investigating the influence of variations in the stiffness matrix terms on the estimated kinematics substantiate the conclusions. The difference between the reference knee joint angles and displacements and the corresponding estimates obtained using MBO embedding the stiffness matrix showed an average bias and standard deviation for kinematics of 0.9±3.2° and 1.6±2.3 mm. These values were lower than when no joint constraints (1.1±3.8°, 2.4±4.1 mm) or a parallel mechanism (7.7±3.6°, 1.6±1.7 mm) were used and were comparable to the values obtained with a spherical joint (1.0±3.2°, 1.3±1.9 mm). The study demonstrated the feasibility of substituting an elastic joint for more classic joint constraints in MBO. PMID:27314586
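
    The penalty formulation described above can be sketched directly: the deformation energy is a quadratic form of the stiffness matrix in the joint displacements from the neutral pose, and it is added to the marker-tracking cost so that deviations from the neutral pose are discouraged rather than forbidden. The stiffness values and the two-DoF toy joint below are invented for illustration:

```python
import numpy as np

def deformation_energy(q, q_neutral, K):
    """Deformation energy of the elastic joint element: a quadratic form
    in the displacement from the neutral pose, E = 0.5 * dq' K dq."""
    dq = q - q_neutral
    return 0.5 * dq @ K @ dq

def penalised_cost(q, markers_model, markers_measured, q_neutral, K, w=1.0):
    """MBO-style objective: marker tracking error plus the elastic-joint
    penalty. Minimising this implements the 'soft' constraint: large
    deviations from the neutral joint pose are penalised but allowed."""
    tracking = np.sum((markers_model(q) - markers_measured) ** 2)
    return tracking + w * deformation_energy(q, q_neutral, K)

# Toy 2-DoF joint (one rotation, one translation); K is made up:
K = np.diag([200.0, 50.0])
q_neutral = np.zeros(2)
markers = lambda q: q.copy()            # trivial 'model' for the sketch
measured = np.array([0.2, 0.1])
q = np.array([0.1, 0.05])
print(penalised_cost(q, markers, measured, q_neutral, K) > 0.0)  # True
```

Setting the penalty weight very high recovers a "hard" constraint in the limit, while a zero weight recovers the unconstrained case, which is why the stiffness-matrix joint sits between the two extremes compared in the study.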

  15. Scalable Visual Analytics of Massive Textual Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.

    2007-04-01

    This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as Pubmed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.

  16. A real time microcomputer implementation of sensor failure detection for turbofan engines

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1989-01-01

    An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real time on a microprocessor-based controls computer which includes parallel processing and high order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the algorithm engine demonstration, is described.

  17. High temperature turbine engine structure

    DOEpatents

    Boyd, Gary L.

    1991-01-01

    A high temperature turbine engine includes a rotor portion having axially stacked adjacent ceramic rotor parts. A ceramic/ceramic joint structure transmits torque between the rotor parts while maintaining coaxial alignment and axially spaced mutually parallel relation thereof despite thermal and centrifugal cycling.

  18. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments because of the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project was to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. This plan has been achieved. Two highly accurate, efficient Poisson solvers were developed and tested based on two different approaches: (1) adopting a mathematical geometry that better describes the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm were carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues was parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.

  19. Idle speed and fuel vapor recovery control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orzel, D.V.

    1993-06-01

    A method for controlling the idling speed of an engine via a bypass throttle connected in parallel to a primary engine throttle, and for controlling purge flow through a vapor recovery system into an air/fuel intake of the engine, is described, comprising the steps of: positioning the bypass throttle to decrease any difference between a desired engine idle speed and the actual engine idle speed; and decreasing the purge flow when said bypass throttle position is less than a preselected fraction of a maximum bypass throttle position.
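
    The two claimed steps can be sketched as one per-cycle control update. The gain, limits, and threshold values below are hypothetical placeholders; the patent does not specify them:

```python
def control_step(desired_idle_rpm, actual_idle_rpm, bypass_pos, purge_flow,
                 gain=0.001, max_bypass=1.0, purge_threshold=0.25,
                 purge_decrement=0.1):
    """One control update: nudge the bypass throttle toward the desired
    idle speed, then cut purge flow back when the bypass throttle sits
    below a preselected fraction of its maximum position."""
    error = desired_idle_rpm - actual_idle_rpm
    bypass_pos = min(max(bypass_pos + gain * error, 0.0), max_bypass)
    if bypass_pos < purge_threshold * max_bypass:
        purge_flow = max(purge_flow - purge_decrement, 0.0)
    return bypass_pos, purge_flow
```

    The interlock in the second step reflects the claim's logic: purge vapour is only admitted freely when the bypass throttle is open enough that the extra fuel will not disturb idle control.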

  20. Teaching ethics to engineers: ethical decision making parallels the engineering design process.

    PubMed

    Bero, Bridget; Kuhlman, Alana

    2011-09-01

    In order to fulfill ABET requirements, Northern Arizona University's Civil and Environmental Engineering programs incorporate professional ethics in several of their engineering courses. This paper discusses an ethics module in a 3rd year engineering design course that focuses on the design process and technical writing. Early in their academic careers, engineering students generally possess good black/white critical thinking skills on technical issues. Engineering design is the first time students are exposed to "grey" technical problems with multiple possible solutions. To identify and solve these problems, the engineering design process is used. Ethical problems are also "grey" problems and present similar challenges to students, who need a practical tool for solving them. The step-wise engineering design process was used as a model to demonstrate a similar process for ethical situations. The ethical decision-making process of Martin and Schinzinger was adapted for parallelism to the design process and presented to students as a step-wise technique for identifying the pertinent ethical issues and relevant moral theories, evaluating possible outcomes, and reaching a final decision. Students had the greatest difficulty identifying the broader, global issues presented in an ethical situation, but by the end of the module they were better able not only to identify the broader issues but also to assess specific issues more comprehensively, generate solutions, and formulate a desired response to the issue.

  1. TRANSMISSION NETWORK PLANNING METHOD FOR COMPARATIVE STUDIES (JOURNAL VERSION)

    EPA Science Inventory

    An automated transmission network planning method for comparative studies is presented. This method employs logical steps that may closely parallel those taken in practice by the planning engineers. Use is made of a sensitivity matrix to simulate the engineers' experience in sele...

  2. Model-Based Systems Engineering in the Execution of Search and Rescue Operations

    DTIC Science & Technology

    2015-09-01

    OSC can fulfill the duties of an ACO but it may make sense to split the duties if there are no communication links between the OSC and participating...parallel mode. This mode is the most powerful option because it creates sequence diagrams that generate parallel “swim lanes” for each asset...greater flexibility is desired, sequence mode generates diagrams based purely on sequential action and activity diagrams without the parallel “swim lanes”

  3. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    NASA Astrophysics Data System (ADS)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better accuracy and convergence for the proposed method than for a PSO-optimised controller or a non-optimised backstepping controller.

  4. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
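
    The "event horizon" idea underlying this kind of adaptive time cycle can be sketched roughly as follows. This is a hedged, much-simplified single-queue sketch, not the SPEEDES implementation: events are executed optimistically in timestamp order, and only those earlier than the earliest newly generated message are committed; the data structures and handler interface are assumptions:

```python
import heapq

def run_cycle(pending, handler):
    """One adaptive time cycle: pop events in timestamp order and run
    them optimistically; the "event horizon" is the earliest timestamp
    of any message generated this cycle. Events before the horizon are
    committed; the rest are rolled back (re-queued) for the next cycle."""
    processed = []
    horizon = float("inf")
    while pending and pending[0][0] < horizon:
        t, ev = heapq.heappop(pending)
        msgs = handler(t, ev)            # optimistic execution
        processed.append((t, ev, msgs))
        for mt, _ in msgs:
            horizon = min(horizon, mt)
    committed = []
    for t, ev, msgs in processed:
        if t < horizon:                  # safe: no earlier message exists
            committed.append((t, ev))
            for m in msgs:
                heapq.heappush(pending, m)
        else:                            # rollback: retry next cycle
            heapq.heappush(pending, (t, ev))
    return committed, pending
```

    The cycle length is not fixed in advance: it "breathes", shrinking when events generate near-future messages and stretching when they do not.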

  5. EDIN design study alternate space shuttle booster replacement concepts. Volume 1: Engineering analysis

    NASA Technical Reports Server (NTRS)

    Demakes, P. T.; Hirsch, G. N.; Stewart, W. A.; Glatt, C. R.

    1976-01-01

    The use of a recoverable liquid rocket booster (LRB) system to replace the existing solid rocket booster (SRB) system for the shuttle was studied. Historical weight estimating relationships were developed for the LRB using Saturn technology and modified as required. Mission performance was computed using February 1975 shuttle configuration ground rules to allow reasonable comparison of the existing shuttle with the study designs. The launch trajectory was constrained to pass through both the RTLS/AOA and main engine cutoff points of the shuttle reference mission 1. Performance analysis is based on a point design trajectory model which optimizes initial tilt rate and exoatmospheric pitch profile. A gravity turn was employed during the boost phase in place of the shuttle angle of attack profile. Engine throttling and/or shutdown was used to constrain dynamic pressure and/or longitudinal acceleration where necessary. Four basic configurations were investigated: a parallel burn vehicle with an F-1 engine powered LRB; a parallel burn vehicle with a high pressure engine powered LRB; a series burn vehicle with an F-1 engine powered LRB; and a series burn vehicle with a high pressure engine powered LRB. The relative sizes of the LRB and the ET were optimized to minimize GLOW in most cases.

  6. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. 
Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.

  7. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  8. Parallel Performance of a Combustion Chemistry Simulation

    DOE PAGES

    Skinner, Gregg; Eigenmann, Rudolf

    1995-01-01

    We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.

  9. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…
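
    Co-occurrence analysis of the kind described can be sketched simply: count how often index terms appear together in the same abstract, then rank each term's strongest associations to form thesaurus-like entries. This is a toy serial sketch; the actual system ran object filtering and automatic indexing on a parallel supercomputer, and the names here are illustrative:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    """Count how often two index terms appear in the same document."""
    counts = Counter()
    for terms in documents:
        for pair in combinations(sorted(set(terms)), 2):
            counts[pair] += 1
    return counts

def related_terms(counts, term, top=3):
    """Strongest associations for one term: a crude thesaurus entry."""
    scores = Counter()
    for (a, b), n in counts.items():
        if a == term:
            scores[b] += n
        elif b == term:
            scores[a] += n
    return [t for t, _ in scores.most_common(top)]
```

    Because each document's pairs can be counted independently and the partial counters merged, the counting phase parallelises naturally across a collection of hundreds of thousands of abstracts.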

  10. Sequential color video to parallel color video converter

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.

  11. 1998 IEEE Aerospace Conference. Proceedings.

    NASA Astrophysics Data System (ADS)

    The following topics were covered: science frontiers and aerospace; flight systems technologies; spacecraft attitude determination and control; space power systems; smart structures and dynamics; military avionics; electronic packaging; MEMS; hyperspectral remote sensing for GVP; space laser technology; pointing, control, tracking and stabilization technologies; payload support technologies; protection technologies; 21st century space mission management and design; aircraft flight testing; aerospace test and evaluation; small satellites and enabling technologies; systems design optimisation; advanced launch vehicles; GPS applications and technologies; antennas and radar; software and systems engineering; scalable systems; communications; target tracking applications; remote sensing; advanced sensors; and optoelectronics.

  12. Scientific discovery as a combinatorial optimisation problem: How best to navigate the landscape of possible experiments?

    PubMed Central

    Kell, Douglas B

    2012-01-01

    A considerable number of areas of bioscience, including gene and drug discovery, metabolic engineering for the biotechnological improvement of organisms, and the processes of natural and directed evolution, are best viewed in terms of a ‘landscape’ representing a large search space of possible solutions or experiments populated by a considerably smaller number of actual solutions that then emerge. This is what makes these problems ‘hard’, but as such these are to be seen as combinatorial optimisation problems that are best attacked by heuristic methods known from that field. Such landscapes, which may also represent or include multiple objectives, are effectively modelled in silico, with modern active learning algorithms such as those based on Darwinian evolution providing guidance, using existing knowledge, as to what is the ‘best’ experiment to do next. An awareness, and the application, of these methods can thereby enhance the scientific discovery process considerably. This analysis fits comfortably with an emerging epistemology that sees scientific reasoning, the search for solutions, and scientific discovery as Bayesian processes. PMID:22252984
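
    The heuristic navigation the author describes can be illustrated with the simplest of evolutionary searches: a (1+1) evolutionary strategy hill-climbing a toy "landscape" of bit strings, where evaluating fitness stands in for performing an experiment. Everything here (the onemax landscape, mutation rate, and budget) is an illustrative assumption, not a method from the paper:

```python
import random

def evolve(fitness, length=20, generations=500, seed=1):
    """(1+1) evolutionary strategy over bit strings: mutate the current
    best 'experiment' and keep the child if it is at least as fit."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = [b ^ (rng.random() < 1.0 / length) for b in best]
        if fitness(child) >= fitness(best):
            best = child
    return best

# A classic toy landscape: fitness is simply the number of 1-bits.
onemax = sum
```

    Even this crude strategy finds good points while sampling only a tiny fraction of the 2^20 possible "experiments", which is the essence of the argument: heuristic search makes vast landscapes navigable.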

  13. Optimization of a multi-channel parabolic guide for the material science diffractometer STRESS-SPEC at FRM II

    NASA Astrophysics Data System (ADS)

    Rebelo Kornmeier, Joana; Ostermann, Andreas; Hofmann, Michael; Gibmeier, Jens

    2014-02-01

    Neutron strain diffractometers usually use slits to define a gauge volume within engineering samples. In this study a multi-channel parabolic neutron guide was developed to be used instead of the primary slit, to minimise the loss of intensity and of vertical definition of the gauge volume that occurs when slits are placed far away from the measurement position in bulky components. The major advantage of a focusing guide is that the maximum flux is not at the exit of the guide, as for a slit system, but at the focal point relatively far away from the exit of the guide. Monte Carlo simulations were used to optimise the multi-channel parabolic guide with respect to the instrument characteristics of the diffractometer STRESS-SPEC at the FRM II neutron source. The simulations are also in excellent agreement with experimental measurements using the optimised multi-channel parabolic guide at the neutron diffractometer. In addition, the performance of the guide was compared to the standard slit setup at STRESS-SPEC using a single bead weld sample used in earlier round robin tests for residual strain measurements.

  14. Scientific discovery as a combinatorial optimisation problem: how best to navigate the landscape of possible experiments?

    PubMed

    Kell, Douglas B

    2012-03-01

    A considerable number of areas of bioscience, including gene and drug discovery, metabolic engineering for the biotechnological improvement of organisms, and the processes of natural and directed evolution, are best viewed in terms of a 'landscape' representing a large search space of possible solutions or experiments populated by a considerably smaller number of actual solutions that then emerge. This is what makes these problems 'hard', but as such these are to be seen as combinatorial optimisation problems that are best attacked by heuristic methods known from that field. Such landscapes, which may also represent or include multiple objectives, are effectively modelled in silico, with modern active learning algorithms such as those based on Darwinian evolution providing guidance, using existing knowledge, as to what is the 'best' experiment to do next. An awareness, and the application, of these methods can thereby enhance the scientific discovery process considerably. This analysis fits comfortably with an emerging epistemology that sees scientific reasoning, the search for solutions, and scientific discovery as Bayesian processes.

  15. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  16. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE PAGES

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...

    2017-04-24

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  17. Genetically improved BarraCUDA.

    PubMed

    Langdon, W B; Lam, Brian Yee Hong

    2017-01-01

    BarraCUDA is an open source C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next-generation DNA sequences against a reference genome. Recently its source code was optimised using "Genetic Improvement". The genetically improved (GI) code is up to three times faster on short paired-end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired-end nextGen sequences up to ten times faster than bwa on a 12-core server. The speed-up was such that the GI version was adopted and has been regularly downloaded from SourceForge for more than 12 months.

  18. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
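
    The decomposition idea can be sketched as follows: instead of one large network predicting all outputs, each output gets its own small one-hidden-layer network, and because the small networks are independent they can be trained in parallel (e.g. one per process). The architecture, training rule, and sizes below are illustrative assumptions, not the paper's guidelines:

```python
import numpy as np

def train_small_net(X, y, hidden=4, lr=0.1, epochs=500, seed=0):
    """Train one small one-hidden-layer tanh network on a single
    output vector y by plain gradient descent (illustrative only)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    for _ in range(epochs):
        h = np.tanh(X @ W1)                       # hidden activations
        err = h @ W2 - y[:, None]                 # prediction error
        gW2 = h.T @ err / len(X)                  # backprop, output layer
        gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def predict(net, X):
    W1, W2 = net
    return (np.tanh(X @ W1) @ W2).ravel()

def train_decomposed(X, Y):
    """One small net per output column; this loop is embarrassingly
    parallel and could be farmed out with multiprocessing."""
    return [train_small_net(X, Y[:, j], seed=j) for j in range(Y.shape[1])]
```

    The catch the paper addresses, not modelled here, is that fully independent small networks cannot capture interactions among outputs, which is why the authors' guidelines include choosing a configuration that preserves those interactions.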

  19. Bioprocess systems engineering: transferring traditional process engineering principles to industrial biotechnology.

    PubMed

    Koutinas, Michalis; Kiparissides, Alexandros; Pistikopoulos, Efstratios N; Mantalaris, Athanasios

    2012-01-01

    The complexity of the regulatory network and the interactions that occur in the intracellular environment of microorganisms highlight the importance of developing tractable mechanistic models of cellular functions and systematic approaches for modelling biological systems. To this end, the existing process systems engineering approaches can serve as a vehicle for understanding, integrating and designing biological systems and processes. Here, we review the application of a holistic approach for the development of mathematical models of biological systems, from the initial conception of the model to its final application in model-based control and optimisation. We also discuss the use of mechanistic models that account for gene regulation, in an attempt to advance the empirical expressions traditionally used to describe micro-organism growth kinetics, and we highlight current and future challenges in mathematical biology. The modelling research framework discussed herein could prove beneficial for the design of optimal bioprocesses, employing rational and feasible approaches towards the efficient production of chemicals and pharmaceuticals.

  20. Bioprocess systems engineering: transferring traditional process engineering principles to industrial biotechnology

    PubMed Central

    Koutinas, Michalis; Kiparissides, Alexandros; Pistikopoulos, Efstratios N.; Mantalaris, Athanasios

    2013-01-01

    The complexity of the regulatory network and the interactions that occur in the intracellular environment of microorganisms highlight the importance of developing tractable mechanistic models of cellular functions and systematic approaches for modelling biological systems. To this end, the existing process systems engineering approaches can serve as a vehicle for understanding, integrating and designing biological systems and processes. Here, we review the application of a holistic approach for the development of mathematical models of biological systems, from the initial conception of the model to its final application in model-based control and optimisation. We also discuss the use of mechanistic models that account for gene regulation, in an attempt to advance the empirical expressions traditionally used to describe micro-organism growth kinetics, and we highlight current and future challenges in mathematical biology. The modelling research framework discussed herein could prove beneficial for the design of optimal bioprocesses, employing rational and feasible approaches towards the efficient production of chemicals and pharmaceuticals. PMID:24688682

  1. Multi-agent modelling framework for water, energy and other resource networks

    NASA Astrophysics Data System (ADS)

    Knox, S.; Selby, P. D.; Meier, P.; Harou, J. J.; Yoon, J.; Lachaut, T.; Klassert, C. J. A.; Avisse, N.; Mohamed, K.; Tomlinson, J.; Khadem, M.; Tilmant, A.; Gorelick, S.

    2015-12-01

    Bespoke modelling tools are often needed when planning future engineered interventions in the context of various climate, socio-economic and geopolitical futures. Such tools can help improve system operating policies or assess infrastructure upgrades and their risks. A frequently used approach is to simulate and/or optimise the impact of interventions in engineered systems. Modelling complex infrastructure systems can involve incorporating multiple aspects into a single model, for example physical, economic and political. This presents the challenge of effectively combining research from diverse areas into a single system. We present the Pynsim 'Python Network Simulator' framework, a library for building simulation models capable of representing the physical, institutional and economic aspects of an engineered resources system. Pynsim is an open source, object-oriented code library aiming to promote the integration of different modelling processes. We present two case studies that demonstrate important features of Pynsim's design. The first is a large interdisciplinary project of a national water system in the Middle East with modellers from fields including water resources, economics, hydrology and geography, each considering different facets of a multi-agent system. It includes: modelling water supply and demand for households and farms; a water tanker market with transfer of water between farms and households; and policy decisions made by government institutions at district, national and international level. This study demonstrates that a well-structured library of code can provide a hub for development and act as a catalyst for integrating models. The second focuses on optimising the location of new run-of-river hydropower plants. Using a multi-objective evolutionary algorithm, this study analyses different network configurations to identify the optimal placement of new power plants within a river network. 
This demonstrates that Pynsim can be used to evaluate a multitude of topologies for identifying the optimal location of infrastructure investments. Pynsim is available on GitHub or via standard python installer packages such as pip. It comes with several examples and online documentation, making it attractive for those less experienced in software engineering.

  2. Engineering and Computing Portal to Solve Environmental Problems

    NASA Astrophysics Data System (ADS)

    Gudov, A. M.; Zavozkin, S. Y.; Sotnikov, I. Y.

    2018-01-01

    This paper describes the architecture and services of the Engineering and Computing Portal, a complex solution that provides access to high-performance computing resources and enables users to carry out computational experiments, teach parallel technologies, and solve computing tasks, including technogenic safety ones.

  3. Modeling and analysis of the TF30-P-3 compressor system with inlet pressure distortion

    NASA Technical Reports Server (NTRS)

    Mazzawy, R. S.; Banks, G. A.

    1976-01-01

    Circumferential inlet distortion testing of a TF30-P-3 afterburning turbofan engine was conducted at NASA-Lewis Research Center. Pratt and Whitney Aircraft analyzed the data using its multiple segment parallel compressor model and classical compressor theory. Distortion attenuation analysis resulted in a detailed flow field calculation with good agreement between multiple segment model predictions and the test data. Sensitivity of the engine stall line to circumferential inlet distortion was calculated on the basis of parallel compressor theory to be more severe than indicated by the data. However, the calculated stall site location was in agreement with high response instrumentation measurements.

  4. Aquatic therapy for boys with Duchenne muscular dystrophy (DMD): an external pilot randomised controlled trial.

    PubMed

    Hind, Daniel; Parkin, James; Whitworth, Victoria; Rex, Saleema; Young, Tracey; Hampson, Lisa; Sheehan, Jennie; Maguire, Chin; Cantrill, Hannah; Scott, Elaine; Epps, Heather; Main, Marion; Geary, Michelle; McMurchie, Heather; Pallant, Lindsey; Woods, Daniel; Freeman, Jennifer; Lee, Ellen; Eagle, Michelle; Willis, Tracey; Muntoni, Francesco; Baxter, Peter

    2017-01-01

    Standard treatment of Duchenne muscular dystrophy (DMD) includes regular physiotherapy. There are no data to show whether adding aquatic therapy (AT) to land-based exercises helps maintain motor function. We assessed the feasibility of recruiting and collecting data from boys with DMD in a parallel-group pilot randomised trial (primary objective), while also assessing how the intervention and trial procedures worked. Ambulant boys with DMD aged 7-16 years established on steroids, with North Star Ambulatory Assessment (NSAA) score ≥8, who were able to complete a 10-m walk test without aids or assistance, were randomly allocated (1:1) to 6 months of either optimised land-based exercises 4 to 6 days/week, defined by local community physiotherapists, or the same 4 days/week plus AT 2 days/week. Those unable to commit to a programme, with >20% variation between NSAA scores 4 weeks apart, or contraindications to AT were excluded. The main outcome measures included feasibility of recruiting 40 participants in 6 months from six UK centres, clinical outcomes including NSAA, independent assessment of treatment optimisation, participant/therapist views on acceptability of intervention and research protocols, value of information (VoI) analysis and cost-impact analysis. Over 6 months, 348 boys were screened: most lived too far from centres or were enrolled in other trials; 12 (30% of the target) were randomised to AT ( n  = 8) or control ( n  = 4). The mean change in NSAA at 6 months was -5.5 (SD 7.8) in the control arm and -2.8 (SD 4.1) in the AT arm. Harms included fatigue in two boys, pain in one. Physiotherapists and parents valued AT but believed it should be delivered in community settings. Randomisation was unattractive to families, who had already decided that AT was useful and who often preferred to enrol in drug studies. The AT prescription was considered to be optimised for three boys, with other boys given programmes that were too extensive and insufficiently focused.
Recruitment was insufficient for VoI analysis. Neither a UK-based RCT of AT nor a twice weekly AT therapy delivered at tertiary centres is feasible. Our study will help in the optimisation of AT service provision and the design of future research. ISRCTN41002956.

  5. Accelerating clinical development of HIV vaccine strategies: methodological challenges and considerations in constructing an optimised multi-arm phase I/II trial design.

    PubMed

    Richert, Laura; Doussau, Adélaïde; Lelièvre, Jean-Daniel; Arnold, Vincent; Rieux, Véronique; Bouakane, Amel; Lévy, Yves; Chêne, Geneviève; Thiébaut, Rodolphe

    2014-02-26

    Many candidate vaccine strategies against human immunodeficiency virus (HIV) infection are under study, but their clinical development is lengthy and iterative. To accelerate HIV vaccine development, optimised trial designs are needed. We propose a randomised multi-arm phase I/II design for early stage development of several vaccine strategies, aiming at rapidly discarding those that are unsafe or non-immunogenic. We explored early stage designs to evaluate both the safety and the immunogenicity of four heterologous prime-boost HIV vaccine strategies in parallel. One of the vaccines used as a prime and boost in the different strategies (vaccine 1) has yet to be tested in humans, thus requiring a phase I safety evaluation. However, its toxicity risk is considered minimal based on data from similar vaccines. We adapted a randomised phase II trial design by integrating an early safety decision rule emulating that of a phase I study. We evaluated the operating characteristics of the proposed design in simulation studies with either a fixed-sample frequentist or a continuous Bayesian safety decision rule and projected timelines for the trial. We propose a randomised four-arm phase I/II design with two independent binary endpoints for safety and immunogenicity. Immunogenicity evaluation at trial end is based on a single-stage Fleming design per arm, comparing the observed proportion of responders in an immunogenicity screening assay to an unacceptably low proportion, without direct comparisons between arms. Randomisation limits heterogeneity in volunteer characteristics between arms. To avoid exposure of additional participants to an unsafe vaccine during the vaccine boost phase, an early safety decision rule is imposed on the arm starting with vaccine 1 injections. In simulations of the design with either decision rule, the risks of erroneous conclusions were controlled at <15%. Flexibility in trial conduct is greater with the continuous Bayesian rule.
A 12-month gain in timelines is expected by this optimised design. Other existing designs such as bivariate or seamless phase I/II designs did not offer a clear-cut alternative. By combining phase I and phase II evaluations in a multi-arm trial, the proposed optimised design allows for accelerating early stage clinical development of HIV vaccine strategies.
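
    The single-stage Fleming-type rule described above can be sketched numerically: an arm passes only if the observed responder count exceeds a threshold chosen so that a truly unacceptable response rate rarely passes. The arm size and the "unacceptably low" rate below are illustrative assumptions, not the trial's actual values.

    ```python
    # Sketch of a single-stage Fleming-type decision rule per arm:
    # choose the smallest responder count k such that an arm with an
    # unacceptably low true response rate p0 passes with probability < 5%.
    from math import comb

    def binom_tail(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n, p0 = 30, 0.2   # arm size and unacceptable response rate (illustrative)
    k = next(k for k in range(n + 1) if binom_tail(n, k, p0) < 0.05)
    # an arm is declared immunogenic only if >= k of n volunteers respond
    ```

    The same tail computation also gives the power against an acceptable response rate, which is how the operating characteristics of such a rule are typically tuned.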

  6. Systems Engineering and Integration as a Foundation for Mission Engineering

    DTIC Science & Technology

    2015-09-01

    parallels the INCOSE definition to develop a system, it is important to note the focus on a complete life cycle balanced solution and the satisfaction of...unacceptable to relevant stakeholders. Principle 7 Managers should acknowledge the potential conflicts between (a) their own role as corporate...engineer in understanding the relationships between various needs and to identify similarities, differences, or redundancies. Another method for

  7. LPG as a Fuel for Diesel Engines-Experimental Investigations

    NASA Astrophysics Data System (ADS)

    Cristian Nutu, Nikolaos; Pana, Constantin; Negurescu, Niculae; Cernat, Alexandru; Mirica, Ionel

    2017-10-01

    The main objective of the paper is to reduce the pollutant emissions of a compression ignition engine fuelled with liquefied petroleum gas (LPG), while maintaining the energetic performance of the engine. To optimise engine operation, a correlation between the substitution ratio of diesel fuel with LPG and the adjustments for the investigated regimes must be established in order to limit the maximum pressure, smoke level, knock and rough engine operation, fuel consumption and pollutant emissions. The test bed, situated in the Thermotechnics, Engines, Thermal Equipment and Refrigeration Installations Department, was adapted to be fuelled with liquefied petroleum gas. A conventional LPG fuelling installation was adopted, consisting of an LPG tank, a vaporiser, connections between the tank and the vaporiser, and a valve to adjust the gaseous fuel flow. Using the diesel-gas method, LPG in the gaseous state is injected into the intake manifold, and the homogeneous air-LPG mixture is ignited by the flame that appears in the diesel fuel sprays. To maintain the engine power at the same level as in the standard case of fuelling with diesel fuel only, the diesel fuel dose was reduced for each investigated operating regime and energetically substituted with LPG. The engine used for the experimental investigations is a turbocharged truck diesel engine with a 10.34 dm3 displacement. The investigated working regime was 40% load at 1750 rpm, and the energetic substitution ratios of diesel fuel with LPG ranged between 0 and 25%.
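
    The energetic substitution described above (reducing the diesel dose and replacing it, energy for energy, with LPG) amounts to simple heating-value arithmetic. A minimal sketch, with typical lower heating values assumed rather than taken from the record:

    ```python
    # Energetic substitution of diesel fuel with LPG: for substitution
    # ratio x, a fraction x of the cycle's fuel energy is supplied by LPG
    # and the diesel dose is reduced accordingly.
    LHV_DIESEL = 42.5  # MJ/kg, typical lower heating value (assumed)
    LHV_LPG = 46.0     # MJ/kg, typical lower heating value (assumed)

    def lpg_substitution(diesel_dose_mg, ratio):
        """Return (reduced diesel dose, LPG dose) in mg for an energetic
        substitution ratio between 0 and 0.25 (the investigated range)."""
        energy = diesel_dose_mg * LHV_DIESEL        # total fuel energy per cycle
        diesel_mg = energy * (1.0 - ratio) / LHV_DIESEL
        lpg_mg = energy * ratio / LHV_LPG
        return diesel_mg, lpg_mg

    d, g = lpg_substitution(100.0, 0.25)  # 25% substitution of a 100 mg dose
    ```

    Because LPG has a slightly higher heating value than diesel, the LPG mass added is a little smaller than the diesel mass removed.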

  8. Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei

    The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method of obtaining the cycles, running a single simulation through many engine cycles sequentially, can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the time needed to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the time needed for a full set of simulated engine cycles by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results, with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles.
Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.
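
    The core idea of the PPM, running perturbed engine-cycle simulations concurrently instead of marching through them sequentially, can be sketched with a toy stand-in for the LES solver; the perturbation model and base state below are illustrative only, and a real run would distribute processes or MPI ranks rather than threads.

    ```python
    # PPM sketch: launch N independent cycle simulations from perturbed
    # copies of a base state, then gather cycle statistics afterwards.
    # simulate_cycle is a toy stand-in for one LES engine-cycle run.
    from concurrent.futures import ThreadPoolExecutor
    import random
    import statistics

    def simulate_cycle(seed):
        """Toy stand-in for one perturbed engine-cycle simulation."""
        rng = random.Random(seed)
        base_peak_pressure = 10.0                 # bar, toy base state
        return base_peak_pressure + rng.gauss(0.0, 0.1)  # perturbed outcome

    with ThreadPoolExecutor() as pool:
        peaks = list(pool.map(simulate_cycle, range(35)))  # 35 cycles at once

    mean_peak = statistics.mean(peaks)
    ccv = statistics.pstdev(peaks) / mean_peak   # cycle-to-cycle variability
    ```

    Because the cycles are independent once perturbed, the wall-clock time is set by a single cycle rather than by the full sequence, which is the order-of-magnitude saving the record describes.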

  9. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect of a stepped wedge trial, assuming that observations are equally correlated within clusters and that each period between sequence switches contains an equal number of observations. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the intervention condition and the last sequence remains in the control condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small, and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small.
A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
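
    The kind of design comparison performed in this record can be sketched by computing the variance of the GLS treatment-effect estimator under an exchangeable within-cluster correlation, the model family the abstract assumes. The cluster count, periods, ICC and cluster size below are illustrative, and the code is a generic sketch rather than the authors' closed-form design effect.

    ```python
    # Compare trial designs by the variance of the GLS treatment-effect
    # estimator, working with cluster-period means under an exchangeable
    # within-cluster correlation (sigma^2 = 1). design is a clusters x
    # periods 0/1 matrix of intervention status.
    import numpy as np

    def effect_variance(design, icc=0.05, m_per_period=10):
        C, T = design.shape
        var_cell = (1 - icc) / m_per_period + icc   # Var of a cluster-period mean
        cov_cell = icc                              # Cov of two means, same cluster
        V = np.full((T, T), cov_cell) + np.eye(T) * (var_cell - cov_cell)
        Vinv = np.linalg.inv(V)
        XtVX = np.zeros((T + 1, T + 1))
        for c in range(C):
            # columns: one fixed effect per period, plus treatment indicator
            X = np.hstack([np.eye(T), design[c][:, None]])
            XtVX += X.T @ Vinv @ X
        return np.linalg.inv(XtVX)[T, T]            # variance of treatment effect

    # classic stepped wedge: 4 sequences switching one period at a time
    sw = np.array([[0,1,1,1,1], [0,0,1,1,1], [0,0,0,1,1], [0,0,0,0,1]])
    # parallel cluster-randomised trial over the same periods
    crt = np.array([[1]*5, [1]*5, [0]*5, [0]*5])
    v_sw, v_crt = effect_variance(sw), effect_variance(crt)
    ```

    Evaluating `effect_variance` over candidate designs is the numerical counterpart of minimising the design effect analytically, as done in the paper.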

  10. An 8-Fold Parallel Reactor System for Combinatorial Catalysis Research

    PubMed Central

    Stoll, Norbert; Allwardt, Arne; Dingerdissen, Uwe

    2006-01-01

    Increasing economic globalization and mounting time and cost pressure on the development of new raw materials for the chemical industry as well as materials and environmental engineering constantly raise the demands on technologies to be used. Parallelization, miniaturization, and automation are the main concepts involved in increasing the rate of chemical and biological experimentation. PMID:17671621

  11. Innovative Language-Based & Object-Oriented Structured AMR Using Fortran 90 and OpenMP

    NASA Technical Reports Server (NTRS)

    Norton, C.; Balsara, D.

    1999-01-01

    Parallel adaptive mesh refinement (AMR) is an important numerical technique that leads to the efficient solution of many physical and engineering problems. In this paper, we describe how AMR programming can be performed in an object-oriented way using the modern aspects of Fortran 90 combined with the parallelization features of OpenMP.

  12. Artificial muscles on heat

    NASA Astrophysics Data System (ADS)

    McKay, Thomas G.; Shin, Dong Ki; Percy, Steven; Knight, Chris; McGarry, Scott; Anderson, Iain A.

    2014-03-01

    Many devices and processes produce low-grade waste heat, including combustion engines, electrical circuits, biological processes and industrial processes. To harvest this heat energy, thermoelectric devices based on the Seebeck effect are commonly used. However, these devices are limited in efficiency and usable voltage. This paper investigates the viability of a Stirling engine coupled to an artificial muscle energy harvester to efficiently convert heat energy into electrical energy. Testing of the prototype generator is presented; it produced 200 μW when operating at 75°C. Pathways for improved performance are discussed, including optimising the electronic control of the artificial muscle, adjusting the mechanical properties of the artificial muscle to work optimally with the remainder of the system, good sealing, and tuning the resonance of the displacer to minimise the power required to drive it.

  13. Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1996-01-01

    As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.

  14. A novel poloxamers/hyaluronic acid in situ forming hydrogel for drug delivery: rheological, mucoadhesive and in vitro release properties.

    PubMed

    Mayol, Laura; Quaglia, Fabiana; Borzacchiello, Assunta; Ambrosio, Luigi; La Rotonda, Maria I

    2008-09-01

    The influence of hyaluronic acid (HA) on the gelation properties of poloxamer blends has been studied with the aim of engineering thermosensitive and mucoadhesive polymeric platforms for drug delivery. The gelation temperature (T(gel)), viscoelastic properties and mucoadhesive force of the systems were investigated and optimised by means of rheological analyses. Poloxamer micellar diameter was evaluated by photon correlation spectroscopy (PCS). Moreover, to explore the feasibility of these platforms for drug delivery, the optimised systems were loaded with acyclovir and their release properties were studied in vitro. By formulating poloxamers/HA platforms at specific concentrations, it was possible to obtain a thermoreversible gel with a T(gel) close to body temperature. The addition of HA did not hamper the self-assembly of the poloxamers, merely shifting the gelation temperature by a few degrees Celsius. Furthermore, the presence of HA led to a strong increase in the poloxamer rheological properties, indicating possible HA interactions with micelles through secondary bonds, such as hydrogen bonds, which reinforce the gel structure. These interactions could also explain the PCS results, which show, in systems containing HA, aggregates with hydrodynamic diameters much larger than those of poloxamer micelles. Mucoadhesion experiments showed a rheological synergism between poloxamers/HA gels and mucin dispersions, which led to a change of the flow behaviour from the nearly Newtonian one of the separate solutions to a pseudoplastic one of their mixture. In vitro release experiments indicated that the optimised platform was able to prolong and control acyclovir release for more than 6 h.

  15. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad-scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients from two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients, from the same institutions and from another clinic that did not provide patients for the training phase. The automated plans were compared against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3%) the reference plans failed to respect the constraints while the model-based plans succeeded. Only in 3 cases (<0.5%) did the reference plans pass the criteria while the model-based plans failed. In 5.3% of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad-scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application in clinical practice.

  16. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

    In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured by the extent of warpage of the moulded parts, while productivity is measured by the duration of the moulding cycle. To control quality, many researchers have introduced a variety of optimisation approaches, which have been shown to enhance the quality of the moulded parts produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been shown to reduce the moulding cycle time. This paper therefore presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage of the moulded parts before and after the optimisation was applied for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed on conventional straight-drilled cooling channels and compared with Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, for the straight-drilled cooling channels melt temperature is the most significant factor contributing to warpage, which was reduced by 39.1% after optimisation; for the MGSS conformal cooling channels cooling time is the most significant factor, with warpage reduced by 38.7% after optimisation. In addition, the findings show that applying the optimisation to the conformal cooling channels offers better quality and productivity of the moulded part produced.
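
    The RSM step, fitting a quadratic response surface to sampled factor-warpage data and then searching it for a minimum, can be sketched as follows. The sample data are synthetic, and a plain grid search stands in for the Glowworm Swarm Optimisation used in the paper.

    ```python
    # RSM sketch: fit a full quadratic surface to (melt temp, cooling time)
    # -> warpage samples, then search the surface for the minimum-warpage
    # factor settings. Data are synthetic illustrations.
    import numpy as np

    # synthetic DOE samples: melt temperature (°C), cooling time (s)
    X = np.array([[220, 10], [220, 20], [240, 10],
                  [240, 20], [230, 15], [225, 12]], float)
    y = np.array([0.52, 0.44, 0.61, 0.50, 0.47, 0.49])  # warpage (mm), synthetic

    def design_matrix(X):
        t, c = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(t), t, c, t * c, t**2, c**2])

    coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

    # grid search over the factor ranges (stand-in for GSO)
    tt, cc = np.meshgrid(np.linspace(220, 240, 41), np.linspace(10, 20, 41))
    grid = np.column_stack([tt.ravel(), cc.ravel()])
    pred = design_matrix(grid) @ coef
    best = grid[np.argmin(pred)]   # settings minimising predicted warpage
    ```

    The swarm optimiser in the paper searches the same fitted surface; only the search strategy differs from the grid sweep used here.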

  17. A review of Curtiss-Wright rotary engine developments with respect to general aviation potential

    NASA Technical Reports Server (NTRS)

    Jones, C.

    1979-01-01

    Aviation-related rotary (Wankel-type) engine tests, possible growth directions and relevant developments at Curtiss-Wright are reviewed. Automotive rotary engines, including stratified-charge variants, are described, and flight test results of rotary aircraft engines are presented. The current 300 HP engine prototype shows basic durability and competitive performance potential. Recent parallel developments have separately confirmed the geometric advantages of the rotary engine for direct-injected, unthrottled stratified charge. Specific fuel consumption equal to or better than pre- or swirl-chamber diesels, low emissions and multi-fuel capability have been shown by rig tests of a similar rotary engine.

  18. Research Study Towards a MEFFV Electric Armament System

    DTIC Science & Technology

    2004-01-01

    CHPSPerf Inputs Parameter Setting Engine Power (kW) 500 per engine Generator Power (kW) 500/generator Traction Motors Power (kW) 500/side # Battery Pack...Cells in Parallel 2 # Motors in Drive Train 2 Max Power of Traction Motors 200 Minimum Engine Power (kW) 50 Optimum Engine Power (kW) 750 Stop... motors . Other options were examined for the energy storage system. Of particular interest in this regard is the use of the CPA flywheel as the load

  19. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology

    PubMed Central

    Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.

    2017-01-01

    Background We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme.
The scalability of the method is also good (tests were performed using up to 300 cores). Conclusions These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, drug scheduling. PMID:28813442
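
    The cooperative mechanism at the heart of saCeSS2, concurrent searches that periodically exchange their best solutions, can be illustrated with a toy objective. The simple random local search below stands in for the scatter search, and all numerical settings are illustrative.

    ```python
    # Cooperative parallel search sketch: several concurrent searches
    # share a best-so-far solution, so progress by one search redirects
    # the others. A random local search on a toy quadratic objective
    # stands in for the scatter search metaheuristic.
    import random
    import threading

    best_lock = threading.Lock()
    shared_best = {"x": None, "f": float("inf")}

    def objective(x):                       # toy objective (minimise)
        return sum(xi * xi for xi in x)

    def search(seed, iters=2000, dim=5):
        rng = random.Random(seed)
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        for i in range(iters):
            cand = [xi + rng.gauss(0, 0.1) for xi in x]
            if objective(cand) < objective(x):
                x = cand                    # accept improving move
            if i % 200 == 0:                # cooperation step: exchange best
                with best_lock:
                    if objective(x) < shared_best["f"]:
                        shared_best["x"], shared_best["f"] = x[:], objective(x)
                    elif shared_best["x"] is not None:
                        x = shared_best["x"][:]

    threads = [threading.Thread(target=search, args=(s,)) for s in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```

    It is this exchange step, absent from a non-cooperative parallel scheme, that the record credits with changing the systemic behaviour of the search and producing superlinear speedups.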

  20. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology.

    PubMed

    Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R

    2017-01-01

    We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme.
The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, drug scheduling.

  1. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    NASA Astrophysics Data System (ADS)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, the geometry and the process parameters must be matched to each other, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples are drawn via Finite-Element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance.
Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
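
    The meta-model step can be sketched as a kernel (Gaussian-process style) regression over pre-sampled draping results, so that new geometry parameters are evaluated without a Finite-Element run. The data, the formability measure, and the kernel length scale below are synthetic assumptions, not values from the paper.

    ```python
    # Kernel-regression meta-model sketch: interpolate pre-sampled draping
    # results (geometry parameter -> maximum shear angle) so that unseen
    # geometries can be assessed without a new FE simulation.
    import numpy as np

    def rbf(A, B, length=0.5):
        """Squared-exponential kernel between row vectors of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)

    # pre-sampled data base: normalised geometry parameter -> shear angle (deg)
    X = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])
    y = np.array([48.0, 41.0, 36.0, 33.0, 31.5])      # synthetic FE results

    K = rbf(X, X) + 1e-8 * np.eye(len(X))             # jitter for stability
    alpha = np.linalg.solve(K, y)

    def predict(x_new):
        """Cheap meta-model prediction; no FE run needed."""
        return rbf(np.atleast_2d(x_new), X) @ alpha

    est = float(predict([0.4]))  # estimate for an unseen geometry parameter
    ```

    Because each prediction costs only a kernel evaluation, robustness analyses or design optimisation loops over the meta-model remain inexpensive, which is the point the record makes.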

  2. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process: genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
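
    The granularisation idea, searching over partition boundaries to maximise classification accuracy, can be sketched with hill climbing, the simplest of the three techniques compared. The data set and the majority-vote rule below are synthetic stand-ins for the antenatal survey data and the rough set rules.

    ```python
    # Hill climbing over partition boundaries of one continuous attribute,
    # maximising the accuracy of a majority-vote rule per partition.
    # Data are synthetic: labels follow a hidden threshold at 0.55.
    import random

    random.seed(1)
    values = [random.random() for _ in range(200)]
    labels = [1 if v > 0.55 else 0 for v in values]

    def accuracy(cuts):
        bounds = sorted(cuts)
        bins = {}
        for v, l in zip(values, labels):
            bins.setdefault(sum(v > c for c in bounds), []).append(l)
        correct = 0
        for v, l in zip(values, labels):
            b = sum(v > c for c in bounds)
            majority = round(sum(bins[b]) / len(bins[b]))  # majority vote
            correct += (majority == l)
        return correct / len(values)

    cuts = [0.25, 0.75]                     # initial equal-width-style cuts
    acc = accuracy(cuts)
    for _ in range(500):                    # hill climbing on the boundaries
        i = random.randrange(len(cuts))
        cand = cuts[:]
        cand[i] = min(1.0, max(0.0, cand[i] + random.gauss(0, 0.05)))
        if accuracy(cand) >= acc:
            cuts, acc = cand, accuracy(cand)
    ```

    GA and SA differ only in how candidate boundary sets are generated and accepted; the fitness function, classification accuracy over the partitions, is the same.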

  3. Parallel Unsteady Turbopump Simulations for Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Chan, William

    2000-01-01

    This paper reports the progress being made towards a complete turbopump simulation capability for liquid rocket engines. The Space Shuttle Main Engine (SSME) turbopump impeller is used as a test case for the performance evaluation of the MPI and hybrid MPI/OpenMP versions of the INS3D code. A computational model of a turbopump has then been developed for the shuttle upgrade program. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. The time-accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for the SSME turbopump, which contains 136 zones with 35 million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from time-accurate simulations with moving boundary capability, and the performance of the parallel versions of the code, will be presented in the final paper.

  4. Injector Design Tool Improvements: User's manual for FDNS V.4.5

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Wei, Hong; Liu, Jiwen

    1998-01-01

    The major emphasis of the current effort is the development and validation of an efficient parallel-machine computational model, based on the FDNS code, to analyze the fluid dynamics of a wide variety of liquid jet configurations for general liquid rocket engine injection system applications. This model includes physical models for droplet atomization, breakup/coalescence, evaporation, turbulence mixing and gas-phase combustion. Benchmark validation cases for liquid rocket engine chamber combustion conditions will be performed for model validation purposes. Test cases may include shear coaxial, swirl coaxial and impinging injection systems with combinations of LOX/H2 or LOX/RP-1 propellant injector elements used in rocket engine designs. As a final goal of this project, a well-tested parallel CFD performance methodology, together with a user's operation description, will be reported in a final technical report at the end of the proposed research effort.

  5. Parallel and Divergent Evolutionary Solutions for the Optimization of an Engineered Central Metabolism in Methylobacterium extorquens AM1

    PubMed Central

    Carroll, Sean Michael; Chubiz, Lon M.; Agashe, Deepa; Marx, Christopher J.

    2015-01-01

    Bioengineering holds great promise to provide fast and efficient biocatalysts for methanol-based biotechnology, but necessitates proven methods to optimize physiology in engineered strains. Here, we highlight experimental evolution as an effective means for optimizing an engineered Methylobacterium extorquens AM1. Replacement of the native formaldehyde oxidation pathway with a functional analog substantially decreased growth in an engineered Methylobacterium, but growth rapidly recovered after six hundred generations of evolution on methanol. We used whole-genome sequencing to identify the basis of adaptation in eight replicate evolved strains, and examined genomic changes in light of other growth and physiological data. We observed great variety in the numbers and types of mutations that occurred, including instances of parallel mutations at targets that may have been “rationalized” by the bioengineer, plus other “illogical” mutations that demonstrate the ability of evolution to expose unforeseen optimization solutions. Notably, we investigated mutations to RNA polymerase, which provided a massive growth benefit but are linked to highly aberrant transcriptional profiles. Overall, we highlight the power of experimental evolution to present genetic and physiological solutions for strain optimization, particularly in systems where the challenges of engineering are too many or too difficult to overcome via traditional engineering methods. PMID:27682084

  6. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method is tested for optimising the geometry of a sealing piston ring. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on the cylinder simply by being bent to fit the cylinder. A method for FEM analysis of an arbitrary piston ring geometry is implemented in the ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to the piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
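    The simulated annealing loop used in such an optimisation can be sketched as follows. The toy quadratic objective stands in for the FEM-based pressure-deviation measure, and the step size, cooling rate, and iteration count are illustrative assumptions, not the paper's settings:

```python
import math
import random

def objective(params):
    """Toy stand-in for the FEM pressure-deviation objective: squared
    deviation of each shape parameter from the demanded value 1.0."""
    return sum((p - 1.0) ** 2 for p in params)

def simulated_annealing(x0, T0=1.0, cooling=0.995, iters=2000, seed=1):
    rng = random.Random(seed)
    x, fx, T = list(x0), objective(x0), T0
    for _ in range(iters):
        cand = [p + rng.gauss(0, 0.1) for p in x]
        fc = objective(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-dF/T), which shrinks as T cools.
        if fc < fx or rng.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
        T *= cooling
    return x, fx

x_opt, f_opt = simulated_annealing([0.0, 0.0, 0.0])
```

    The occasional acceptance of worse moves at high temperature is what lets the method escape local minima; a real piston-ring run would replace `objective` with a full FEM evaluation, which is why each iteration is expensive.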

  7. Differential Draining of Parallel-Fed Propellant Tanks in Morpheus and Apollo Flight

    NASA Technical Reports Server (NTRS)

    Hurlbert, Eric; Guardado, Hector; Hernandez, Humberto; Desai, Pooja

    2015-01-01

    Parallel-fed propellant tanks are an advantageous configuration for many spacecraft. Parallel-fed tanks allow the center of gravity (cg) to be maintained over the engine(s), as opposed to serial-fed propellant tanks, which cause a cg shift as propellant is drained from one tank first and then the other. Parallel-fed tanks also allow for tank isolation if that is needed. Parallel tanks and feed systems have been used in several past vehicles, including the Apollo Lunar Module. The design of the feed system connecting the parallel tanks is critical to maintaining balance in the propellant tanks. The design must account for and minimize the effect of manufacturing variations that could cause delta-p or mass flowrate differences, which would lead to propellant imbalance. Other sources of differential draining are also discussed. Fortunately, physics provides some self-correcting behaviors that tend to equalize any initial imbalance. Whether active control of the propellant level in each tank is required or can be avoided is also an important question to answer. In order to provide flight data on parallel-fed tanks and differential draining for cryogenic propellants (as well as any other fluid), a vertical test bed (flying lander) for terrestrial use was employed. The Morpheus vertical test bed is a parallel-fed propellant tank system that uses a passive design to keep the propellant tanks balanced. The system is operated in blow-down. The Morpheus vehicle was instrumented with a capacitance level sensor in each propellant tank in order to measure the draining of propellants over 34 tethered and 12 free flights. Morpheus did experience an approximately 20 lbm imbalance in one pair of tanks; the cause of this imbalance is discussed. This paper discusses the analysis, design, flight simulation, vehicle dynamic modeling, and flight test of the Morpheus parallel-fed propellant system. The Apollo LEM data is also examined in this summary report of the flight data.

  8. A joint numerical and experimental study of the jet of an aircraft engine installation with advanced techniques

    NASA Astrophysics Data System (ADS)

    Brunet, V.; Molton, P.; Bézard, H.; Deck, S.; Jacquin, L.

    2012-01-01

    This paper describes the results obtained during the European Union JEDI (JEt Development Investigations) project carried out in cooperation between ONERA and Airbus. The aim of these studies was first to acquire a complete database of a modern-type engine jet installation set under a wall-to-wall swept wing in various transonic flow conditions. Interactions between the engine jet, the pylon, and the wing were studied thanks to advanced measurement techniques. In parallel, accurate Reynolds-averaged Navier-Stokes (RANS) simulations were carried out, from simple ones with the Spalart-Allmaras model to more complex ones like the DRSM-SSG (Differential Reynolds Stress Model of Speziale-Sarkar-Gatski) turbulence model. In the end, Zonal Detached Eddy Simulations (Z-DES) were also performed to compare different simulation techniques. All numerical results are accurately validated thanks to the experimental database acquired in parallel. This complete and complex study of a modern civil aircraft engine installation yielded many improvements in understanding and in simulation methods. Furthermore, a setup for engine jet installation studies has been validated for possible future work in the S3Ch transonic research wind tunnel. The main conclusions are summed up in this paper.

  9. A Dynamic Finite Element Method for Simulating the Physics of Fault Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library, a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt to the Finite-Element method the 2D model for simulating the dynamics of parallel fault systems described in that work. The approach uses a frictional relation along faults that is slip- and slip-rate-dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single- and multi-fault simulation examples are presented.
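    Explicit Verlet-type time integration of an elastic system can be illustrated on a single degree of freedom; the undamped oscillator below is a stand-in for the assembled FE system, and the frequency and step size are invented for the example:

```python
import math

def velocity_verlet(x0, v0, omega2, dt, steps):
    """Explicit velocity-Verlet integration of x'' = -omega2 * x,
    a one-DOF stand-in for the assembled elastodynamic system."""
    x, v = x0, v0
    a = -omega2 * x
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt    # position update
        a_new = -omega2 * x                # force evaluation at new position
        v += 0.5 * (a + a_new) * dt        # velocity update (averaged accel.)
        a = a_new
    return x, v

# Integrate to t = 10; the exact solution is x(t) = cos(t)
x, v = velocity_verlet(x0=1.0, v0=0.0, omega2=1.0, dt=0.01, steps=1000)
```

    The scheme is second-order accurate and nearly energy-conserving, which is why explicit Verlet-style stepping is popular for wave-dominated elastodynamics, subject to the usual stability limit on the time step.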

  10. A distributed version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.; Curlett, Brian P.

    1993-01-01

    Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
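    The master/worker pattern that PVM enables, in which independent engine cases are farmed out to workers and gathered back, can be sketched with a thread pool standing in for networked workstations; the case function, its parameters, and the mock result are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def engine_case(params):
    """Stand-in for one independent NEPP engine-cycle evaluation."""
    mach, altitude = params
    return 1000.0 / (1.0 + mach)          # mock thrust figure

def run_cases_in_parallel(cases, workers=4):
    """Farm independent cases out to a pool of workers and gather results
    in submission order, mirroring a PVM-style master/worker split."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(engine_case, cases))

cases = [(m / 10, 10000.0) for m in range(8)]
results = run_cases_in_parallel(cases)
```

    Because the cases are independent, granularity is the main design question, exactly as the abstract notes: too fine and communication dominates, too coarse and load balancing suffers.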

  11. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.
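    A classic example of an equation solver that maps naturally onto distributed-memory machines is the Jacobi iteration, since every unknown is updated independently within a sweep. The sketch below is illustrative; production solvers on the machines named above used far more sophisticated direct and Krylov methods:

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Jacobi iteration: each 'processor' could own a block of unknowns
    and update it independently, which is why the method parallelises."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D    # every component update is independent
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # diagonally dominant -> converges
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

    On a real distributed-memory machine, the only communication per sweep is the exchange of neighbouring components of `x`, giving the favourable computation-to-communication ratio the abstract alludes to.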

  12. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    NASA Astrophysics Data System (ADS)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  13. ABLE project: Development of an advanced lead-acid storage system for autonomous PV installations

    NASA Astrophysics Data System (ADS)

    Lemaire-Potteau, Elisabeth; Vallvé, Xavier; Pavlov, Detchko; Papazov, G.; Borg, Nico Van der; Sarrau, Jean-François

    In the advanced battery for low-cost renewable energy (ABLE) project, the partners have developed an advanced storage system for small and medium-size PV systems. It is composed of an innovative valve-regulated lead-acid (VRLA) battery, optimised for reliability and manufacturing cost, and an integrated regulator for optimal battery management and protection against fraudulent use. The ABLE battery's performance is comparable to that of flooded tubular batteries, which are the reference in medium-size PV systems. The ABLE regulator has several innovative features regarding energy management and modular series/parallel association. The storage system has been validated by indoor, outdoor and field tests, and it is expected that this concept could be a major improvement for large-scale implementation of PV within the framework of national rural electrification schemes.

  14. Petascale turbulence simulation using a highly parallel fast multipole method on GPUs

    NASA Astrophysics Data System (ADS)

    Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji

    2013-03-01

    This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as the numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.

  15. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to obtaining the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  16. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    NASA Astrophysics Data System (ADS)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are substantially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area, and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect in product development, distributing the optimisation process to make effective use of idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.

  17. Scenario-based, closed-loop model predictive control with application to emergency vehicle scheduling

    NASA Astrophysics Data System (ADS)

    Goodwin, Graham C.; Medioli, Adrian M.

    2013-08-01

    Model predictive control has been a major success story in process control. More recently, the methodology has been used in other contexts, including automotive engine control, power electronics and telecommunications. Most applications focus on set-point tracking and use single-sequence optimisation. Here we consider an alternative class of problems, motivated by the scheduling of emergency vehicles, in which disturbances are the dominant feature. We develop a novel closed-loop model predictive control strategy aimed at this class of problems. We motivate, and illustrate, the ideas via the problem of fluid deployment of ambulance resources.

  18. List based prefetch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyle, Peter; Christ, Norman; Gara, Alan

    A list prefetch engine improves a performance of a parallel computing system. The list prefetch engine receives a current cache miss address. The list prefetch engine evaluates whether the current cache miss address is valid. If the current cache miss address is valid, the list prefetch engine compares the current cache miss address and a list address. A list address represents an address in a list. A list describes an arbitrary sequence of prior cache miss addresses. The prefetch engine prefetches data according to the list, if there is a match between the current cache miss address and the list address.

  19. Injector element characterization methodology

    NASA Technical Reports Server (NTRS)

    Cox, George B., Jr.

    1988-01-01

    Characterization of liquid rocket engine injector elements is an important part of the development process for rocket engine combustion devices. Modern nonintrusive instrumentation for flow velocity and spray droplet size measurement, and automated, computer-controlled test facilities allow rapid, low-cost evaluation of injector element performance and behavior. Application of these methods in rocket engine development, paralleling their use in gas turbine engine development, will reduce rocket engine development cost and risk. The Alternate Turbopump (ATP) Hot Gas Systems (HGS) preburner injector elements were characterized using such methods, and the methodology and some of the results obtained will be shown.

  20. List based prefetch

    DOEpatents

    Boyle, Peter [Edinburgh, GB]; Christ, Norman [Irvington, NY]; Gara, Alan [Yorktown Heights, NY]; Kim, Changhoan [San Jose, CA]; Mawhinney, Robert [New York, NY]; Ohmacht, Martin [Yorktown Heights, NY]; Sugavanam, Krishnan [Yorktown Heights, NY]

    2012-08-28

    A list prefetch engine improves a performance of a parallel computing system. The list prefetch engine receives a current cache miss address. The list prefetch engine evaluates whether the current cache miss address is valid. If the current cache miss address is valid, the list prefetch engine compares the current cache miss address and a list address. A list address represents an address in a list. A list describes an arbitrary sequence of prior cache miss addresses. The prefetch engine prefetches data according to the list, if there is a match between the current cache miss address and the list address.
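    The matching logic described in the abstract can be modelled conceptually as follows. This is an illustrative software sketch of a hardware mechanism; the addresses, the prefetch depth of two, and the class interface are all invented for the example:

```python
class ListPrefetchEngine:
    """Conceptual model of list prefetching: a recorded list of prior
    cache-miss addresses drives prefetches on the next traversal."""

    def __init__(self, miss_list):
        self.miss_list = miss_list   # prior cache-miss addresses, in order
        self.cursor = 0              # position of the next expected match

    def on_cache_miss(self, address):
        """Return addresses to prefetch if the miss matches the list."""
        if address is None:          # invalid miss address: do nothing
            return []
        if (self.cursor < len(self.miss_list)
                and self.miss_list[self.cursor] == address):
            self.cursor += 1
            # Prefetch the next few addresses recorded after the match
            return self.miss_list[self.cursor:self.cursor + 2]
        return []

engine = ListPrefetchEngine([0x100, 0x180, 0x240, 0x300])
prefetched = engine.on_cache_miss(0x100)   # matches the head of the list
```

    The value of the scheme is that the list can encode an arbitrary miss sequence, unlike stride prefetchers, which only help for regular access patterns.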

  1. Extending the boundaries of reverse engineering

    NASA Astrophysics Data System (ADS)

    Lawrie, Chris

    2002-04-01

    In today's marketplace the potential of Reverse Engineering as a time-compression tool is commonly lost under its traditional definition. The term Reverse Engineering was coined at the advent of CMM machines and 3D CAD systems to describe the process of fitting surfaces to captured point data. Since these early beginnings, downstream hardware scanning and digitising systems have evolved in parallel with an upstream demand, greatly increasing the potential of a point cloud data set within engineering design and manufacturing processes. The paper will discuss the issues surrounding Reverse Engineering at the turn of the millennium.

  2. Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malony, Allen D; Shende, Sameer

    The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System, and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.

  3. Applications of Emerging Parallel Optical Link Technology to High Energy Physics Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chramowicz, J.; Kwan, S.; Prosser, A.

    2011-09-01

    Modern particle detectors depend upon optical fiber links to deliver event data to upstream trigger and data processing systems. Future detector systems can benefit from the development of dense arrangements of high speed optical links emerging from the telecommunications and storage area network market segments. These links support data transfers in each direction at rates up to 120 Gbps in packages that minimize or even eliminate edge connector requirements. Emerging products include a class of devices known as optical engines, which permit assembly of the optical transceivers in close proximity to the electrical interfaces of ASICs and FPGAs which handle the data in parallel electrical format. Such assemblies will reduce required printed circuit board area and minimize electromagnetic interference and susceptibility. We will present test results of some of these parallel components and report on the development of pluggable FPGA Mezzanine Cards equipped with optical engines to provide to collaborators on the Versatile Link Common Project for the HL-LHC at CERN.

  4. Engineered plant biomass particles coated with biological agents

    DOEpatents

    Dooley, James H.; Lanning, David N.

    2014-06-24

    Plant biomass particles coated with a biological agent such as a bacterium or seed, characterized by a length dimension (L) aligned substantially parallel to a grain direction and defining a substantially uniform distance along the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L. In particular, the L×H dimensions define a pair of substantially parallel side surfaces characterized by substantially intact longitudinally arrayed fibers, the W×H dimensions define a pair of substantially parallel end surfaces characterized by crosscut fibers and end checking between fibers, and the L×W dimensions define a pair of substantially parallel top and bottom surfaces.

  5. Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui

    2017-05-01

    The economic costs together with the filter efficiency are taken as targets to optimize the parameters of the passive filter. Furthermore, a method combining a pseudo-parallel genetic algorithm with an adaptive genetic algorithm is adopted in this paper. In the early stages, the pseudo-parallel genetic algorithm is introduced to increase the population diversity, and the adaptive genetic algorithm is used in the late stages to reduce the workload. At the same time, the migration rate of the pseudo-parallel genetic algorithm is improved to change adaptively with the population diversity. Simulation results show that the filter designed by the proposed method has a better filtering effect at a lower economic cost, and can be used in engineering.
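    A pseudo-parallel (island) genetic algorithm with periodic migration can be sketched as follows; the one-dimensional toy objective, population sizes, mutation scale, and migration schedule are illustrative assumptions rather than the paper's filter-design setup:

```python
import random

def fitness(x):
    """Toy cost to minimise (stand-in for the combined filter objective)."""
    return (x - 3.0) ** 2

def evolve(pop, rng, mut=0.3):
    """One generation: truncation selection plus Gaussian mutation."""
    parents = sorted(pop, key=fitness)[: len(pop) // 2]
    children = [p + rng.gauss(0, mut) for p in parents]
    return parents + children

def island_ga(n_islands=4, pop_size=10, gens=60, migrate_every=10, seed=2):
    rng = random.Random(seed)
    islands = [[rng.uniform(-10, 10) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for g in range(gens):
        islands = [evolve(pop, rng) for pop in islands]
        if g % migrate_every == 0:
            # Migration: each island receives its neighbour's champion,
            # restoring the diversity lost to local convergence.
            champs = [min(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop[-1] = champs[(i - 1) % n_islands]
    champs = [min(pop, key=fitness) for pop in islands]
    return min(champs, key=fitness)

best = island_ga()
```

    Running several sub-populations with occasional migration is what the "pseudo-parallel" scheme adds over a single-population GA: the islands explore independently, while migration spreads good solutions between them.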

  6. Overview of the NCC

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey

    2001-01-01

    A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between the then NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designers' requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes a grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, turbulence module, and the chemistry module have been extensively validated, and their parallel performance on large-scale parallel systems has been evaluated and optimized. However, the scalar PDF module and the spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. Current effort has been focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.

  7. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.

  8. A design concept of parallel elasticity extracted from biological muscles for engineered actuators.

    PubMed

    Chen, Jie; Jin, Hongzhe; Iida, Fumiya; Zhao, Jie

    2016-08-23

    Series elastic actuation that takes inspiration from biological muscle-tendon units has been extensively studied and used to address the challenges (e.g. energy efficiency, robustness) existing in purely stiff robots. However, there also exists another form of passive property in biological actuation, parallel elasticity within muscles themselves, and our knowledge of it is limited: for example, there is still no general design strategy for the elasticity profile. When we look at nature, on the other hand, there seems to be universal agreement in biological systems: experimental evidence has suggested that a concave-upward elasticity behaviour is exhibited within the muscles of animals. Seeking to draw possible design clues for elasticity in parallel with actuators, we use a simplified joint model to investigate the mechanisms behind this biologically universal preference of muscles. Actuation of the model is identified from general biological joints and further reduced with a specific focus on muscle elasticity aspects, for the sake of easy implementation. By examining various elasticity scenarios, one without elasticity and three with elasticity of different profiles, we find that parallel elasticity generally exerts contradictory influences on energy efficiency and disturbance rejection, due to the mechanical impedance shift thus caused. The trade-off analysis between them also reveals that concave parallel elasticity is able to achieve a more advantageous balance than linear and convex ones. It is expected that the results could contribute to our further understanding of muscle elasticity and provide a theoretical guideline on how to properly design parallel elasticity behaviours for engineering systems such as artificial actuators and robotic joints.

  9. Design modification and optimisation of the perfusion system of a tri-axial bioreactor for tissue engineering.

    PubMed

    Hussein, Husnah; Williams, David J; Liu, Yang

    2015-07-01

    A systematic design of experiments (DOE) approach was used to optimize the perfusion process of a tri-axial bioreactor designed for translational tissue engineering exploiting mechanical stimuli and mechanotransduction. Four controllable design parameters affecting the perfusion process were identified in a cause-effect diagram as potential improvement opportunities. A screening process was used to separate out the factors that have the largest impact from the insignificant ones. DOE was employed to find the settings of the platen design, return tubing configuration and the elevation difference that minimise the load on the pump and variation in the perfusion process and improve the controllability of the perfusion pressures within the prescribed limits. DOE was very effective for gaining increased knowledge of the perfusion process and optimizing the process for improved functionality. It is hypothesized that the optimized perfusion system will result in improved biological performance and consistency.

  10. Implementation of the anaerobic digestion model (ADM1) in the PHREEQC chemistry engine.

    PubMed

    Huber, Patrick; Neyret, Christophe; Fourest, Eric

    2017-09-01

    Anaerobic digestion is a state-of-the-art technology for treating sludge and effluents from various industries. Modelling and optimisation of digestion operations can be advantageously performed using the anaerobic digestion model (ADM1) from the International Water Association. The ADM1, however, lacks a proper physico-chemical framework, which makes it difficult to handle wastewater of complex ionic composition and supersaturation phenomena. In this work, we present a direct implementation of the ADM1 within the PHREEQC chemistry engine. This makes it possible to handle ionic-strength effects and ion pairing; multiple mineral precipitation phenomena can thus be handled while solving the ADM1. All these features can be accessed with very little programming effort, while retaining the full power and flexibility of PHREEQC. The distributed PHREEQC code can easily be interfaced with process simulation software for future plant-wide simulation of both wastewater and sludge treatment.

  11. Finding and Exploring Health Information with a Slider-Based User Interface.

    PubMed

    Pang, Patrick Cheong-Iao; Verspoor, Karin; Pearce, Jon; Chang, Shanton

    2016-01-01

    Despite the fact that search engines are the primary channel to access online health information, there are better ways to find and explore health information on the web. Search engines are prone to problems when they are used to find health information. For instance, users have difficulties in expressing health scenarios with appropriate search keywords, search results are not optimised for medical queries, and the search process does not account for users' literacy levels and reading preferences. In this paper, we describe our approach to addressing these problems by introducing a novel design using a slider-based user interface for discovering health information without the need for precise search keywords. The user evaluation suggests that the interface is easy to use and able to assist users in the process of discovering new information. This study demonstrates the potential value of adopting slider controls in the user interface of health websites for navigation and information discovery.

  12. Feasibility study of a pressure-fed engine for a water recoverable space shuttle booster. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The activities leading to a tentative concept selection for a pressure-fed engine and propulsion support are outlined. Multiple engine concepts were evaluated through parallel engine major component and system analyses. Booster vehicle coordination, tradeoffs, and technology/development aspects are included. The concept selected for further evaluation has a regeneratively cooled combustion chamber and nozzle in conjunction with an impinging element injector. The propellants chosen are LOX/RP-1, and combustion-stabilizing baffles are used to assure dynamic combustion stability.

  13. Specification and Analysis of Parallel Machine Architecture

    DTIC Science & Technology

    1990-03-17

    Parallel Machine Architecture C.V. Ramamoorthy Computer Science Division Dept. of Electrical Engineering and Computer Science University of California...capacity. (4) Adaptive: the overhead in resolution of deadlocks, etc. should be in proportion to their frequency. (5) Avoid rollbacks: rollbacks can be...snapshots of system state graphically at a rate proportional to simulation time. Some examples are as follows: (1) When the simulation clock of

  14. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of an ongoing research effort to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques for coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area is analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours; and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the comparison of results, we have used check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps find an optimised and accurate model for coverage prediction.
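As a concrete illustration of the first interpolation family, here is a minimal, dependency-free IDW predictor. The function name and data layout are our own; a real study would use a geostatistics package, which would also supply the kriging variants.

```python
def idw_predict(samples, query, power=2.0, neighbours=8):
    """Inverse Distance Weighting: estimate the signal value at `query`
    from the `neighbours` nearest drive-test samples, weighting each
    sample by 1 / distance**power."""
    # samples: list of ((x, y), value) pairs from drive testing
    ranked = sorted(samples,
                    key=lambda s: (s[0][0] - query[0]) ** 2
                                + (s[0][1] - query[1]) ** 2)
    num = den = 0.0
    for (x, y), v in ranked[:neighbours]:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v                       # query coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)      # weight = 1 / d**power
        num += w * v
        den += w
    return num / den
```

Raising `power` localises the estimate (distant samples count for less), while increasing `neighbours` smooths it, which is exactly the parameter trade-off the study compares.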

  15. Vibration isolation design for periodically stiffened shells by the wave finite element method

    NASA Astrophysics Data System (ADS)

    Hong, Jie; He, Xueqing; Zhang, Dayi; Zhang, Bing; Ma, Yanhong

    2018-04-01

    Periodically stiffened shell structures are widely used due to their excellent specific strength, in particular for aeronautical and astronautical components. This paper presents an improved Wave Finite Element Method (Wave FEM) that can be employed to predict the band-gap characteristics of stiffened shell structures efficiently. An aero-engine casing, a typical periodically stiffened shell structure, was employed to verify the validity and efficiency of the Wave FEM. Good agreement has been found between the Wave FEM and the classical FEM for different boundary conditions. An effective wave selection method based on the Wave FEM has thus been put forward to filter the radial modes of a shell structure. Furthermore, an optimisation strategy combining the Wave FEM with a genetic algorithm is presented for periodically stiffened shell structures. The optimal out-of-plane band gap and the mass of the whole structure can be achieved by the optimisation strategy under an aerodynamic load. Results also indicate that the geometric parameters of the stiffeners can be selected so that the out-of-plane vibration attenuates significantly in the frequency band of interest. This study can provide valuable references for designing band gaps for vibration isolation.

  16. International online support to process optimisation and operation decisions.

    PubMed

    Onnerth, T B; Eriksson, J

    2002-01-01

    The information level at technical facilities has developed from almost nothing 30-40 years ago to advanced IT (Information Technology) systems based on both chemical and mechanical on-line sensors for process and equipment. Still, the basic requirement is to get the right data at the right time for the decision to be made. Today a large amount of operational data is available at almost any European wastewater treatment plant, from the laboratory and from SCADA. The difficult part is to determine which data to keep, which to use in calculations, and how and where to make data available. With the STARcontrol system it is possible to select only process-relevant data for on-line control and reporting at the engineering level, to optimise operation. Furthermore, the use of IT makes international communication possible, with full access to all data at an individual plant. In this way, expert supervision can be both local, in the local language (e.g. Polish), and at the same time highly professional, with Danish experts advising on Danish processes in Poland or Sweden, where some of the 12 STARcontrol systems are running.

  17. Aircraft noise prediction

    NASA Astrophysics Data System (ADS)

    Filippone, Antonio

    2014-07-01

    This contribution addresses the state of the art in the field of aircraft noise prediction, simulation and minimisation. The point of view taken in this context is that of comprehensive models that couple the various aircraft systems with the acoustic sources, the propagation and the flight trajectories. After an exhaustive review of the present predictive technologies in the relevant fields (airframe, propulsion, propagation, aircraft operations, trajectory optimisation), the paper addresses items for further research and development. Examples are shown for several airplanes, including the Airbus A319-100 (CFM engines), the Bombardier Dash8-Q400 (PW150 engines, Dowty R408 propellers) and the Boeing B737-800 (CFM engines). Predictions are done with the flight mechanics code FLIGHT. The transfer function between flight mechanics and the noise prediction is discussed in some detail, along with the numerical procedures for validation and verification. Some code-to-code comparisons are shown. It is contended that the field of aircraft noise prediction has not yet reached a sufficient level of maturity. In particular, some parametric effects cannot be investigated, issues of accuracy are not currently addressed, and validation standards are still lacking.

  18. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used to analyse the warpage. A design of experiments (DOE) for Response Surface Methodology (RSM) was constructed, and Particle Swarm Optimisation (PSO) was applied to the regression equation obtained from RSM. The optimisation yields the processing parameters giving minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters; the selection was based on the factors previous researchers identified as most significantly affecting warpage. The results show that warpage was improved by 28.16% with RSM and 28.17% with PSO; the improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
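The PSO step can be sketched independently of Moldflow. The following minimal particle swarm optimiser is illustrative, not the authors' code: it minimises any response function of a parameter vector, such as an RSM regression of warpage against the five processing parameters, within per-parameter bounds.

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.4, c2=1.4, seed=1):
    """Minimal particle swarm optimisation: particles track their personal
    best and are pulled toward it and toward the global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # move, then clamp to the parameter's feasible range
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In the study's setting, `f` would be the fitted RSM polynomial and `bounds` the experimental ranges of the five moulding parameters.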

  19. Hybrid-Vehicle Transmission System

    NASA Technical Reports Server (NTRS)

    Lupo, G.; Dotti, G.

    1985-01-01

    Continuously-variable transmission system for hybrid vehicles couples internal-combustion engine and electric motor section, either individually or in parallel, to power vehicle wheels during steering and braking.

  20. Software Engineering for Scientific Computer Simulations

    NASA Astrophysics Data System (ADS)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  1. Ti/Al Design/Cost Trade-Off Analysis

    DTIC Science & Technology

    1978-10-01

    evaluate the application of selected titanium aluminide alloys to both dynamic and static components of aircraft gas turbine engines. Mr. D. O. Nash...the development of advanced aircraft gas turbine engines, a continuing objective has been to develop lightweight, high-performance designs. A parallel... engines for the design/cost trade-off study are as follows: Dynamic Components: F101 Fourth-Stage Compressor Blade; J101 Low Pressure Turbine Blade

  2. Collaborative engineering and design management for the Hobby-Eberly Telescope tracker upgrade

    NASA Astrophysics Data System (ADS)

    Mollison, Nicholas T.; Hayes, Richard J.; Good, John M.; Booth, John A.; Savage, Richard D.; Jackson, John R.; Rafal, Marc D.; Beno, Joseph H.

    2010-07-01

    The engineering and design of systems as complex as the Hobby-Eberly Telescope's new tracker require that multiple tasks be executed in parallel, overlapping efforts. When the design of individual subsystems is distributed among multiple organizations, teams, and individuals, challenges can arise with respect to managing design productivity and coordinating successful collaborative exchanges. This paper focuses on design management issues and current practices for the tracker design portion of the Hobby-Eberly Telescope Wide Field Upgrade project. The scope of the tracker upgrade requires engineering contributions and input from numerous fields including optics, instrumentation, electromechanics, software controls engineering, and site operations. Successful system-level integration of tracker subsystems and interfaces is critical to the telescope's ultimate performance in astronomical observation. Software and process controls for design information and workflow management have been implemented to assist the collaborative transfer of tracker design data. The tracker system architecture and the selection of subsystem interfaces have also proven to be determining factors in design task formulation and team communication needs. Interface controls and requirements change controls are discussed, and critical team interactions are recounted (a group-participation Failure Modes and Effects Analysis [FMEA] is one of special interest). This paper will be of interest to engineers, designers, and managers engaging in multi-disciplinary and parallel engineering projects that require coordination among multiple individuals, teams, and organizations.

  3. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes at the core of many distributed storage systems and found, for example, in grid services can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
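The hash-tree idea behind such parallel modes can be illustrated with a small Merkle-tree sketch, using SHA-256 from the standard `hashlib`; the paper's prototype uses Keccak/SHA-3, which would slot in identically. The leaf hashes are mutually independent, which is what makes the mode parallelisable, in contrast to the sequential chaining of a Merkle-Damgard engine.

```python
import hashlib

def merkle_root(chunks):
    """Hash-tree (Merkle) digest of a byte stream split into chunks.
    The leaf loop is embarrassingly parallel; this sketch runs it
    sequentially for clarity."""
    level = [hashlib.sha256(c).digest() for c in chunks]   # leaf hashes
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]                        # root digest
```

Because interior nodes depend only on their two children, the leaf hashes (and each tree level) can be farmed out to worker threads or grid nodes, which is the efficiency argument the paper makes.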

  4. Engineered plant biomass particles coated with bioactive agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooley, James H; Lanning, David N

    Plant biomass particles coated with a bioactive agent such as a fertilizer or pesticide, characterized by a length dimension (L) aligned substantially parallel to a grain direction and defining a substantially uniform distance along the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L. In particular, the L×H dimensions define a pair of substantially parallel side surfaces characterized by substantially intact longitudinally arrayed fibers, the W×H dimensions define a pair of substantially parallel end surfaces characterized by crosscut fibers and end checking between fibers, and the L×W dimensions define a pair of substantially parallel top and bottom surfaces.

  5. Symposium on Parallel Computational Methods for Large-scale Structural Analysis and Design, 2nd, Norfolk, VA, US

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)

    1993-01-01

    Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.

  6. Program For Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    The user does not have to add any special logic to aid in synchronization. The Time Warp Operating System (TWOS) computer program is a special-purpose operating system designed to support parallel discrete-event simulation. It is a complete implementation of the Time Warp mechanism, and supports only simulations and other computations designed for virtual time. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface-compatible with TWOS. TWOS and TWSIM are written in, and support simulations in, the C programming language.

  7. Genome-wide mapping of mutations at single-nucleotide resolution for protein, metabolic and genome engineering.

    PubMed

    Garst, Andrew D; Bassalo, Marcelo C; Pines, Gur; Lynch, Sean A; Halweg-Edwards, Andrea L; Liu, Rongming; Liang, Liya; Wang, Zhiwen; Zeitoun, Ramsey; Alexander, William G; Gill, Ryan T

    2017-01-01

    Improvements in DNA synthesis and sequencing have underpinned comprehensive assessment of gene function in bacteria and eukaryotes. Genome-wide analyses require high-throughput methods to generate mutations and analyze their phenotypes, but approaches to date have been unable to efficiently link the effects of mutations in coding regions or promoter elements in a highly parallel fashion. We report that CRISPR-Cas9 gene editing in combination with massively parallel oligomer synthesis can enable trackable editing on a genome-wide scale. Our method, CRISPR-enabled trackable genome engineering (CREATE), links each guide RNA to homologous repair cassettes that both edit loci and function as barcodes to track genotype-phenotype relationships. We apply CREATE to site saturation mutagenesis for protein engineering, reconstruction of adaptive laboratory evolution experiments, and identification of stress tolerance and antibiotic resistance genes in bacteria. We provide preliminary evidence that CREATE will work in yeast. We also provide a webtool to design multiplex CREATE libraries.

  8. Computation of Coupled Thermal-Fluid Problems in Distributed Memory Environment

    NASA Technical Reports Server (NTRS)

    Wei, H.; Shang, H. M.; Chen, Y. S.

    2001-01-01

    Thermal-fluid coupling problems are very important in aerospace and engineering applications. Instead of analyzing heat transfer and fluid flow separately, this study merged two well-accepted engineering solution methods, SINDA for thermal analysis and FDNS for fluid flow simulation, into a unified multi-disciplinary thermal-fluid prediction method. A fully conservative patched-grid interface algorithm for arbitrary two-dimensional and three-dimensional geometry has been developed. State-of-the-art parallel computing concepts were used to couple SINDA and FDNS, with boundary conditions communicated through PVM (Parallel Virtual Machine) libraries. Therefore, the thermal analysis performed by SINDA and the fluid flow calculated by FDNS are fully coupled to obtain steady-state or transient solutions. The natural convection between two thick-walled eccentric tubes was calculated and the predicted results matched the experimental data perfectly. A 3-D rocket engine model and a real 3-D SSME geometry were used to test the current model, and reasonable temperature fields were obtained.

  9. Determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation

    DOEpatents

    Blocksome, Michael A [Rochester, MN

    2011-12-20

    Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
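The counting scheme in the claim can be modelled in a few lines. In this toy simulation the class and method names are our own, and "message passing" is an ordinary method call rather than a DMA broadcast; it shows why the counter reaches the exit value (0 here) only after every node in the set has entered the barrier.

```python
class BarrierNode:
    """Toy model of the counting barrier: each node seeds its counter
    with the set size; entering decrements it once for the node itself,
    and every received barrier control packet decrements it again."""

    def __init__(self, n_nodes):
        self.counter = n_nodes      # configured on entering the barrier
        self.peers = []             # the other compute nodes in the set

    def enter(self):
        self.counter -= 1           # count our own arrival
        for peer in self.peers:     # broadcast a barrier control packet
            peer.receive()

    def receive(self):              # stands in for the DMA packet handler
        self.counter -= 1

    def ready_to_exit(self):
        return self.counter == 0    # counter matches the exit value

# wire up a set of three compute nodes and run the barrier
nodes = [BarrierNode(3) for _ in range(3)]
for n in nodes:
    n.peers = [m for m in nodes if m is not n]
for n in nodes:
    n.enter()
```

Each counter ends at set size minus one own arrival minus one packet per peer, i.e. exactly 0 once all nodes have entered, so no node can exit early.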

  10. Clean catalytic combustor program

    NASA Technical Reports Server (NTRS)

    Ekstedt, E. E.; Lyon, T. F.; Sabla, P. E.; Dodds, W. J.

    1983-01-01

    A combustor program was conducted to evolve and to identify the technology needed for, and to establish the credibility of, using combustors with catalytic reactors in modern high-pressure-ratio aircraft turbine engines. Two selected catalytic combustor concepts were designed, fabricated, and evaluated. The combustors were sized for use in the NASA/General Electric Energy Efficient Engine (E3). One of the combustor designs was a basic parallel-staged double-annular combustor. The second design was also a parallel-staged combustor but employed reverse flow cannular catalytic reactors. Subcomponent tests of fuel injection systems and of catalytic reactors for use in the combustion system were also conducted. Very low-level pollutant emissions and excellent combustor performance were achieved. However, it was obvious from these tests that extensive development of fuel/air preparation systems and considerable advancement in the steady-state operating temperature capability of catalytic reactor materials will be required prior to the consideration of catalytic combustion systems for use in high-pressure-ratio aircraft turbine engines.

  11. Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.

    PubMed

    Basanta-Val, Pablo; Sánchez-Fernández, Luis

    2018-06-01

    The proliferation of new data sources, stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has led to a new type of analytics that processes Internet of Things data with low-cost parallel-computing engines to speed up data processing. In this context, the article presents an initiative, called BIG-Boletín Oficial del Estado (BOE), designed to process the Spanish official government gazette (BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, searching for several issues across documents. The application infrastructure's processing engine is described from an architectural perspective and in terms of performance, showing evidence of how this type of infrastructure improves the performance of several types of simple analytics as machines cooperate.

  12. A parallel computing engine for a class of time critical processes.

    PubMed

    Nabhan, T M; Zomaya, A Y

    1997-01-01

    This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the cooperating processors of a parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a non-analytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
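A minimal sketch of the scheduling idea, assuming a deliberately simplified cost model (makespan plus a penalty for task-graph edges cut across processors) and single-task moves; the function name and parameters are illustrative, not the paper's PCE.

```python
import math
import random

def schedule(task_costs, comm, n_procs, iters=3000, seed=7):
    """Map tasks onto processors by simulated annealing.
    comm: list of (task_a, task_b, volume) task-graph edges; edges cut
    across processors incur a communication penalty."""
    rng = random.Random(seed)
    n = len(task_costs)
    place = [rng.randrange(n_procs) for _ in range(n)]   # initial mapping

    def cost(p):
        loads = [0.0] * n_procs
        for t, c in enumerate(task_costs):
            loads[p[t]] += c
        cut = sum(v for a, b, v in comm if p[a] != p[b])
        return max(loads) + cut            # makespan + communication penalty

    cur = cost(place)
    t = 1.0
    for _ in range(iters):
        task = rng.randrange(n)
        old = place[task]
        place[task] = rng.randrange(n_procs)             # move one task
        new = cost(place)
        if new > cur and rng.random() >= math.exp((cur - new) / max(t, 1e-9)):
            place[task] = old              # reject the uphill move
        else:
            cur = new                      # accept
        t *= 0.998                         # cooling schedule
    return place, cur
```

A production scheduler would add the paper's other cost terms (heterogeneous processor speeds, topology-aware communication times, congestion), but the accept/reject loop is the same.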

  13. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  14. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimising the worst-case scenario result. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, when considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by optimising an appropriate combination of the mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
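The tail mean criterion itself is easy to state in code: it averages the worst few scenario outcomes, interpolating between the conservative worst-case criterion and the plain mean. A small sketch, reading outcomes as gains so that "worst" means smallest; names and data are illustrative.

```python
def tail_mean(outcomes, k):
    """Mean of the k worst scenario outcomes (gains, so worst = smallest).
    k = 1 recovers the conservative worst-case criterion;
    k = len(outcomes) recovers the ordinary mean."""
    if not 1 <= k <= len(outcomes):
        raise ValueError("k must be between 1 and the number of scenarios")
    worst = sorted(outcomes)[:k]
    return sum(worst) / k

# choosing between two candidate solutions by their scenario outcomes
a = [10.0, 2.0, 9.0]   # better on average, poor worst case
b = [7.0, 6.0, 7.0]    # flatter profile
robust_pick = max([a, b], key=lambda outs: tail_mean(outs, k=1))
```

With `k = 1` the flatter solution wins despite its lower average, which is the conservative behaviour the abstract describes; larger `k` softens the criterion toward the mean.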

  15. A CONCEPTUAL FRAMEWORK FOR MANAGING RADIATION DOSE TO PATIENTS IN DIAGNOSTIC RADIOLOGY USING REFERENCE DOSE LEVELS.

    PubMed

    Almén, Anja; Båth, Magnus

    2016-06-01

    The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived; the framework for managing radiation dose, based on that process, was then outlined. The optimisation process comprises four stages, with a series of activities and actions at each: providing equipment, establishing methodology, performing examinations and ensuring quality. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality; the system thus becomes a reactive activity that only partly engages the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the first three stages of the optimisation process. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level; a reasonable radiation dose for a single patient lies within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process, which constitutes a variety of complementary activities. This emphasises the need for a holistic approach integrating the optimisation process into different clinical activities.

  16. Milestones in Software Engineering and Knowledge Engineering History: A Comparative Review

    PubMed Central

    del Águila, Isabel M.; Palma, José; Túnez, Samuel

    2014-01-01

    We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because “those who cannot remember the past are condemned to repeat it.” This retrospective represents a further step towards understanding the current state of both engineering disciplines; history also holds positive experiences, some of which we would like to remember and repeat. The two disciplines had parallel and divergent evolutions, but followed a similar pattern. We also define a set of milestones that represent points of convergence or divergence of the software development methodologies. These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help the evolution of the other. PMID:24624046

  18. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters are non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience-specific use cases.
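    The abstract does not spell out BluePyOpt's API, so the sketch below illustrates only the class of stochastic search such tools wrap: a minimal (mu + lambda) evolution strategy. The fitness function, bounds and ES parameters are hypothetical stand-ins for a model-vs-data error measure, not BluePyOpt code.

```python
import random

def evolution_strategy(fitness, bounds, mu=5, lam=20, sigma=0.3,
                       generations=60, seed=1):
    """Minimise `fitness` over the box `bounds` with a simple (mu + lambda) ES."""
    rng = random.Random(seed)
    # Initial parent population: uniform samples within the parameter bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)
            # Gaussian mutation, clipped back into the bounds.
            child = [min(max(x + rng.gauss(0, sigma), lo), hi)
                     for x, (lo, hi) in zip(parent, bounds)]
            offspring.append(child)
        # (mu + lambda) selection: parents compete with their offspring.
        pop = sorted(pop + offspring, key=fitness)[:mu]
    return pop[0]

# Toy fitness landscape standing in for a model-fitting error; optimum at (1, -2).
best = evolution_strategy(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                          bounds=[(-5, 5), (-5, 5)])
print(best)  # close to [1.0, -2.0]
```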

  19. Basis for the development of sustainable optimisation indicators for activated sludge wastewater treatment plants in the Republic of Ireland.

    PubMed

    Gordon, G T; McCann, B P

    2015-01-01

    This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.

  20. Model-based design of an agricultural biogas plant: application of anaerobic digestion model no.1 for an improved four chamber scheme.

    PubMed

    Wett, B; Schoen, M; Phothilangka, P; Wackerle, F; Insam, H

    2007-01-01

    Different digestion technologies for various substrates are addressed by the generic process description of Anaerobic Digestion Model No. 1. In the case of manure or agricultural wastes, a priori knowledge about the substrate in terms of ADM1 compounds is lacking, and influent characterisation becomes a major issue. The present project was initiated to promote biogas technology in agriculture and to extend profitability to rather small-capacity systems. In order to avoid costly individual planning and installation of each facility, a standardised design approach needs to be elaborated. This intention calls for biokinetic modelling as a systematic tool for process design and optimisation. Cofermentation under field conditions was observed, quality and flow data were recorded, and mass flow balances were calculated. In the laboratory, different substrates were digested separately in parallel under specified conditions. A configuration of four ADM1 model reactors was set up. Model calibration identified the disintegration rate, the decay rates for sugar degraders and the half-saturation constant for sugar as the three most sensitive parameters, showing values (except the latter) about one order of magnitude higher than the default parameters. Finally, the model is applied to the comparison of different reactor configurations and volume partitions. Another optimisation objective is robustness and load flexibility, i.e. the same configuration should adapt to different load situations through a simple recycle control alone, in order to establish a standardised design.

  1. A Survey of Aircraft Ground Support Equipment Utilization and Oil Condition at the Mandatory Six Month Inspection

    DTIC Science & Technology

    2016-09-30

    In parallel with the oil change interval study an engineering evaluation of a handheld oil condition analyzer was conducted. Within the limitations...of the study of diesel engine powered AGE assets at two U.S. Air Force locations, assets monitored were not impacted by eliminating the 6-month oil...limitations of the study , conclusions can be made from the cumulative knowledge of analyzing crankcase lubricants of diesel engine powered AGE assets

  2. Propulsion technology for an advanced subsonic transport

    NASA Technical Reports Server (NTRS)

    Beheim, M. A.; Antl, R. J.; Povolny, J. H.

    1972-01-01

    Engine design studies for future subsonic commercial transport aircraft were conducted in parallel with airframe studies. These studies surveyed a broad distribution of design variables, including aircraft configuration, payload, range, and speed, with particular emphasis on reducing noise and exhaust emissions without severe economic and performance penalties. The results indicated that an engine for an advanced transport would be similar to the currently emerging turbofan engines. Application of current technology in the areas of noise suppression and combustors imposed severe performance and economic penalties.

  3. An analysis of cryotrap heat exchanger performance test data (400 area) and recommendations for a system to handle Apollo RCS engines

    NASA Technical Reports Server (NTRS)

    Rakow, A.

    1983-01-01

    The current arrangement of a Platecoil heat exchanger is examined; it uses LN2 on the inside of parallel tubes, in counterflow to the test-cell engine exhaust gases, which are drawn by the existing vacuum blowers through a box surrounding the plates. As a result of inadequate performance and special test data, it was decided to redesign the system to accommodate an Apollo RCS engine.

  4. Development and optimisation of atorvastatin calcium loaded self-nanoemulsifying drug delivery system (SNEDDS) for enhancing oral bioavailability: in vitro and in vivo evaluation.

    PubMed

    Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M

    2017-05-01

    The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving dissolution rate and, eventually, oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using the Box-Behnken optimisation design. Optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. Optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of optimised ATC-SNEDDS in rats was 2.34-fold higher than that of ATC suspension. Pharmacodynamic studies revealed a significant reduction in serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.

  5. A radiation-tolerant electronic readout system for portal imaging

    NASA Astrophysics Data System (ADS)

    Östling, J.; Brahme, A.; Danielsson, M.; Iacobaeus, C.; Peskov, V.

    2004-06-01

    A new electronic portal imaging device, EPID, is under development at the Karolinska Institutet and the Royal Institute of Technology. Due to considerable demands on radiation tolerance in the radiotherapy environment, a dedicated electronic readout system has been designed. The most interesting aspect of the readout system is that it allows ~1000 pixels to be read out in parallel, with all electronics placed outside the radiation beam, making the detector more radiation resistant. In this work we present the function of a small prototype (6×100 pixels) of the electronic readout board that has been tested. Tests were made with continuous X-rays (10-60 keV) and with α particles. The results show that, even without an optimised gas mixture and with an early prototype only, the electronic readout system works very well.

  6. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
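    As a rough, high-level analogue of the chunked thread-based coding method described above (not AMRDEC's actual code), the sketch below splits a hypothetical point-scatterer list across a thread pool, coherently sums the complex returns per chunk, and checks that the decomposition does not change the result. The scatterer values and wavelength are invented for illustration.

```python
import cmath
from concurrent.futures import ThreadPoolExecutor

# Hypothetical point scatterers: (amplitude, range_m) pairs.
SCATTERERS = [(1.0, 10.0 + 0.05 * k) for k in range(10_000)]
WAVELENGTH = 0.0039  # roughly a 77 GHz mmW carrier, for illustration only

def partial_sum(chunk):
    """Coherently sum the complex returns of one chunk of scatterers."""
    k = 2 * cmath.pi / WAVELENGTH
    return sum(a * cmath.exp(-2j * k * r) for a, r in chunk)

def scene_return(scatterers, workers=4):
    """Split the scatterer list across a thread pool and combine partial sums."""
    n = len(scatterers)
    step = (n + workers - 1) // workers
    chunks = [scatterers[i:i + step] for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

serial = partial_sum(SCATTERERS)
parallel = scene_return(SCATTERERS)
print(abs(serial - parallel))  # ~0: the decomposition preserves the result
```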

  7. Class and Home Problems: Modeling of an Industrial Anaerobic Digester: A Case Study for Undergraduate Students

    ERIC Educational Resources Information Center

    Durruty, Ignacio; Ayude, María A.

    2014-01-01

    The case study discussed in this work is used at the chemical reaction engineering course, offered in fifth-year of the chemical engineering undergraduate program at National University of Mar del Plata (UNMdP). A serial-parallel reaction system based on the anaerobic degradation of particulate-containing potato processing wastewater is presented.…
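    A serial-parallel reaction scheme of the kind used in this case study can be sketched with a few lines of explicit Euler integration. The A -> B -> C chain with a parallel branch A -> D and the first-order rate constants below are illustrative assumptions, not the kinetics of the actual wastewater problem.

```python
def simulate(k1=0.4, k2=0.2, k3=0.1, a0=1.0, dt=0.01, t_end=20.0):
    """Explicit Euler for a serial-parallel scheme: A -> B -> C, plus A -> D."""
    a, b, c, d = a0, 0.0, 0.0, 0.0
    for _ in range(round(t_end / dt)):
        r1, r2, r3 = k1 * a, k2 * b, k3 * a   # first-order rate laws
        a += -(r1 + r3) * dt                   # A consumed by both branches
        b += (r1 - r2) * dt                    # B is an intermediate
        c += r2 * dt                           # C: end of the serial chain
        d += r3 * dt                           # D: end of the parallel branch
    return a, b, c, d

a, b, c, d = simulate()
print(round(a + b + c + d, 6))  # 1.0: total mass is conserved by construction
```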

  8. Improving Systems Engineering Effectiveness in Rapid Response Development Environments

    DTIC Science & Technology

    2012-06-02

    environments where large, complex, brownfield systems of systems are evolved through parallel development of new capabilities in response to external, time... Systems engineering is often ineffective in development environments where large, complex, brownfield systems of systems are... IEEE Press, Hoboken, NJ, 2008 [18] Boehm, B.: Applying the Incremental Commitment Model to Brownfield Systems Development, Proceedings, CSER 2009

  9. Advanced detection, isolation, and accommodation of sensor failures in turbofan engines: Real-time microcomputer implementation

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1990-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.
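    The core of analytical redundancy, comparing each sensor reading against an estimate reconstructed from a model of the engine and the remaining sensors, can be sketched as a simple residual threshold test. The values and threshold below are invented for illustration and are not the F100 algorithm.

```python
def detect_failures(measured, predicted, threshold):
    """Flag samples where |measurement - model prediction| exceeds a threshold.

    `predicted` stands in for the analytical estimate reconstructed from the
    engine model and the remaining healthy sensors (analytical redundancy).
    """
    return [abs(m - p) > threshold for m, p in zip(measured, predicted)]

model = [100.0, 101.0, 102.0, 103.0, 104.0]   # analytical estimates
sensor = [100.2, 100.8, 150.0, 103.1, 103.9]  # sample 2 is a hard failure
flags = detect_failures(sensor, model, threshold=5.0)
print(flags)  # [False, False, True, False, False]
```

    Once a sample is flagged, accommodation would substitute the analytical estimate for the failed reading so the control loop keeps running.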

  10. Model Driven Engineering

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.

  11. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    NASA Astrophysics Data System (ADS)

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them, in particular, have been implemented effectively to determine the ultimate pit limits of an open-pit mine, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The approaches proposed for this purpose aim at maximizing the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.
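    The floating-stope idea, sliding a candidate stope envelope through the block model and keeping the most valuable position, reduces in one dimension to a moving-window maximisation. A minimal sketch, with hypothetical block economic values:

```python
def best_stope(block_values, stope_len):
    """Slide a fixed-size stope window along a 1-D block model and keep the
    position with the highest total economic value (negative blocks = waste)."""
    best_start, best_value = 0, float("-inf")
    for start in range(len(block_values) - stope_len + 1):
        value = sum(block_values[start:start + stope_len])
        if value > best_value:
            best_start, best_value = start, value
    return best_start, best_value

# Hypothetical block economic values along one drift.
blocks = [-2, 1, 4, 6, 3, -1, -5, 2]
print(best_stope(blocks, 3))  # (2, 13): blocks 4 + 6 + 3
```

    Real implementations work on 3-D block models and add geotechnical constraints, but the value-envelope search is the same idea.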

  12. Design Optimisation of a Magnetic Field Based Soft Tactile Sensor

    PubMed Central

    Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert

    2017-01-01

    This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and a Hall effect module separated by an elastomer. The aim was to minimise the sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force; a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
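    Selecting designs from a Pareto set, as in the study above, amounts to extracting the non-dominated points. A minimal sketch with invented (sensitivity, measurable-force) pairs, assuming for simplicity that both objectives are to be maximised:

```python
def pareto_front(designs):
    """Return the non-dominated designs when *both* objectives are maximised."""
    front = []
    for d in designs:
        # d is dominated if some other design is at least as good in both
        # objectives (and is a different point).
        if not any(o[0] >= d[0] and o[1] >= d[1] and o != d for o in designs):
            front.append(d)
    return front

# Hypothetical (sensitivity, measurable_force) pairs for candidate sensors.
candidates = [(0.9, 1.0), (0.7, 3.0), (0.5, 5.0), (0.6, 2.0), (0.4, 4.0)]
print(pareto_front(candidates))  # [(0.9, 1.0), (0.7, 3.0), (0.5, 5.0)]
```

    The surviving points trace exactly the compromise curve the abstract describes: gaining measurable force costs sensitivity, and vice versa.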

  13. Opus: A Coordination Language for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Haines, Matthew; Mehrotra, Piyush; Zima, Hans; vanRosendale, John

    1997-01-01

    Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.

  14. Singularity and workspace analysis of three isoconstrained parallel manipulators with schoenflies motion

    NASA Astrophysics Data System (ADS)

    Lee, Po-Chih; Lee, Jyh-Jone

    2012-06-01

    This paper presents the analysis of three parallel manipulators with Schoenflies motion. Each parallel manipulator possesses two limbs, and the end-effector has three degrees of freedom (DOFs) in translation and one DOF in rotation about a given direction axis with respect to the world coordinate system. The three isoconstrained parallel manipulators have the structures denoted as C{u/u}UwHw-//-C{v/v}UwHw, CuR{u/u}Uhw-//-CvR{v/v}Uhw and CuPuUhw-//-CvPvUhw. The kinematic equations are first introduced for each manipulator. Then, the Jacobian matrix, singularity, workspace and performance index for each mechanism are derived and analysed for the first time. The results can help engineers evaluate this kind of parallel robot for possible application in industry where pick-and-place motion is required.
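    Singularity analysis of the kind described, checking where the Jacobian loses rank, can be illustrated on a planar 2R serial arm (a deliberately simpler stand-in for the two-limb Schoenflies manipulators), with the Jacobian obtained by central finite differences rather than the paper's closed-form kinematics:

```python
import math

def fk(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2R arm (illustrative stand-in)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def jacobian(theta1, theta2, h=1e-6):
    """Numerical Jacobian d(x, y)/d(theta1, theta2) by central differences."""
    cols = []
    for i in range(2):
        q_plus = [theta1, theta2]
        q_minus = [theta1, theta2]
        q_plus[i] += h
        q_minus[i] -= h
        fp, fm = fk(*q_plus), fk(*q_minus)
        cols.append([(fp[0] - fm[0]) / (2 * h), (fp[1] - fm[1]) / (2 * h)])
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

def is_singular(J, tol=1e-3):
    """Rank loss for a 2x2 Jacobian: determinant (close to) zero."""
    return abs(J[0][0] * J[1][1] - J[0][1] * J[1][0]) < tol

print(is_singular(jacobian(0.3, 0.0)))   # True: fully stretched arm
print(is_singular(jacobian(0.3, 1.2)))   # False: regular configuration
```

    For this arm the determinant is l1*l2*sin(theta2), so the numerical check agrees with the well-known stretched-arm singularity at theta2 = 0.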

  15. A software architecture for multidisciplinary applications: Integrating task and data parallelism

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Vanrosendale, John; Zima, Hans

    1994-01-01

    Data parallel languages such as Vienna Fortran and HPF can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are of a multidisciplinary and heterogeneous nature and thus do not fit well into the data parallel paradigm. In this paper we present new Fortran 90 language extensions to fill this gap. Tasks can be spawned as asynchronous activities in a homogeneous or heterogeneous computing environment; they interact by sharing access to Shared Data Abstractions (SDA's). SDA's are an extension of Fortran 90 modules, representing a pool of common data, together with a set of Methods for controlled access to these data and a mechanism for providing persistent storage. Our language supports the integration of data and task parallelism as well as nested task parallelism and thus can be used to express multidisciplinary applications in a natural and efficient way.

  16. Design of a Miniature Pulse Tube Cryocooler for Space Applications

    NASA Astrophysics Data System (ADS)

    Trollier, T.; Ravex, A.; Charles, I.; Duband, L.; Mullié, J.; Bruins, P.; Benschop, T.; Linder, M.

    2004-06-01

    An Engineering Model (EM) of a Miniature Pulse Tube Cooler (MPTC) has been designed and manufactured. The expected performance of the MPTC was 1240 mW of heat lift at 80 K with a 288 K ambient temperature and 40 W rms maximum input power to the compressor motors. The EM is a U-shape configuration operated with an inertance tube. The design and optimisation of the compressor and the pulse tube cold finger are described. The thermal performance test results are presented and discussed as well. This work is performed within a Technological Research Project (TRP) funded by ESA (Contract 14896/00/NL/PA).

  17. pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data

    NASA Astrophysics Data System (ADS)

    Shkurti, Ardita; Goni, Ramon; Andrio, Pau; Breitmoser, Elena; Bethune, Iain; Orozco, Modesto; Laughton, Charles A.

    The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of generated molecular simulation data. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI-parallelised to permit the efficient processing of very large datasets. pyPcazip is a Unix-based, open-source (BSD-licensed) software suite written in Python.

  18. Mechanical behaviour of a fibrous scaffold for ligament tissue engineering: finite elements analysis vs. X-ray tomography imaging.

    PubMed

    Laurent, Cédric P; Latil, Pierre; Durville, Damien; Rahouadj, Rachid; Geindreau, Christian; Orgéas, Laurent; Ganghoffer, Jean-François

    2014-12-01

    The use of biodegradable scaffolds seeded with cells in order to regenerate functional tissue-engineered substitutes offers an interesting alternative to common medical approaches for ligament repair. In particular, the finite element (FE) method enables the prediction and optimisation of both the macroscopic behaviour of these scaffolds and the local mechanical signals that control cell activity. In this study, we investigate the ability of a dedicated FE code to predict the geometrical evolution of a new braided and biodegradable polymer scaffold for ligament tissue engineering, by comparing scaffold geometries obtained from FE simulations and from X-ray tomographic imaging during a tensile test. Moreover, we compare two types of FE simulations whose initial geometries are taken either from X-ray imaging or from a computed idealised configuration. We report that dedicated FE simulations from an idealised reference configuration can reasonably be used in the future to predict the global and local mechanical behaviour of the braided scaffold. A valuable and original dialogue between the fields of experimental and numerical characterisation of such fibrous media is thus achieved. In the future, this approach should enable more accurate characterisation of the local and global behaviour of tissue-engineering scaffolds.

  19. Block-Level Added Redundancy Explicit Authentication for Parallelized Encryption and Integrity Checking of Processor-Memory Transactions

    NASA Astrophysics Data System (ADS)

    Elbaz, Reouven; Torres, Lionel; Sassatelli, Gilles; Guillemin, Pierre; Bardouillet, Michel; Martinez, Albert

    The bus between the System on Chip (SoC) and the external memory is one of the weakest points of computer systems: an adversary can easily probe this bus in order to read private data (data confidentiality concern) or to inject data (data integrity concern). The conventional way to protect data against such attacks and to ensure data confidentiality and integrity is to implement two dedicated engines: one performing data encryption and another data authentication. This approach, while secure, prevents parallelizability of the underlying computations. In this paper, we introduce the concept of Block-Level Added Redundancy Explicit Authentication (BL-AREA) and we describe a Parallelized Encryption and Integrity Checking Engine (PE-ICE) based on this concept. BL-AREA and PE-ICE have been designed to provide an effective solution to ensure both security services while allowing for full parallelization on processor read and write operations and optimizing the hardware resources. Compared to standard encryption which ensures only confidentiality, we show that PE-ICE additionally guarantees code and data integrity for less than 4% of run-time performance overhead.
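    The BL-AREA idea, adding known redundancy to each block before encryption so that decryption doubles as an integrity check, can be sketched with a toy cipher. The SHA-256-based keystream below is for illustration only: it is not PE-ICE's actual AES-based construction, and it is not secure.

```python
import hashlib

def keystream(key, nonce, n):
    """Toy counter-mode keystream from SHA-256 (illustration only, NOT secure)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_block(key, block_id, payload, tag=b"AREA"):
    """BL-AREA-style: append a known tag to the payload, then encrypt both.

    Integrity checking after decryption is just comparing the recovered tag,
    so encryption and authentication become a single parallelisable pass."""
    plain = payload + tag
    ks = keystream(key, block_id.to_bytes(8, "big"), len(plain))
    return bytes(a ^ b for a, b in zip(plain, ks))

def decrypt_block(key, block_id, cipher, tag=b"AREA"):
    ks = keystream(key, block_id.to_bytes(8, "big"), len(cipher))
    plain = bytes(a ^ b for a, b in zip(cipher, ks))
    if plain[-len(tag):] != tag:
        raise ValueError("integrity check failed")
    return plain[:-len(tag)]

key = b"demo-key"
ct = encrypt_block(key, 7, b"cache line data!")
print(decrypt_block(key, 7, ct))  # b'cache line data!'

tampered = ct[:-1] + bytes([ct[-1] ^ 0xFF])   # an adversary flips bits
try:
    decrypt_block(key, 7, tampered)
except ValueError as e:
    print(e)  # integrity check failed
```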

  20. Optimizing SIEM Throughput on the Cloud Using Parallelization.

    PubMed

    Alam, Masoom; Ihsan, Asif; Khan, Muazzam A; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, Muhammad Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently, so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of a security framework, OSTROM, built on the Esper complex event processing (CEP) engine, under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of incoming events rather than by the queries being processed. The architecture in which one quarter of the total events is submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory and CPU usage.

  1. INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Larkman, David J.; Nunes, Rita G.

    2007-04-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. We show how to recognize potential failure modes and their associated artefacts. Well-established applications, including angiography, cardiac imaging and applications using echo planar imaging, are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
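    Cartesian SENSE unfolding with reduction factor R = 2 reduces, per aliased pixel, to solving a small linear system built from the coil sensitivities. The two-coil sketch below uses invented sensitivities and pixel values; real reconstructions solve this per pixel across the image, and the conditioning of the system is what the g-factor quantifies.

```python
def sense_unfold(aliased, sensitivities):
    """Unfold one aliased pixel pair with a 2-coil, R = 2 SENSE reconstruction.

    aliased:       [a1, a2] signal seen by each coil
    sensitivities: [[s1A, s1B], [s2A, s2B]] coil sensitivity at the two
                   superimposed pixel locations A and B
    Solves  a_i = s_iA * pA + s_iB * pB  for (pA, pB) by Cramer's rule.
    """
    (s1A, s1B), (s2A, s2B) = sensitivities
    a1, a2 = aliased
    det = s1A * s2B - s1B * s2A           # must be non-zero (finite g-factor)
    pA = (a1 * s2B - s1B * a2) / det
    pB = (s1A * a2 - a1 * s2A) / det
    return pA, pB

# Hypothetical sensitivities and true pixel values pA = 3.0, pB = 5.0.
S = [[1.0, 0.4], [0.3, 0.9]]
true_pA, true_pB = 3.0, 5.0
aliased = [S[0][0] * true_pA + S[0][1] * true_pB,
           S[1][0] * true_pA + S[1][1] * true_pB]
print(tuple(round(v, 6) for v in sense_unfold(aliased, S)))  # (3.0, 5.0)
```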

  2. DMA shared byte counters in a parallel computer

    DOEpatents

    Chen, Dong; Gara, Alan G.; Heidelberger, Philip; Vranas, Pavlos

    2010-04-06

    A parallel computer system is constructed as a network of interconnected compute nodes. Each of the compute nodes includes at least one processor, a memory and a DMA engine. The DMA engine includes a processor interface for interfacing with the at least one processor, DMA logic, a memory interface for interfacing with the memory, a DMA network interface for interfacing with the network, injection and reception byte counters, injection and reception FIFO metadata, and status and control registers. The injection FIFO metadata maintains the memory locations of each injection FIFO, including its current head and tail, and the reception FIFO metadata does the same for each reception FIFO. The injection and reception byte counters may be shared between messages.

  3. Analysis of Parallel Burn Without Crossfeed TSTO RLV Architectures and Comparison to Parallel Burn With Crossfeed and Series Burn Architectures

    NASA Technical Reports Server (NTRS)

    Smith, Garrett; Phillips, Alan

    2002-01-01

    There are currently three dominant TSTO class architectures. These are Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn without crossfeed (PBncf). The goal of this study was to determine what factors uniquely affect PBncf architectures, how each of these factors interact, and to determine from a performance perspective whether a PBncf vehicle could be competitive with a PBw/cf or SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing gross and dry vehicle masses of a closed vehicle. Propellant combinations studied were LOX:LH2 propelled orbiter and booster (HH) and LOX:Kerosene booster with LOX:LH2 orbiter (KH). The study conclusions were: 1) a PBncf orbiter should be throttled as deeply as possible after launch until the staging point. 2) a detailed structural model is essential to accurate architecture analysis and evaluation. 3) a PBncf TSTO architecture is feasible for systems that stage at Mach 7. 3a) HH architectures can achieve a mass growth relative to PBw/cf of < 20%. 3b) KH architectures can achieve a mass growth relative to Series Burn of < 20%. 4) center of gravity (CG) control will be a major issue for a PBncf vehicle, due to the low orbiter specific thrust-to-weight ratio and to the position of the orbiter required to align the nozzle heights at liftoff. 5) thrust-to-weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at Mach 7 appear to be close to ideal for PBncf vehicles. 6) performance for all vehicles studied is better when staged at Mach 7 instead of Mach 5. The study showed that a Series Burn architecture has the lowest gross mass for HH cases, and has the lowest dry mass for KH cases. The potential disadvantages of SB are the required use of an air-start for the orbiter engines and potential CG control issues.
A Parallel Burn with crossfeed architecture solves both these problems, but the mechanics of a large bipropellant crossfeed system pose significant technical difficulties. Parallel Burn without crossfeed vehicles start both booster and orbiter engines on the ground and thus avoid both the risk of orbiter air-start and the complexity of a crossfeed system. The drawback is that the orbiter must use 20% to 35% of its propellant before reaching the staging point. This induces a weight penalty in the orbiter in order to carry additional propellant, which causes a further weight penalty in the booster to achieve the same staging point. One way to reduce the orbiter propellant consumption during the first stage is to throttle down the orbiter engines as much as possible. Another possibility is to use smaller or fewer engines. Throttling the orbiter engines soon after liftoff minimizes CG control problems due to a low orbiter liftoff thrust, but may result in an unnecessarily high orbiter thrust after staging. Reducing the number or size of the engines may cause CG control problems and drift at launch. The study suggested possible methods to maximize performance of PBncf vehicle architectures in order to meet mission design requirements.

  4. Engineering and Biology: Counsel for a Continued Relationship

    PubMed Central

    Levy, Arnon; Siegal, Mark L.; Soyer, Orkun S.; Wagner, Andreas

    2015-01-01

    Biologists frequently draw on ideas and terminology from engineering. Evolutionary systems biology—with its circuits, switches, and signal processing—is no exception. In parallel with the frequent links drawn between biology and engineering, there is ongoing criticism against this cross-fertilization, using the argument that over-simplistic metaphors from engineering are likely to mislead us as engineering is fundamentally different from biology. In this article, we clarify and reconfigure the link between biology and engineering, presenting it in a more favorable light. We do so by, first, arguing that critics operate with a narrow and incorrect notion of how engineering actually works, and of what the reliance on ideas from engineering entails. Second, we diagnose and defuse one significant source of concern about appeals to engineering, namely that they are inherently and problematically metaphorical. We suggest that there is plenty of fertile ground left for a continued, healthy relationship between engineering and biology. PMID:26085824

  5. Engineering of a complex bone tissue model with endothelialised channels and capillary-like networks.

    PubMed

    Klotz, B J; Lim, K S; Chang, Y X; Soliman, B G; Pennings, I; Melchels, F P W; Woodfield, T B F; Rosenberg, A J; Malda, J; Gawlitta, D

    2018-05-30

    In engineering of tissue analogues, upscaling to clinically-relevant sized constructs remains a significant challenge. The successful integration of a vascular network throughout the engineered tissue is anticipated to overcome the lack of nutrient and oxygen supply to residing cells. This work aimed at developing a multiscale bone-tissue-specific vascularisation strategy. Engineering pre-vascularised bone leads to biological and fabrication dilemmas. To fabricate channels endowed with an endothelium and suitable for osteogenesis, rather stiff materials are preferable, while capillarisation requires soft matrices. To overcome this challenge, gelatine-methacryloyl hydrogels were tailored by changing the degree of functionalisation to allow for cell spreading within the hydrogel, while still enabling endothelialisation on the hydrogel surface. An additional challenge was the combination of the multiple required cell-types within one biomaterial, sharing the same culture medium. Consequently, a new medium composition was investigated that simultaneously allowed for endothelialisation, capillarisation and osteogenesis. Integrated multipotent mesenchymal stromal cells, which give rise to pericyte-like and osteogenic cells, and endothelial-colony-forming cells (ECFCs) which form capillaries and endothelium, were used. Based on the aforementioned optimisation, a construct of 8 × 8 × 3 mm, with a central channel of 600 µm in diameter, was engineered. In this construct, ECFCs covered the channel with endothelium and osteogenic cells resided in the hydrogel, adjacent to self-assembled capillary-like networks. This study showed the promise of engineering complex tissue constructs by means of human primary cells, paving the way for scaling-up and finally overcoming the challenge of engineering vascularised tissues.

  6. Multivariable speed synchronisation for a parallel hybrid electric vehicle drivetrain

    NASA Astrophysics Data System (ADS)

    Alt, B.; Antritter, F.; Svaricek, F.; Schultalbers, M.

    2013-03-01

    In this article, a new drivetrain configuration of a parallel hybrid electric vehicle is considered and a novel model-based control design strategy is given. In particular, the control design covers the speed synchronisation task during a restart of the internal combustion engine. The proposed multivariable synchronisation strategy is based on feedforward and decoupled feedback controllers. The performance and the robustness properties of the closed-loop system are illustrated by nonlinear simulation results.

  7. Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor

    DTIC Science & Technology

    2010-05-01

    Abstract—The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals...Broadband Engine Processor (Cell BE). The process of adapting the serial-based MUSIC algorithm to the Cell BE will be analyzed in terms of parallelism and...using the Multiple Signal Classification (MUSIC) algorithm [4] • Computation of Focus matrix • Computation of number of sources • Separation of Signal
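The core of narrowband MUSIC is compact, and the per-angle spectrum scan is the part that parallelises naturally across processing elements such as the Cell BE's SPEs. A minimal serial sketch in Python (the uniform linear array, half-wavelength spacing, and scan grid are illustrative assumptions, not details from the report):

```python
import numpy as np

def music_doa(X, n_sources, scan_deg=np.arange(-90, 90.5, 0.5), d=0.5):
    """Narrowband MUSIC on a uniform linear array.

    X: (n_sensors, n_snapshots) complex snapshot matrix.
    d: element spacing in wavelengths (half-wavelength assumed).
    Returns the estimated DOAs in degrees, sorted ascending.
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    w, V = np.linalg.eigh(R)                   # eigenvalues ascending
    En = V[:, : n_sensors - n_sources]         # noise subspace
    k = np.arange(n_sensors)
    spectrum = []
    for theta in scan_deg:                     # embarrassingly parallel loop
        a = np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta)))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    spectrum = np.asarray(spectrum)
    # keep the n_sources largest local peaks of the pseudo-spectrum
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: spectrum[i], reverse=True)
    return sorted(scan_deg[p] for p in peaks[:n_sources])
```

The angle loop has no cross-iteration dependencies, which is why this stage is the natural candidate for distribution across parallel cores.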

  8. PISCES 2 users manual

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.

    1987-01-01

    PISCES 2 is a programming environment and set of extensions to Fortran 77 for parallel programming. It is intended to provide a basis for writing programs for scientific and engineering applications on parallel computers in a way that is relatively independent of the particular details of the underlying computer architecture. This user's manual provides a complete description of the PISCES 2 system as it is currently implemented on the 20 processor Flexible FLEX/32 at NASA Langley Research Center.

  9. Aircraft Engine Systems

    NASA Technical Reports Server (NTRS)

    Veres, Joseph

    2001-01-01

    This report outlines the detailed simulation of an aircraft turbofan engine. The objectives were to develop a detailed flow model of a full turbofan engine that runs on parallel workstation clusters overnight and to develop an integrated system of codes for combustor design and analysis to enable significant reduction in design time and cost. The model will initially simulate the 3-D flow in the primary flow path including the flow and chemistry in the combustor, and ultimately result in a multidisciplinary model of the engine. The overnight 3-D simulation capability of the primary flow path in a complete engine will enable significant reduction in the design and development time of gas turbine engines. In addition, the NPSS (Numerical Propulsion System Simulation) multidisciplinary integration and analysis are discussed.

  10. STS-26 Discovery, OV-103, SSME (2019) installed in position number one at KSC

    NASA Image and Video Library

    1988-01-10

    S88-29076 (10 Jan 1988) --- KSC employees work together to carefully guide a 7,000 pound main engine into the number one position in Discovery's aft compartment. Because of the engine's weight and size, special handling equipment is needed to perform the installation. Discovery is currently being prepared for the upcoming STS-26 mission in bay 1 of the Orbiter Processing Facility. This engine, 2019, arrived at KSC on Jan. 6 and was installed Jan. 10. The other two engines are scheduled to be installed later this month. The shuttle's three main liquid fueled engines provide the main propulsion for the orbiter vehicle. The cluster of three engines operate in parallel with the solid rocket boosters during the initial ascent.

  11. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471
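The class of stochastic search BluePyOpt wraps can be sketched in a few lines. The following is a toy (mu+lambda)-style evolutionary loop over box-bounded parameters; it illustrates the idea only and is not BluePyOpt's actual API (function names and defaults are invented for the example):

```python
import numpy as np

def evolve(fitness, bounds, pop_size=40, gens=60, sigma=0.05, seed=0):
    """Minimise `fitness` over box-bounded parameters with a toy
    elitist evolutionary strategy: truncation selection, Gaussian
    mutation, and an annealed step size."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
        children = parents + sigma * (hi - lo) * rng.standard_normal(parents.shape)
        pop = np.clip(np.vstack([parents, children]), lo, hi)  # parents survive (elitism)
        sigma *= 0.9                                           # anneal mutation strength
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmin(scores)]
```

In a real model-fitting setting the fitness call would run a neuron simulation and score it against experimental constraints, which is where frameworks like BluePyOpt add the packaging, parallel evaluation, and bookkeeping this sketch omits.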

  12. ICASE Computer Science Program

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Institute for Computer Applications in Science and Engineering computer science program is discussed in outline form. Information is given on such topics as problem decomposition, algorithm development, programming languages, and parallel architectures.

  13. Parallelization of Rocket Engine System Software (Press)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1996-01-01

    The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN-based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages on various aspects and facets of rocket engines using liquid propellants. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using world wide web home pages. Considering the obviously expensive methods of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place them in a common format, assess and evaluate, define interfaces, and provide integration. Most importantly, HU's mission is to see to it that real-time performance is assured. This involves source code translations, porting, and distribution. The porting will be done in two phases: first, place all software on the Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, and TCP/IP. Considering the heterogeneous nature of the present software (e.g., it first started as an expert system using LISP machines), which now involves FORTRAN code, the effort is expected to be quite challenging.

  14. Economic impact of optimising antiretroviral treatment in human immunodeficiency virus-infected adults with suppressed viral load in Spain, by implementing the grade A-1 evidence recommendations of the 2015 GESIDA/National AIDS Plan.

    PubMed

    Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz

    2018-03-01

    The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations from the GeSIDA/PNS (2015) Consensus and their applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for deductions stated in the RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (accounting for >40% of the patient candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based in dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  15. Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area

    NASA Astrophysics Data System (ADS)

    Khare, Vikas; Nema, Savita; Baredar, Prashant

    2017-04-01

    This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation result, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system will be a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator, and reduces fuel costs by approximately 70-80% relative to it.
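A plain (non-chaotic) particle swarm update of the kind compared in such studies is easy to sketch; in the sizing problem the decision variables would be component counts and capacities, with a levelised-cost objective. The coefficients below are common textbook defaults, not values from the study:

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=80, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `cost` over box-bounded variables with basic PSO:
    inertia w, cognitive pull c1 toward personal bests, social pull
    c2 toward the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost                      # update personal bests
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)]             # update global best
    return g, float(np.min(pcost))
```

The chaotic variant replaces the uniform random draws with a chaotic map (e.g. logistic) to improve exploration; the surrounding loop is unchanged.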

  16. Optimisation of nano-silica modified self-compacting high-volume fly ash mortar

    NASA Astrophysics Data System (ADS)

    Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd

    2017-05-01

    The effects of nano-silica content and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were evaluated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 yielded the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
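The desirability approach behind such optimisations maps each response onto a 0-1 score and combines the scores by geometric mean. A generic Derringer-Suich-style sketch (the response ranges and weights below are made up for illustration and are not Design-Expert's interface):

```python
import numpy as np

def desirability_max(y, lo, hi, weight=1.0):
    """Larger-is-better response: 0 below `lo`, 1 above `hi`,
    a power ramp in between."""
    d = np.clip((np.asarray(y, float) - lo) / (hi - lo), 0.0, 1.0)
    return d ** weight

def desirability_min(y, lo, hi, weight=1.0):
    """Smaller-is-better response: 1 below `lo`, 0 above `hi`."""
    d = np.clip((hi - np.asarray(y, float)) / (hi - lo), 0.0, 1.0)
    return d ** weight

def overall(*ds):
    """Composite desirability: geometric mean of individual scores,
    so any single zero vetoes the whole candidate."""
    ds = np.asarray(ds, float)
    return float(ds.prod() ** (1.0 / len(ds)))
```

An optimiser then searches the mixture variables for the settings that maximise the composite score, exactly the single-number summary (0.811 here) that such studies report.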

  17. Artemis: Integrating Scientific Data on the Grid (Preprint)

    DTIC Science & Technology

    2004-07-01

    Theseus execution engine [Barish and Knoblock 03] to efficiently execute the generated datalog program. The Theseus execution engine has a wide...variety of operations to query databases, web sources, and web services. Theseus also contains a wide variety of relational operations, such as...selection, union, or projection. Furthermore, Theseus optimizes the execution of an integration plan by querying several data sources in parallel and

  18. On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Gonzalo, J.; Domínguez, D.; López, D.

    2014-12-01

    From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of flights. The revenue is proportional to the cost per flight and the airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial intelligence resources to reach the optimal flight planning. In parallel, advances in meteorology and communications allow an almost real-time knowledge of the atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage some changes to allow this dynamic planning negotiation to soon become operational. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those trying to solve the problem using the Pontryagin principle, where influence parameters are added to state variables to form a split-condition differential equation problem. The system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). On the other hand, numerical methods are based on Bellman's dynamic programming (or Dijkstra algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions for the best method. Traditionally, analytic methods have been employed more for continuous problems, and numeric methods for discrete ones. 
In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given in a discrete grid at certain time intervals. The research demonstrates advantages and disadvantages of each method as well as performance figures of the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.
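On a discrete wind grid, the dynamic-programming alternative reduces to a Dijkstra search whose edge costs embed the local wind: Bellman's principle guarantees that settled nodes never need revisiting. A minimal sketch (the grid and cost model are illustrative, not the paper's):

```python
import heapq

def dijkstra(neighbours, cost, start, goal):
    """Shortest-cost path search. `neighbours(u)` yields reachable
    nodes; `cost(u, v)` is the edge cost, which in a flight-planning
    setting would be the flight time given the local wind vector."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    settled = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in settled:
            continue
        settled.add(u)           # optimal sub-path to u is now fixed
        if u == goal:
            break
        for v in neighbours(u):
            nd = d + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [goal]                # walk predecessors back to the start
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]
```

With a time-varying wind field the node would be extended to (position, time), which is what turns a static shortest-path search into the dynamic-atmosphere case discussed above.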

  19. Multiobjective optimisation of bogie suspension to boost speed on curves

    NASA Astrophysics Data System (ADS)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and maximum admissible speed on different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s². To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step, semi-active suspension is in focus. The input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto-optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  20. Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy

    NASA Astrophysics Data System (ADS)

    Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.

    2017-08-01

    We report on the development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique to reduce dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, the current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on the 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 < 5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase < 200 ms, and for changes in the breathing period of < 20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.

  1. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, So

    2003-11-20

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted on by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
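The central cost decision TCE automates, choosing the binary contraction order, can be illustrated on a tiny tensor chain with NumPy's `einsum_path`, which performs the same kind of flop-count-based ordering (the shapes here are arbitrary, and this is of course a stand-in for TCE's second-quantized expressions, not the tool itself):

```python
import numpy as np

# For A(60x5) B(5x1000) C(1000x8), contracting (AB)C costs about
# 60*5*1000 + 60*1000*8 multiplications, while A(BC) costs about
# 5*1000*8 + 60*5*8 -- two orders of magnitude cheaper. einsum_path
# picks the cheap binary ordering before evaluating.
rng = np.random.default_rng(0)
A = rng.random((60, 5))
B = rng.random((5, 1000))
C = rng.random((1000, 8))

path, info = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='optimal')
result = np.einsum('ij,jk,kl->il', A, B, C, optimize=path)
```

`info` is a human-readable report of the chosen contraction sequence and its flop estimate; in TCE the analogous step additionally factorizes intermediates shared between terms.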

  2. Mutual information-based LPI optimisation for radar network

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    Radar network can offer significant performance improvement for target detection and information extraction employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve LPI performance for radar network. Based on radar network system model, we first provide Schleher intercept factor for radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented, where for a predefined MI threshold, Schleher intercept factor for radar network is minimised by optimising the transmission power allocation among radars in the network such that the enhanced LPI performance for radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Some simulations demonstrate that the proposed algorithm is valuable and effective to improve the LPI performance for radar network.

  3. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a stochastic optimisation algorithm recently developed, which hybridises the Harmony Search (HS) method with the concept of swarm intelligence in the particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which is different from that of the GHS in the following aspects. (i) A modified harmony memory (HM) representation and conception. (ii) The use of a global random switching mechanism to monitor the choice between the ACO and GHS. (iii) An additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
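For reference, the baseline Harmony Search improvisation loop that GHS and the proposed GHSACO extend looks roughly like this; the parameter values are common defaults from the HS literature, not those of the article:

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Basic Harmony Search minimising f over box bounds.
    hms: harmony memory size; hmcr: memory consideration rate;
    par: pitch adjustment rate."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(lo)
    hm = rng.uniform(lo, hi, (hms, dim))       # harmony memory
    fit = np.array([f(x) for x in hm])
    bw = 0.01 * (hi - lo)                      # pitch-adjust bandwidth
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:            # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:         # pitch adjustment
                    new[j] += bw[j] * rng.uniform(-1, 1)
            else:                              # random selection
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        fx = f(new)
        worst = np.argmax(fit)
        if fx < fit[worst]:                    # replace worst harmony
            hm[worst], fit[worst] = new, fx
    return hm[np.argmin(fit)], float(fit.min())
```

The GHS modification replaces pitch adjustment with a pull toward the best harmony in memory; GHSACO, as described above, further gates the memory consideration step through an ACO-style pheromone-weighted transition rule.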

  4. Shape optimisation of an underwater Bernoulli gripper

    NASA Astrophysics Data System (ADS)

    Flint, Tim; Sellier, Mathieu

    2015-11-01

    In this work, we are interested in maximising the suction produced by an underwater Bernoulli gripper. Bernoulli grippers work by exploiting low pressure regions caused by the acceleration of a working fluid through a narrow channel, between the gripper and a surface, to provide a suction force. This mechanism allows for non-contact adhesion to various surfaces and may be used to hold a robot to the hull of a ship while it inspects welds, for example. A Bernoulli-type pressure analysis was used to model the system with a Darcy friction factor approximation to include the effects of frictional losses. The analysis involved a constrained optimisation in order to avoid cavitation within the mechanism, which would result in decreased performance and damage to surfaces. A sensitivity-based method and a gradient descent approach were used to find the optimum shape of a discretised surface. The model's accuracy has been quantified against finite volume computational fluid dynamics simulation (ANSYS CFX) using the k-ω SST turbulence model. Preliminary results indicate significant improvement in suction force when compared to a simple geometry by retaining a pressure just above that at which cavitation would occur over as much surface area as possible. Doctoral candidate in the Mechanical Engineering Department of the University of Canterbury, New Zealand.
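The frictionless core of such an analysis is compact: continuity fixes the radial gap velocity, and Bernoulli, referenced to ambient pressure at the rim, gives the pressure deficit under the disc. The sketch below omits the paper's friction-factor term and shape optimisation, and the geometry and flow rate are illustrative values, not the authors':

```python
import numpy as np

def suction_force(Q, h, r_in, r_out, rho=1000.0, p_amb=101325.0):
    """Frictionless Bernoulli estimate of the grip force for radial
    outflow in the gap between a disc gripper and a flat surface.

    Continuity: v(r) = Q / (2*pi*r*h) for gap height h.
    Bernoulli vs the rim (at ambient pressure):
        p(r) = p_amb + 0.5*rho*(v_out**2 - v(r)**2)  (below ambient for r < r_out)
    Returns (net attractive force in N, minimum gap pressure in Pa);
    the latter is what a cavitation constraint would monitor.
    """
    r = np.linspace(r_in, r_out, 2000)
    v = Q / (2 * np.pi * r * h)
    v_out = Q / (2 * np.pi * r_out * h)
    p = p_amb + 0.5 * rho * (v_out**2 - v**2)
    dr = r[1] - r[0]
    # net force = integral of the pressure deficit over the annulus
    force = float(np.sum((p_amb - p) * 2 * np.pi * r) * dr)
    return force, float(p.min())
```

The cavitation constraint in the study corresponds to keeping the returned minimum pressure above the water's vapour pressure, which is why the optimum shape holds the gap pressure just above that limit over as much area as possible.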

  5. Minimum Colour Differences Required To Recognise Small Objects On A Colour CRT

    NASA Astrophysics Data System (ADS)

    Phillips, Peter L.

    1985-05-01

    Data is required to assist in the assessment, evaluation and optimisation of colour and other displays for both military and general use. A general aim is to develop a mathematical technique to aid optimisation and reduce the amount of expensive hardware development and trials necessary when introducing new displays. The present standards and methods available for evaluating colour differences are known not to apply to the perception of typical objects on a display. Data is required for irregular objects viewed at small angular subtense (<1°) and relating to the recognition of form rather than colour matching. Therefore laboratory experiments have been carried out using a computer-controlled CRT to measure the threshold colour difference that an observer requires between object and background so that he can discriminate a variety of similar objects. Measurements are included for a variety of background and object colourings. The results are presented in the CIE colorimetric system similar to current standards used by the display engineer. Apart from the characteristic small-field tritanopia, the results show that larger colour differences are required for object recognition than those assumed from conventional colour discrimination data. A simple relationship to account for object size and background colour is suggested to aid visual performance assessments and modelling.

  6. Development of Porous Piezoceramics for Medical and Sensor Applications.

    PubMed

    Ringgaard, Erling; Lautzenhiser, Frans; Bierregaard, Louise M; Zawada, Tomasz; Molz, Eric

    2015-12-21

    The use of porosity to modify the functional properties of piezoelectric ceramics is well known in the scientific literature as well as by the industry, and porous ceramic can be seen as a 2-phase composite. In the present work, examples are given of applications where controlled porosity is exploited in order to optimise the dielectric, piezoelectric and acoustic properties of the piezoceramics. For the optimisation efforts it is important to note that the thickness coupling coefficient k_t will be maximised for some non-zero value of the porosity that could be above 20%. On the other hand, to a good approximation, the acoustic velocity decreases linearly with increasing porosity, which is obviously also the case for the density. Consequently, the acoustic impedance shows a rather strong decrease with porosity, and in practice a reduction of more than 50% may be obtained for an engineered porous ceramic. The significance of the acoustic impedance is associated with the transmission of acoustic signals through the interface between the piezoceramic and some medium of propagation, but when the porous ceramic is used as a substrate for a piezoceramic thick film, the attenuation may be equally important. In the case of open porosity it is possible to introduce a liquid into the pores, and examples of modifying the properties in this way are given.
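    The scaling argument in the abstract (density exactly linear in porosity, velocity approximately linear, impedance Z = ρc therefore dropping faster than either) can be made concrete; the coefficients below are illustrative assumptions for a PZT-like ceramic, not measured values:

    ```python
    # Illustrative, assumed coefficients for a PZT-like ceramic.
    rho0 = 7800.0    # dense-ceramic density, kg/m^3 (assumed)
    c0 = 4600.0      # dense-ceramic longitudinal velocity, m/s (assumed)
    k = 1.4          # assumed slope of the linear velocity-porosity fit

    def impedance(phi):
        rho = rho0 * (1.0 - phi)    # density: exact linear rule of mixtures
        c = c0 * (1.0 - k * phi)    # velocity: assumed linear fit
        return rho * c               # acoustic impedance, kg m^-2 s^-1

    Z0 = impedance(0.0)
    for phi in (0.1, 0.2, 0.3):
        print(f"porosity {phi:.0%}: Z/Z0 = {impedance(phi) / Z0:.2f}")
    ```

    At 30% porosity this toy model already gives roughly a 60% impedance reduction, consistent with the "more than 50%" figure quoted for engineered porous ceramics.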

  7. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  8. Advances in Parallel Computing and Databases for Digital Pathology in Cancer Research

    DTIC Science & Technology

    2016-11-13

    these technologies and how we have used them in the past. We are interested in learning more about the needs of clinical pathologists as we continue to...such as image processing and correlation. Further, High Performance Computing (HPC) paradigms such as the Message Passing Interface (MPI) have been...Defense for Research and Engineering. such as pMatlab [4], or bcMPI [5] can significantly reduce the need for deep knowledge of parallel computing. In

  9. Parallel Visualization Co-Processing of Overnight CFD Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Edwards, David E.; Haimes, Robert

    1999-01-01

    An interactive visualization system pV3 is being developed for the investigation of advanced computational methodologies employing visualization and parallel processing for the extraction of information contained in large-scale transient engineering simulations. Visual techniques for extracting information from the data in terms of cutting planes, iso-surfaces, particle tracing and vector fields are included in this system. This paper discusses improvements to the pV3 system developed under NASA's Affordable High Performance Computing project.

  10. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  11. Multicore: Fallout from a Computing Evolution

    ScienceCinema

    Yelick, Kathy [Director, NERSC]

    2017-12-09

    July 22, 2008 Berkeley Lab lecture: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.

  12. Exploratory Experiments in the Tribological Behavior of Engineering Surfaces with Nano-Coating Using a Tribo-Rheometer

    DTIC Science & Technology

    2008-05-30

    Tribological behavior and graphitization of carbon nanotubes grown on 440C stainless steel . Tribo. Lett., 19(2):119-125, 2005. D-2 ...with a stainless steel parallel plate configuration as shown in figure 1. Due to the radial variation of the local shear stress T in the parallel...using a force transducer that is mounted below the surface. B-1 Exploded View Stainless Steel Plate Lower Fixture Microscale View Figure 1:

  13. Parallel block schemes for large scale least squares computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
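    The parallelism exploited by such schemes is easiest to see in the purely block-diagonal case (the block angular form adds a set of coupling columns handled after the independent factorisations): each diagonal block can be QR-factorised and solved on its own processor. A small numpy sketch with invented blocks:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    blocks = [rng.standard_normal((20, 4)) for _ in range(3)]   # diagonal blocks
    rhs = [rng.standard_normal(20) for _ in range(3)]

    def solve_block(A, b):
        """Per-block least squares via QR; these solves are independent."""
        Q, R = np.linalg.qr(A)
        return np.linalg.solve(R, Q.T @ b)

    x_blocks = [solve_block(A, b) for A, b in zip(blocks, rhs)]  # parallelisable

    # Cross-check against a monolithic solve of the assembled system.
    m = sum(A.shape[0] for A in blocks)
    n = sum(A.shape[1] for A in blocks)
    A_full = np.zeros((m, n))
    i = j = 0
    for A in blocks:
        A_full[i:i + A.shape[0], j:j + A.shape[1]] = A
        i += A.shape[0]
        j += A.shape[1]
    x_full, *_ = np.linalg.lstsq(A_full, np.concatenate(rhs), rcond=None)
    print(np.allclose(np.concatenate(x_blocks), x_full))  # True
    ```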

  14. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols had image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
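    As an illustration of the VGC idea (not the paper's data), ordinal scores from two protocols can be turned into a VGC curve, the cumulative proportions of scores at or above each threshold plotted against each other, and summarised by the area under it; AUC_VGC = 0.5 indicates equivalent image quality:

    ```python
    import numpy as np

    def vgc_auc(scores_a, scores_b, n_levels=5):
        """Area under the VGC curve for two sets of ordinal quality scores."""
        a, b = np.asarray(scores_a), np.asarray(scores_b)
        x, y = [0.0], [0.0]
        for t in range(n_levels, 0, -1):        # strictest threshold first
            x.append(float(np.mean(a >= t)))    # cumulative proportion, protocol A
            y.append(float(np.mean(b >= t)))    # cumulative proportion, protocol B
        x, y = np.array(x), np.array(y)
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)  # trapezoid rule

    current   = [3, 4, 4, 2, 5, 3, 4, 3]   # invented ratings, current protocol
    optimised = [3, 4, 4, 2, 5, 3, 4, 3]   # identical ratings here
    print(vgc_auc(current, optimised))     # 0.5: no quality difference
    ```

    In an actual study the scores would be paired per image and the AUC accompanied by confidence intervals; this sketch only shows the curve construction.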

  15. Stochastic Stirling Engine Operating in Contact with Active Baths

    NASA Astrophysics Data System (ADS)

    Zakine, Ruben; Solon, Alexandre; Gingrich, Todd; van Wijland, Frédéric

    2017-04-01

    A Stirling engine made of a colloidal particle in contact with a nonequilibrium bath is considered and analyzed with the tools of stochastic energetics. We model the bath by non-Gaussian persistent noise acting on the colloidal particle. Depending on the chosen definition of an isothermal transformation in this nonequilibrium setting, we find that either the energetics of the engine parallels that of its equilibrium counterpart or, in the simplest case, that it ends up being less efficient. Persistence, more than non-Gaussian effects, is responsible for this result.

  16. Users manual for program ADMIT: Admittance and pressure transfer function developed for use on a PC computer

    NASA Technical Reports Server (NTRS)

    Armstrong, Wilbur C.

    1992-01-01

    The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the ADMIT code. The capability to modify the input on line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. This program will handle the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines.

  17. Users manual for program SSFREQ intermediate mode stability curves: Developed for use on a PC computer

    NASA Technical Reports Server (NTRS)

    Armstrong, Wilbur C.

    1992-01-01

    The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the SSFREQ code. The capability to modify the input on line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into equal segments going to different (or the same) engines. This program will handle the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines.

  18. An investigation of the effect of instruction in physics on the formation of mental models for problem-solving in the context of simple electric circuits

    NASA Astrophysics Data System (ADS)

    Beh, Kian Lim

    2000-10-01

    This study was designed to explore the effect of a typical traditional method of instruction in physics on the formation of useful mental models among college students for problem-solving using simple electric circuits as a context. The study was also aimed at providing a comprehensive description of the understanding regarding electric circuits among novices and experts. In order to achieve these objectives, the following two research approaches were employed: (1) A student survey to collect data from 268 physics students; and (2) An interview protocol to collect data from 23 physics students and 24 experts (including 10 electrical engineering graduates, 4 practicing electrical engineers, 2 secondary school physics teachers, 8 physics lecturers, and 4 electrical engineers). Among the major findings are: (1) Most students do not possess accurate models of simple electric circuits as presented implicitly in physics textbooks; (2) Most students display good procedural understanding for solving simple problems concerning electric circuits but have no in-depth conceptual understanding in terms of practical knowledge of current, voltage, resistance, and circuit connections; (3) Most students encounter difficulty in discerning parallel connections that are drawn in a non-conventional format; (4) After a year of college physics, students show significant improvement in areas including practical knowledge of current and voltage, ability to compute effective resistance and capacitance, ability to identify circuit connections, and ability to solve problems; however, no significant improvement was found in practical knowledge of resistance and ability to connect circuits; and (5) The differences and similarities between the physics students and the experts include: (a) Novices perceive parallel circuits more in terms of 'branch', 'current', and 'resistors with the same resistance' while experts perceive parallel circuits more in terms of 'node', 'voltage', and 'less resistance'; 
and (b) Both novices and experts use phrases such as 'side-by side' and 'one on top of the other' in describing parallel circuits which emphasize the geometry of the standard circuit drawing when describing parallel resistors.
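    The effective-resistance computations probed in the study reduce to two rules, series resistances add and parallel resistances add as reciprocals; a minimal illustration:

    ```python
    # Series resistances add; parallel resistances add as reciprocals.
    def series(*rs):
        return sum(rs)

    def parallel(*rs):
        return 1.0 / sum(1.0 / r for r in rs)

    # Two 6-ohm resistors in parallel, in series with a 2-ohm resistor:
    r_eff = series(parallel(6.0, 6.0), 2.0)
    print(r_eff)  # 5.0 ohms
    ```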

  19. Performance optimization of an online retailer by a unique online resilience engineering algorithm

    NASA Astrophysics Data System (ADS)

    Azadeh, A.; Salehi, V.; Salehi, R.; Hassani, S. M.

    2018-03-01

    Online shopping has become more attractive and competitive in electronic markets. Resilience engineering (RE) can help such systems return to the normal state after encountering unexpected events. This study presents a unique online resilience engineering (ORE) approach for online shopping systems and customer service performance. Moreover, this study presents a new ORE algorithm for the performance optimisation of an actual online shopping system. The data are collected by standard questionnaires from both expert employees and customers. The problem is then formulated mathematically using data envelopment analysis (DEA). The results show that the design process which is based on ORE is more efficient than the conventional design approach. Moreover, on-time delivery is the most important factor from the personnel's perspective. In addition, according to customers' view, trust, security and good quality assurance are the most effective factors during transactions. First, this study introduces ORE for electronic markets. Second, it investigates the impact of RE on online shopping through DEA and statistical methods. Third, a practical approach is employed in this study and it may be used for similar online shops. Fourth, the results are verified and validated through complete sensitivity analysis.
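    The DEA formulation mentioned can be sketched as a linear program; below is the standard input-oriented CCR envelopment model (the paper's actual indicator set is richer, and the two-DMU data here are invented for illustration):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Invented data: two decision-making units, one input, one output.
    X = np.array([[2.0], [1.0]])   # one row of inputs per DMU
    Y = np.array([[1.0], [1.0]])   # one row of outputs per DMU

    def ccr_efficiency(o):
        """Input-oriented CCR efficiency of DMU o: minimise theta subject to
        X^T lambda <= theta * x_o and Y^T lambda >= y_o, lambda >= 0."""
        n = len(X)
        c = np.zeros(n + 1)
        c[0] = 1.0                                     # variables: [theta, lambda]
        A_ub = np.vstack([
            np.hstack([-X[[o]].T, X.T]),               # sum lam_j x_j - theta x_o <= 0
            np.hstack([np.zeros((Y.shape[1], 1)), -Y.T]),  # -sum lam_j y_j <= -y_o
        ])
        b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[o]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        return float(res.fun)

    print([round(ccr_efficiency(o), 3) for o in range(len(X))])  # [0.5, 1.0]
    ```

    The second DMU produces the same output with half the input, so it is efficient (score 1.0) and the first scores 0.5 against it.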

  20. 46 CFR 111.05-13 - Grounding connection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Equipment Ground, Ground Detection, and Grounded Systems § 111.05-13 Grounding... power sources operating in parallel in the system. ...

  1. Rankine cycle waste heat recovery system

    DOEpatents

    Ernst, Timothy C.; Nelson, Christopher R.

    2015-09-22

    A waste heat recovery (WHR) system connects a working fluid to fluid passages formed in an engine block and/or a cylinder head of an internal combustion engine, forming an engine heat exchanger. The fluid passages are formed near high temperature areas of the engine, subjecting the working fluid to sufficient heat energy to vaporize the working fluid while the working fluid advantageously cools the engine block and/or cylinder head, improving fuel efficiency. The location of the engine heat exchanger downstream from an EGR boiler and upstream from an exhaust heat exchanger provides an optimal position of the engine heat exchanger with respect to the thermodynamic cycle of the WHR system, giving priority to cooling of EGR gas. The configuration of valves in the WHR system provides the ability to select a plurality of parallel flow paths for optimal operation.

  2. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    NASA Astrophysics Data System (ADS)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.

  3. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is evoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers are considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
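    The surrogate idea can be sketched in a few lines: sample the expensive simulation at a handful of design points, fit a cheap input-output model, and check it on held-out points before trusting it in optimisation. Here a 1-D polynomial fit stands in for the Navier-Stokes-based surrogates of the paper; everything below is illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    expensive = lambda x: np.sin(3.0 * x) + 0.5 * x     # stand-in "simulation"

    x_train = np.linspace(0.0, 2.0, 12)                 # a few design points
    surrogate = np.poly1d(np.polyfit(x_train, expensive(x_train), deg=6))

    x_val = rng.uniform(0.0, 2.0, 50)                   # held-out validation points
    err = float(np.max(np.abs(surrogate(x_val) - expensive(x_val))))
    print(f"max validation error: {err:.2e}")
    ```

    Once validated to an acceptable tolerance, the cheap `surrogate` replaces the expensive function inside the optimisation loop, which is the role the flowrate and Nusselt-number surrogates play in the eddy-promoter design study.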

  4. Diagnostics of wear in aeronautical systems

    NASA Technical Reports Server (NTRS)

    Wedeven, L. D.

    1979-01-01

    The use of appropriate diagnostic tools for aircraft oil-wetted components is reviewed, noting that it can reduce direct operating costs through reduced unscheduled maintenance, particularly in helicopter engine and transmission systems where bearing failures are a significant cost factor. Engine and transmission wear modes are described, and diagnostic methods for oil and wear particle analysis, the spectrometric oil analysis program, chip detectors, ferrography, in-line oil monitors and radioactive isotope tagging are discussed, noting that they are effective over a limited range of particle sizes but complement each other if used in parallel. Fine filtration can potentially increase time between overhauls, but reduces the effectiveness of conventional oil monitoring techniques so that alternative diagnostic techniques must be used. It is concluded that the development of a diagnostic system should be parallel and integral with the development of a mechanical system.

  5. Engineering rules for evaluating the efficiency of multiplexing traffic streams

    NASA Astrophysics Data System (ADS)

    Klincewicz, John G.

    2004-09-01

    It is common, either for a telecommunications service provider or for a corporate enterprise, to have multiple data networks. For example, both an IP network and an ATM or Frame Relay network could be in operation to serve different applications. This can result in parallel transport links between the same two locations, each carrying data traffic under a different protocol. In this paper, we consider some practical engineering rules, for particular situations, to evaluate whether or not it is advantageous to combine these parallel traffic streams onto a single transport link. Combining the streams requires additional overhead (a so-called "cell tax") but, in at least some situations, can result in more efficient use of modular transport capacity. Simple graphs can be used to summarize the analysis. Some interesting, and perhaps unexpected, observations can be made.
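    The trade-off described, protocol overhead versus modular capacity, can be captured in a few lines; the module size and 10% cell tax below are invented round numbers, not the paper's figures:

    ```python
    import math

    MODULE = 155.0        # modular link capacity, Mb/s (invented round number)
    CELL_TAX = 0.10       # fractional overhead of encapsulating one protocol

    def modules_separate(r1, r2):
        """Modules needed when each stream keeps its own link."""
        return math.ceil(r1 / MODULE) + math.ceil(r2 / MODULE)

    def modules_combined(r1, r2):
        """Modules needed when both streams share one link, paying the tax."""
        return math.ceil((r1 + r2) * (1.0 + CELL_TAX) / MODULE)

    # Two half-filled links: combining wins despite the 10% tax.
    print(modules_separate(70, 70), modules_combined(70, 70))      # 2 1
    # Two nearly full links: the tax forces an extra module.
    print(modules_separate(150, 150), modules_combined(150, 150))  # 2 3
    ```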

  6. V/STOL Tandem Fan transition section model test. [in the Lewis Research Center 10-by-10 foot wind tunnel

    NASA Technical Reports Server (NTRS)

    Simpkin, W. E.

    1982-01-01

    An approximately 0.25-scale model of the transition section of a tandem fan variable cycle engine nacelle was tested in the NASA Lewis Research Center 10-by-10 foot wind tunnel. Two 12-inch, tip-turbine driven fans were used to simulate a tandem fan engine. Three testing modes simulated a V/STOL tandem fan airplane. Parallel mode has two separate propulsion streams for maximum low speed performance. A front inlet, fan, and downward vectorable nozzle form one stream. An auxiliary top inlet provides air to the aft fan - supplying the core engine and aft vectorable nozzle. Front nozzle and top inlet closure, and removal of a blocker door separating the two streams, configure the tandem fan for series mode operation as a typical aircraft propulsion system. Transition mode operation is formed by intermediate settings of the front nozzle, blocker door, and top inlet. Emphasis was on the total pressure recovery and flow distortion at the aft fan face. A range of fan flow rates was tested at tunnel airspeeds from 0 to 240 knots, and angles of attack from -10 to 40 deg for all three modes. In addition to the model variables for the three modes, variants of the top inlet were tested in the parallel mode only. These lip variables were aft lip boundary layer bleed holes and a three-position turning vane. Also, a bellmouth extension of the top inlet side lips was tested in parallel mode.

  7. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
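    The reported scaling figure is easy to unpack: a 158× speedup on 384 cores corresponds to roughly 41% parallel efficiency, and under Amdahl's law it is consistent with a serial fraction of well under one percent. A quick check (an interpretation of the reported numbers, not additional data from the paper):

    ```python
    cores, speedup = 384, 158.0

    efficiency = speedup / cores
    print(f"parallel efficiency: {efficiency:.1%}")          # ~41.1%

    # Amdahl's law: speedup = 1 / (s + (1 - s) / N); solve for serial fraction s.
    s = (cores / speedup - 1.0) / (cores - 1.0)
    print(f"equivalent Amdahl serial fraction: {s:.3%}")     # ~0.37%
    ```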

  8. Electromagnetic simulators for Ground Penetrating Radar applications developed in COST Action TU1208

    NASA Astrophysics Data System (ADS)

    Pajewski, Lara; Giannopoulos, Antonios; Warren, Craig; Antonijevic, Sinisa; Doric, Vicko; Poljak, Dragan

    2017-04-01

    Founded in 1971, COST (European COoperation in Science and Technology) is the first and widest European framework for the transnational coordination of research activities. It operates through Actions, science and technology networks with a duration of four years. The main objective of the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (4 April 2013 - 3 October 2017) is to exchange and increase knowledge and experience on Ground-Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe a wider use of this technique. Research activities carried out in TU1208 include all aspects of the GPR technology and methodology: design, realization and testing of radar systems and antennas; development and testing of surveying procedures for the monitoring and inspection of structures; integration of GPR with other non-destructive testing approaches; advancement of electromagnetic-modelling, inversion and data-processing techniques for radargram analysis and interpretation. GPR radargrams often have no resemblance to the subsurface or structures over which the profiles were recorded. Various factors, including the innate design of the survey equipment and the complexity of electromagnetic propagation in composite scenarios, can disguise complex structures recorded on reflection profiles. Electromagnetic simulators can help to understand how target structures get translated into radargrams. They can show the limitations of GPR technique, highlight its capabilities, and support the user in understanding where and in what environment GPR can be effectively used. Furthermore, electromagnetic modelling can aid the choice of the most proper GPR equipment for a survey, facilitate the interpretation of complex datasets and be used for the design of new antennas. 
Electromagnetic simulators can be employed to produce synthetic radargrams for the purposes of testing new data-processing, imaging and inversion algorithms, or assessing the effectiveness of existing ones. A fast and accurate forward solver can also be used as part of an inverse solver. This contribution aims at presenting two electromagnetic simulators based on the Finite-Difference Time Domain (FDTD) technique and Boundary Element Method (BEM), for Ground Penetrating Radar applications. These tools have been developed by Members of the COST Action TU1208. The first simulator is the new open-source version of the software gprMax (www.GPRadar.eu), which employs Yee's algorithm to solve Maxwell's equations by using the FDTD method and includes advanced features allowing the accurate analysis of realistic scenarios. For example, a library of antennas is available and these can be directly included in the models. Moreover, it is possible to build heterogeneous media using fractals, as well as objects with rough surfaces. Anisotropic media can be defined and this allows materials such as wood and fibre-reinforced concrete to be accurately modelled. Media with arbitrary frequency-dispersive properties can also be defined and this paves the way to the use of gprMax in new areas, such as the modelling of human tissues. Optimisation of parameters based on Taguchi's method can be performed: this feature can be useful to optimise material properties based on experimental data, or to design new antennas. Additionally, a freeware CAD package was developed to ease the use of gprMax: it assists in the creation, modification and analysis of two-dimensional gprMax models and can also be used to plot results. The second simulator is TWiNS-II: this is free software for the analysis of multiple thin wires in the presence of two media, implementing the Galerkin-Bubnov Indirect BEM; calculations can be undertaken in the frequency or time domain. 
The time-domain code is focused on the assessment of current distributions along thin wire structures. The configuration that can be analyzed is a set of parallel thin wires placed in free space above a perfect ground, or above a dielectric lossless half-space. The wire array resides in a plane parallel to the interface. Within this basic geometry, the user is allowed to arbitrarily change the number, size and position of wires, their excitation characteristics and the dielectric constant of the half-space. The frequency-domain code can be used for the frequency analysis of the same wire configuration as in the time domain counterpart. In addition, the effects of losses in the ground can be taken into account. Acknowledgement: The Authors are deeply grateful to COST (European Cooperation in Science and Technology, www.cost.eu), for funding and supporting the COST Action TU1208 "Civil engineering applications of Ground Penetrating Radar" (www.GPRadar.eu).
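    As a flavour of the FDTD machinery that gprMax generalises to two and three dimensions, here is a minimal 1-D Yee update in normalised units (grid size, timing and source parameters are illustrative, not gprMax defaults):

    ```python
    import numpy as np

    nx, nt = 400, 300
    courant = 0.5                  # c*dt/dx, inside the 1-D stability limit of 1

    Ez = np.zeros(nx)              # E nodes
    Hy = np.zeros(nx - 1)          # H nodes, staggered half a cell (Yee grid)

    for n in range(nt):
        Hy += courant * np.diff(Ez)                   # H update, half step
        Ez[1:-1] += courant * np.diff(Hy)             # E update, half step later
        Ez[50] += np.exp(-((n - 40) / 12.0) ** 2)     # soft Gaussian source

    print(f"peak |Ez| after {nt} steps: {np.abs(Ez).max():.3f}")
    ```

    The leapfrogged, staggered E and H updates are the essence of Yee's algorithm; a production solver such as gprMax adds materials, dispersive media, absorbing boundaries and antenna models on top of this core.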

  9. A Comparative Propulsion System Analysis for the High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Haller, William J.; Senick, Paul F.; Jones, Scott M.; Seidel, Jonathan A.

    2005-01-01

    Six of the candidate propulsion systems for the High-Speed Civil Transport are the turbojet, turbine bypass engine, mixed flow turbofan, variable cycle engine, Flade engine, and the inverting flow valve engine. A comparison of these propulsion systems by NASA's Glenn Research Center, paralleling studies within the aircraft industry, is presented. This report describes the Glenn Aeropropulsion Analysis Office's contribution to the High-Speed Research Program's 1993 and 1994 propulsion system selections. A parametric investigation of each propulsion cycle's primary design variables is analytically performed. Performance, weight, and geometric data are calculated for each engine. The resulting engines are then evaluated on two airframer-derived supersonic commercial aircraft for a 5000 nautical mile, Mach 2.4 cruise design mission. The effects of takeoff noise, cruise emissions, and cycle design rules are examined.

  10. Exploring Fuel-Saving Potential of Long-Haul Truck Hybridization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhiming; LaClair, Tim J.; Smith, David E.

    We report comparisons of the simulated fuel economy for parallel, series, and dual-mode hybrid electric long-haul trucks, in addition to a conventional powertrain configuration, powered by a commercial 2010-compliant 15-L diesel engine over a freeway-dominated heavy-duty truck driving cycle. The driving cycle was obtained by measurement during normal driving conditions. The results indicated that both parallel and dual-mode hybrid powertrains were capable of improving fuel economy by 7% to 8%. But there was no significant fuel economy benefit for the series hybrid truck because of internal inefficiencies in energy exchange. When reduced aerodynamic drag and tire rolling resistance were combined with hybridization, there was a synergistic fuel economy benefit for appropriate hybrids that increased the fuel economy benefit to more than 15%. Long-haul hybrid trucks with reduced aerodynamic drag and rolling resistance offered lower peak engine loads, better kinetic energy recovery, and reduced average engine power demand. Therefore, it is expected that hybridization with load reduction technologies offers important potential fuel energy savings for future long-haul trucks.

  11. Exploring Fuel-Saving Potential of Long-Haul Truck Hybridization

    DOE PAGES

    Gao, Zhiming; LaClair, Tim J.; Smith, David E.; ...

    2015-10-01

    We report our comparisons on the simulated fuel economy for parallel, series, and dual-mode hybrid electric long-haul trucks, in addition to a conventional powertrain configuration, powered by a commercial 2010-compliant 15-L diesel engine over a freeway-dominated heavy-duty truck driving cycle. The driving cycle was obtained by measurement during normal driving conditions. The results indicated that both parallel and dual-mode hybrid powertrains were capable of improving fuel economy by 7% to 8%. However, there was no significant fuel economy benefit for the series hybrid truck because of internal inefficiencies in energy exchange. When reduced aerodynamic drag and tire rolling resistance were combined with hybridization, there was a synergistic benefit for appropriate hybrids that increased the fuel economy improvement to more than 15%. Long-haul hybrid trucks with reduced aerodynamic drag and rolling resistance offered lower peak engine loads, better kinetic energy recovery, and reduced average engine power demand. Therefore, it is expected that hybridization with load reduction technologies offers important potential fuel energy savings for future long-haul trucks.

  12. On the design and optimisation of new fractal antenna using PSO

    NASA Astrophysics Data System (ADS)

    Rani, Shweta; Singh, A. P.

    2013-10-01

    An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been carried out and the results are compared with measurements from experimental prototypes built according to the design specifications produced by the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
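    The record names particle swarm optimisation but not its cost function or antenna parametrisation. A minimal, self-contained sketch of the basic PSO update (inertia plus cognitive and social pulls), minimising a stand-in objective rather than any real antenna model:

```python
import random

def pso(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box with a basic particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the antenna cost (the real one would score resonance
# against the 5.8 GHz target): a simple 2-D sphere function.
best, val = pso(lambda x: x[0] ** 2 + x[1] ** 2, dim=2)
```

    In the paper's setting, `f` would wrap an electromagnetic simulation (or the fitted curve) scoring how close the geometry's resonance is to the user-defined frequency.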

  13. A study of the relationship between learning styles and cognitive abilities in engineering students

    NASA Astrophysics Data System (ADS)

    Hames, E.; Baker, M.

    2015-03-01

    Learning preferences have been indirectly linked to student success in engineering programmes, without a significant body of research to connect learning preferences with cognitive abilities. A better understanding of the relationship between learning styles and cognitive abilities will allow educators to optimise the classroom experience for students. The goal of this study was to determine whether relationships exist between student learning styles, as determined by the Felder-Soloman Inventory of Learning Styles (FSILS), and their cognitive performance. Three tests were used to assess students' cognitive abilities: a matrix reasoning task, a Tower of London task, and a mental rotation task. Statistical t-tests and correlation coefficients were used to quantify the results. Results indicated that the global-sequential, active-reflective, and visual-verbal FSILS learning styles scales are related to performance on cognitive tasks. Most of these relationships were found in response times, not accuracy. Differences in task performance between gender groups (male and female) were more notable than differences between learning styles groups.

  14. Next generation bone tissue engineering: non-viral miR-133a inhibition using collagen-nanohydroxyapatite scaffolds rapidly enhances osteogenesis

    NASA Astrophysics Data System (ADS)

    Mencía Castaño, Irene; Curtin, Caroline M.; Duffy, Garry P.; O'Brien, Fergal J.

    2016-06-01

    Bone grafts are the second most transplanted materials worldwide at a global cost to healthcare systems valued over $30 billion every year. The influence of microRNAs in the regenerative capacity of stem cells offers vast therapeutic potential towards bone grafting; however their efficient delivery to the target site remains a major challenge. This study describes how the functionalisation of porous collagen-nanohydroxyapatite (nHA) scaffolds with miR-133a inhibiting complexes, delivered using non-viral nHA particles, enhanced human mesenchymal stem cell-mediated osteogenesis through the novel focus on a key activator of osteogenesis, Runx2. This study showed enhanced Runx2 and osteocalcin expression, as well as increased alkaline phosphatase activity and calcium deposition, thus demonstrating a further enhanced therapeutic potential of a biomaterial previously optimised for bone repair applications. The promising features of this platform offer potential for a myriad of applications beyond bone repair and tissue engineering, thus presenting a new paradigm for microRNA-based therapeutics.

  15. Ice-sheet modelling accelerated by graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.
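    The higher-order ice flow equations themselves are not given in the record, but the property that makes such solvers GPU-friendly is generic: each grid cell's update reads only neighbouring values from the previous iterate, so all cells can be computed independently by separate threads. A minimal 1-D stencil sketch (illustrative only, not the paper's model):

```python
def diffuse_step(u, alpha=0.1):
    """One explicit diffusion step on a 1-D grid.

    Every new value depends only on the *previous* iterate's neighbours,
    so the inner loop has no cross-iteration dependencies -- exactly the
    structure that maps onto thousands of GPU threads, one per cell.
    """
    n = len(u)
    new = u[:]                      # boundary cells stay fixed
    for i in range(1, n - 1):       # on a GPU, this loop is the thread grid
        new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

u = [0.0] * 11
u[5] = 1.0                          # a single initial spike
for _ in range(50):
    u = diffuse_step(u)
```

    A CUDA kernel for the same update would assign one thread per interior cell; the serial-versus-GPU comparison in the paper is between this loop and that kernel.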

  16. In situ click chemistry: a powerful means for lead discovery.

    PubMed

    Sharpless, K Barry; Manetsch, Roman

    2006-11-01

    Combinatorial chemistry and parallel synthesis are important and regularly applied tools for lead identification and optimisation, although they are often accompanied by challenges related to the efficiency of library synthesis and the purity of the compound library. In the last decade, novel means of lead discovery approaches have been investigated where the biological target is actively involved in the synthesis of its own inhibitory compound. These fragment-based approaches, also termed target-guided synthesis (TGS), show great promise in lead discovery applications by combining the synthesis and screening of libraries of low molecular weight compounds in a single step. Of all the TGS methods, the kinetically controlled variant is the least well known, but it has the potential to emerge as a reliable lead discovery method. The kinetically controlled TGS approach, termed in situ click chemistry, is discussed in this article.

  17. Using Network Dynamical Influence to Drive Consensus

    NASA Astrophysics Data System (ADS)

    Punzo, Giuliano; Young, George F.; MacDonald, Malcolm; Leonard, Naomi E.

    2016-05-01

    Consensus and decision-making are often analysed in the context of networks, with many studies focusing attention on ranking the nodes of a network depending on their relative importance to information routing. Dynamical influence ranks the nodes with respect to their ability to influence the evolution of the associated network dynamical system. In this study it is shown that dynamical influence not only ranks the nodes, but also provides a naturally optimised distribution of effort to steer a network from one state to another. An example is provided where the “steering” refers to the physical change in velocity of self-propelled agents interacting through a network. Distinct from other works on this subject, this study looks at directed and hence more general graphs. The findings are presented with a theoretical angle, without targeting particular applications or networked systems; however, the framework and results offer parallels with biological flocks and swarms and opportunities for design of technological networks.

  18. Biological removal of NOx from flue gas.

    PubMed

    Kumaraswamy, R; Muyzer, G; Kuenen, J G; Loosdrecht, M C M

    2004-01-01

    BioDeNOx is a novel integrated physico-chemical and biological process for the removal of nitrogen oxides (NOx) from flue gas. Due to the high temperature of flue gas, the process is performed at a temperature between 50 and 55 degrees C. Flue gas containing CO2, O2, SO2 and NOx is purged through a liquid containing Fe(II)EDTA2-. The Fe(II)EDTA2- complex effectively binds the NOx; the bound NOx is converted into N2 in a complex reaction sequence. In this paper an overview of the potential microbial reactions in the BioDeNOx process is presented. Although the process looks simple, the large number of parallel potential reactions and serial microbial conversions makes it much more complex. A detailed investigation is needed in order to properly understand and optimise the process.

  19. Tradeoffs Between Synchronization, Communication, and Work in Parallel Linear Algebra Computations

    DTIC Science & Technology

    2014-01-25

    Demmel. Electrical Engineering and Computer Sciences, University of California at Berkeley. Technical Report No. UCB/EECS-2014-8, www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-8.html, January 25, 2014.

  20. Identification and interpretation of patterns in rocket engine data: Artificial intelligence and neural network approaches

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Whitehead, Bruce; Gupta, Uday K.; Ferber, Harry

    1989-01-01

    This paper describes an expert system which is designed to perform automatic data analysis, identify anomalous events, and determine the characteristic features of these events. We have employed both artificial intelligence and neural net approaches in the design of this expert system. The artificial intelligence approach is useful because it provides (1) the use of human experts' knowledge of sensor behavior and faulty engine conditions in interpreting data; (2) the use of engine design knowledge and physical sensor locations in establishing relationships among the events of multiple sensors; (3) the use of stored analysis of past data of faulty engine conditions; and (4) the use of knowledge-based reasoning in distinguishing sensor failure from actual faults. The neural network approach appears promising because neural nets (1) can be trained on extremely noisy data and produce classifications which are more robust under noisy conditions than other classification techniques; (2) avoid the necessity of noise removal by digital filtering and therefore avoid the need to make assumptions about frequency bands or other signal characteristics of anomalous behavior; (3) can, in effect, generate their own feature detectors based on the characteristics of the sensor data used in training; and (4) are inherently parallel and therefore are potentially implementable in special-purpose parallel hardware.

  1. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results demonstrating the efficacy of our approach are presented, using a multiblock Navier-Stokes solver template and a multigrid code. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the code parallelized by manually inserting calls to the runtime library.

  2. Simulating Effects of High Angle of Attack on Turbofan Engine Performance

    NASA Technical Reports Server (NTRS)

    Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei

    2013-01-01

    A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.

  3. Dispersion of turbojet engine exhaust in flight

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1973-01-01

    The dispersion of the exhaust of turbojet engines into the atmosphere is estimated by using a model developed for the mixing of a round jet with a parallel flow. The analysis is appropriate for determining the spread and dilution of the jet exhaust from the engine exit until it is entrained in the aircraft trailing vortices. Chemical reactions are not expected to be important and are not included in the flow model. Calculations of the dispersion of the exhaust plumes of three aircraft turbojet engines with and without afterburning at typical flight conditions are presented. Calculated average concentrations for the exhaust plume from a single engine jet fighter are shown to be in good agreement with measurements made in the aircraft wake during flight.

  4. Effectiveness of an implementation optimisation intervention aimed at increasing parent engagement in HENRY, a childhood obesity prevention programme - the Optimising Family Engagement in HENRY (OFTEN) trial: study protocol for a randomised controlled trial.

    PubMed

    Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia

    2017-01-24

    Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. 
A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.

  5. Radiation dose optimisation for conventional imaging in infants and newborns using automatic dose management software: an application of the new 2013/59 EURATOM directive.

    PubMed

    Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E

    2018-04-09

    Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Given that the local DRL for infants and chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess the image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), were observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS are useful for detecting radiation protection problems and performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.

  6. Parallel-Vector Algorithm For Rapid Structural Analysis

    NASA Technical Reports Server (NTRS)

    Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

    1993-01-01

    New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.

  7. Multicore: Fallout From a Computing Evolution (LBNL Summer Lecture Series)

    ScienceCinema

    Yelick, Kathy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)

    2018-05-07

    Summer Lecture Series 2008: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.

  8. Incompressible SPH (ISPH) with fast Poisson solver on a GPU

    NASA Astrophysics Data System (ADS)

    Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.

    2018-05-01

    This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several million particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed with this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as 4 separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D) which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, GPU speed-ups of 10-18 times over a single-threaded CPU run and 1.1-4.5 times over a 16-threaded CPU run are demonstrated.
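    The Jacobi preconditioner the paper favours is simply division by the matrix diagonal, which needs no coupling between rows and so parallelises trivially. A tiny self-contained sketch of preconditioned conjugate gradients on a 1-D Poisson matrix, standing in for the paper's much larger PPE (not the DualSPHysics/ViennaCL code itself):

```python
def cg_jacobi(A, b, iters=100, tol=1e-10):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner.

    A is a dense symmetric positive-definite matrix given as a list of
    rows. The preconditioning step z = r / diag(A) is row-independent,
    which is why it suits GPU execution.
    """
    n = len(b)
    x = [0.0] * n
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - y for i, y in enumerate(matvec(x))]
    z = [r[i] / A[i][i] for i in range(n)]          # Jacobi preconditioning
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 1-D Poisson matrix (tridiagonal, SPD) as a stand-in for a small PPE.
n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = cg_jacobi(A, b)
```

    In the paper's solver the matrix is sparse and assembled per timestep from particle neighbourhoods, but the preconditioned iteration has the same shape.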

  9. The social essentials of learning: an experimental investigation of collaborative problem solving and knowledge construction in mathematics classrooms in Australia and China

    NASA Astrophysics Data System (ADS)

    Chan, Man Ching Esther; Clarke, David; Cao, Yiming

    2018-03-01

    Interactive problem solving and learning are priorities in contemporary education, but these complex processes have proved difficult to research. This project addresses the question "How do we optimise social interaction for the promotion of learning in a mathematics classroom?" Employing the logic of multi-theoretic research design, this project uses the newly built Science of Learning Research Classroom (ARC-SR120300015) at The University of Melbourne and equivalent facilities in China to investigate classroom learning and social interactions, focusing on collaborative small group problem solving as a way to make the social aspects of learning visible. In Australia and China, intact classes of local year 7 students with their usual teacher will be brought into the research classroom facilities with built-in video cameras and audio recording equipment to participate in purposefully designed activities in mathematics. The students will undertake a sequence of tasks in the social units of individual, pair, small group (typically four students) and whole class. The conditions for student collaborative problem solving and learning will be manipulated so that student and teacher contributions to that learning process can be distinguished. Parallel and comparative analyses will identify culture-specific interactive patterns and provide the basis for hypotheses about the learning characteristics underlying collaborative problem solving performance documented in the research classrooms in each country. The ultimate goals of the project are to generate, develop and test more sophisticated hypotheses for the optimisation of social interaction in the mathematics classroom in the interest of improving learning and, particularly, student collaborative problem solving.

  10. Parallel Electrochemical Treatment System and Application for Identifying Acid-Stable Oxygen Evolution Electrocatalysts

    DOE PAGES

    Jones, Ryan J. R.; Shinde, Aniketa; Guevarra, Dan; ...

    2015-01-05

    Many energy technologies require electrochemical stability or preactivation of functional materials. Due to the long experiment duration required for either electrochemical preactivation or evaluation of operational stability, parallel screening is required to enable high-throughput experimentation. We found that imposing operational electrochemical conditions on a library of materials in parallel creates several opportunities for experimental artifacts. We discuss the electrochemical engineering principles and operational parameters that mitigate artifacts in the parallel electrochemical treatment system. We also demonstrate the effects of resistive losses within the planar working electrode through a combination of finite element modeling and illustrative experiments. Operation of the parallel-plate, membrane-separated electrochemical treatment system is demonstrated by exposing a composition library of mixed metal oxides to oxygen evolution conditions in 1 M sulfuric acid for 2 h. This application is particularly important because the electrolysis and photoelectrolysis of water are promising future energy technologies inhibited by the lack of highly active, acid-stable catalysts containing only earth-abundant elements.

  11. Conical Refraction of Elastic Waves by Anisotropic Metamaterials and Application for Parallel Translation of Elastic Waves.

    PubMed

    Ahn, Young Kwan; Lee, Hyung Jin; Kim, Yoon Young

    2017-08-30

    Conical refraction, which is quite well-known in electromagnetic waves, has not been explored well in elastic waves due to the lack of proper natural elastic media. Here, we propose and design a unique anisotropic elastic metamaterial slab that realizes conical refraction for horizontally incident longitudinal or transverse waves; the single-mode wave is split into two oblique coupled longitudinal-shear waves. As an interesting application, we carried out an experiment of parallel translation of an incident elastic wave system through the anisotropic metamaterial slab. The parallel translation can be useful for ultrasonic non-destructive testing of a system hidden by obstacles. While the parallel translation resembles light refraction through a parallel plate without angle deviation between entry and exit beams, this wave behavior cannot be achieved without the engineered metamaterial because an elastic wave incident upon a dissimilar medium is always split at different refraction angles into two different modes, longitudinal and shear.

  12. Computational strategies for three-dimensional flow simulations on distributed computer systems. Ph.D. Thesis Semiannual Status Report, 15 Aug. 1993 - 15 Feb. 1994

    NASA Technical Reports Server (NTRS)

    Weed, Richard Allen; Sankar, L. N.

    1994-01-01

    An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research to develop procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.

  13. Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.

  14. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
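    The abstract's core idea, wrapping a stochastic optimiser around a noisy Monte-Carlo objective, can be sketched without the ray-tracing machinery. Here a toy 1-D "receiver" captures Gaussian-distributed rays, and a simple random search trades capture fraction against receiver size; the geometry, the loss term, and all parameters are invented for illustration:

```python
import random

def mc_capture_fraction(width, n_rays=20000, rng=random.Random(1)):
    """Monte-Carlo estimate of the fraction of rays hitting an aperture.

    Toy model (not the paper's geometry): rays land on a line with a
    Gaussian spread; a receiver of the given half-width captures them.
    """
    hits = sum(1 for _ in range(n_rays) if abs(rng.gauss(0.0, 1.0)) < width)
    return hits / n_rays

def objective(width):
    # Penalise receiver size (losses grow with area) against captured flux.
    # The optimiser only ever sees noisy Monte-Carlo estimates.
    return -(mc_capture_fraction(width) - 0.1 * width)

best_w, best_val = None, float("inf")
rng = random.Random(0)
for _ in range(40):                 # plain random search over the design space
    w = rng.uniform(0.1, 4.0)
    v = objective(w)
    if v < best_val:
        best_w, best_val = w, v
```

    The paper uses a more sophisticated stochastic optimiser over a real receiver shape, but the structure, noisy objective evaluated by ray sampling and a sampler that tolerates that noise, is the same.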

  15. Automotive applications of superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginsberg, M.

    1987-01-01

    These proceedings compile papers on supercomputers in the automobile industry. Titles include: An automotive engineer's guide to the effective use of scalar, vector, and parallel computers; fluid mechanics, finite elements, and supercomputers; and Automotive crashworthiness performance on a supercomputer.

  16. Optimizing SIEM Throughput on the Cloud Using Parallelization

    PubMed Central

    Alam, Masoom; Ihsan, Asif; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, M Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently, so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of a security framework, OSTROM, built on the Esper complex event processing (CEP) engine, under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of incoming events rather than by the queries being processed. The architecture in which one quarter of the total events is submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory and CPU usage. PMID:27851762
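    The best-performing configuration can be sketched abstractly as follows. This is a hypothetical illustration with stand-in "queries", not Esper EPL:

```python
from collections import Counter

# Sketch of the winning architecture from the abstract: the event stream is
# split evenly across 4 engine instances (each sees 1/4 of the events) while
# every instance evaluates the full query set. Queries here are plain
# predicates, invented for the example.

NUM_INSTANCES = 4
QUERIES = {
    "failed_logins": lambda e: e["type"] == "login" and not e["ok"],
    "port_scans":    lambda e: e["type"] == "scan",
}

def partition(events):
    """Round-robin the stream so each instance gets an even share."""
    shards = [[] for _ in range(NUM_INSTANCES)]
    for i, e in enumerate(events):
        shards[i % NUM_INSTANCES].append(e)
    return shards

def run_instance(shard):
    """Each instance runs *all* queries over its shard of events."""
    hits = Counter()
    for e in shard:
        for name, pred in QUERIES.items():
            if pred(e):
                hits[name] += 1
    return hits

events = (
    [{"type": "login", "ok": False}] * 6
    + [{"type": "scan"}] * 3
    + [{"type": "login", "ok": True}] * 7
)
total = Counter()
for shard in partition(events):
    total += run_instance(shard)  # per-instance results merge by addition
```

Because every instance holds every query, per-instance results are independent counts that merge by simple addition, which is why this layout scales with the event rate rather than the query count.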

  17. Multidisciplinary Design Optimization (MDO) Methods: Their Synergy with Computer Technology in Design Process

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1998-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
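    The cellular-automaton idea mentioned above, complex behaviour emerging from many simple locally-interacting models, each updatable in parallel, can be made concrete with a short sketch:

```python
# One synchronous step of an elementary (1-D, two-state) cellular automaton
# with periodic boundaries. Each cell depends only on itself and its two
# neighbours, so every cell can in principle be updated in parallel and the
# work scales linearly with the number of cells (or processors).

def ca_step(cells, rule):
    """Apply an elementary CA rule (Wolfram numbering) to a list of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right   # 3-bit neighbourhood
        out.append((rule >> idx) & 1)               # look up the rule bit
    return out

# Rule 90 from a single seed cell: each step XORs the two neighbours,
# growing the well-known Sierpinski-triangle pattern.
state = [0] * 7
state[3] = 1
state = ca_step(state, 90)
```

The update of cell `i` reads only `i-1`, `i`, `i+1`, so a domain-decomposed implementation needs just a one-cell halo exchange per step: exactly the intrinsically parallel structure the paper calls for.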

  18. Estimation of CO2 reduction by parallel hard-type power hybridization for gasoline and diesel vehicles.

    PubMed

    Oh, Yunjung; Park, Junhong; Lee, Jong Tae; Seo, Jigu; Park, Sungwook

    2017-10-01

    The purpose of this study is to investigate possible improvements in ICEVs by implementing fuzzy logic-based parallel hard-type power hybrid systems. Two types of conventional ICEVs (gasoline and diesel) and two types of HEVs (gasoline-electric and diesel-electric) were generated using vehicle and powertrain simulation tools and a Matlab-Simulink application programming interface. For the gasoline and gasoline-electric HEV vehicles, the prediction accuracy of four types of LDV models was validated by comparative analysis against the chassis dynamometer and OBD test data. The predicted results show strong correlation with the test data. The operating points of the internal combustion engines and electric motors are well controlled in the high-efficiency region, and battery SOC was kept within ±1.6%. For the diesel case, however, a virtual diesel-electric HEV was generated, because no vehicle was available with engine and vehicle specifications similar to the diesel ICE vehicle. Using a fuzzy logic-based parallel hybrid system in conventional ICEVs demonstrated that the HEVs showed superior performance in terms of fuel consumption and CO2 emissions in most driving modes. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine, NASA Advanced Air Vehicles Program - Commercial Supersonic Technology Project - AeroServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable to the modeling of any axial-flow compressor design for accurate time-domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
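    The final step described, generating time-domain disturbances from a derived transfer function, can be illustrated generically. The single real pole below is an invented example, not one of the paper's derived acoustic transfer functions:

```python
import math
import random

# Once a disturbance is described by a transfer function, a representative
# time-domain signal can be generated by filtering white noise through it.
# Here a single real pole at s = -a (transfer function 1/(s + a)) is
# discretised with a zero-order hold and driven by Gaussian noise.

def simulate_disturbance(a=2.0, dt=0.01, n=1000, seed=0):
    """Return n samples of filtered white noise (coloured disturbance)."""
    rng = random.Random(seed)
    phi = math.exp(-a * dt)        # discrete-time pole of the filter
    y, out = 0.0, []
    for _ in range(n):
        # One-pole low-pass update: new sample blends state and fresh noise.
        y = phi * y + (1.0 - phi) * rng.gauss(0.0, 1.0)
        out.append(y)
    return out

signal = simulate_disturbance()
```

The pole location sets the correlation time of the simulated turbulence; richer pole/zero sets, as derived in the paper, shape the spectrum for each disturbed quantity.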

  20. Topology optimisation for natural convection problems

    NASA Astrophysics Data System (ADS)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole

    2014-12-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
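    The two interpolations mentioned, Brinkman velocity penalisation and effective-conductivity blending, are commonly written as simple functions of the design density. The forms and constants below are typical of the density-based literature and are not necessarily those used in the paper:

```python
# Density-based topology optimisation interpolations (illustrative forms).
# A design density rho in [0, 1] controls both the Brinkman penalisation
# alpha(rho), which damps velocities inside solid regions, and the effective
# thermal conductivity k(rho), which blends the solid and fluid values.
# Here rho = 1 denotes fluid and rho = 0 denotes solid; alpha_max, q and the
# conductivities are invented example values.

def brinkman_alpha(rho, alpha_max=1e5, q=0.1):
    """RAMP-style Brinkman term: 0 in fluid (rho=1), alpha_max in solid (rho=0)."""
    return alpha_max * (1.0 - rho) / (1.0 + q * rho)

def effective_conductivity(rho, k_fluid=1.0, k_solid=100.0):
    """Interpolate conductivity between solid (rho=0) and fluid (rho=1)."""
    return k_solid + rho * (k_fluid - k_solid)
```

The convexity parameter `q` controls how quickly intermediate densities are penalised, which steers the optimiser toward crisp solid/fluid designs.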

  1. Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.

    PubMed

    Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D

    2011-12-12

    We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst to burst power variations due to differential access loss in the upstream path in carrier distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-Band and also over a transmission distance of 80 km. © 2011 Optical Society of America

  2. Development of natural gas rotary engines

    NASA Astrophysics Data System (ADS)

    Mack, J. R.

    1991-08-01

    Development of natural gas-fueled rotary engines was pursued on the parallel paths of converting Mazda automotive engines and of establishing technology for, and demonstrating, a test model of a larger John Deere Technologies Incorporated (JDTI) rotary engine with a power capability of 250 HP per power section, for future production of multi-rotor engines with power ratings of 250, 500, and 1000 HP and upward. Mazda engines were converted to natural gas and were characterized in the laboratory, which was followed by nearly 12,000 hours of testing in three different field installations. To develop technology for the larger JDTI engine, laboratory and engine materials testing was accomplished. Extensive combustion analysis computer codes were modified, verified, and utilized to predict engine performance, to guide parameters for actual engine design, and to identify further improvements. A single-rotor test engine of 5.8 liter displacement was designed for natural gas operation based on the JDTI 580 engine series. This engine was built and tested. It ran well and essentially achieved predicted performance. Lean combustion and low NOx emissions were demonstrated.

  3. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
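    The core idea, splitting one objective into components and comparing solutions by Pareto dominance over those components, can be sketched with a toy decomposition. The split below is invented for illustration; the article's elementary landscape decompositions are problem-specific:

```python
# Multi-objectivisation sketch: a single bit-string objective f = f1 + f2 is
# decomposed into two components, and a multi-objective algorithm compares
# solutions by Pareto dominance over (f1, f2) instead of by the summed value.
# The even/odd split is a toy stand-in for an elementary landscape
# decomposition.

def decompose(x):
    """Split a one-max-style objective into two additive components."""
    f1 = sum(x[::2])    # contribution of even positions
    f2 = sum(x[1::2])   # contribution of odd positions
    return (f1, f2)

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (maximisation)."""
    return all(ai >= bi for ai, bi in zip(a, b)) and any(
        ai > bi for ai, bi in zip(a, b)
    )

a = decompose([1, 0, 1, 0])
b = decompose([1, 0, 0, 0])
```

Since the components sum back to the original objective, any solution optimal for the single-objective problem lies on the Pareto front of the decomposed one, while the extra dimensions give algorithms like NSGA-II and SPEA2 more room to explore.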

  4. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. 
Its application to realistic configurations has not been carried out.

  5. Design of Object-Oriented Distributed Simulation Classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. 
Its application to realistic configurations has not been carried out.

  6. Aesthetics and ethics in engineering: insights from Polanyi.

    PubMed

    Dias, Priyan

    2011-06-01

    Polanyi insisted that scientific knowledge was intensely personal in nature, though held with universal intent. His insights regarding the personal values of beauty and morality in science are first enunciated. These are then explored for their relevance to engineering. It is shown that the practice of engineering is also governed by aesthetics and ethics. For example, Polanyi's three spheres of morality in science--that of the individual scientist, the scientific community and the wider society--have parallel entities in engineering. The existence of shared values in engineering is also demonstrated: in aesthetics, through an example that shows convergence of practitioner opinion to solutions that represent accepted models of aesthetics; and in ethics, through the recognition that many professional engineering institutions hold that the safety of the public supersedes the interests of the client. Such professional consensus can be seen as justification for studying engineering aesthetics and ethics as inter-subjective disciplines.

  7. Programmable stream prefetch with resource optimization

    DOEpatents

    Boyle, Peter; Christ, Norman; Gara, Alan; Mawhinney, Robert; Ohmacht, Martin; Sugavanam, Krishnan

    2013-01-08

    A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. The engine checks whether there exists valid data corresponding to the first memory address in an array if the first memory address is present and valid in the table. If there is not yet valid data corresponding to the first memory address in the array, the engine increments the prefetching depth of the first stream to which the first memory address belongs and fetches the cache line associated with the first memory address from the at least one cache memory device. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed.
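    The table-lookup-and-deepen behaviour described in the abstract can be sketched as follows. This is a simplified, hypothetical model of the mechanism, not the patented hardware design:

```python
# Simplified stream-prefetcher model: a table maps each tracked stream to a
# prefetch depth. A load that hits inside a stream's window deepens the
# stream and prefetches the lines now inside the window; a miss establishes
# a new stream. Line size, depth cap and the eviction-free table are all
# simplifying assumptions for this sketch.

LINE = 64          # assumed cache-line size in bytes
MAX_DEPTH = 8      # assumed cap on per-stream prefetch depth

class StreamPrefetcher:
    def __init__(self):
        self.streams = {}        # stream base line -> current prefetch depth
        self.prefetched = set()  # line numbers already fetched

    def load(self, addr):
        line = addr // LINE
        for base in list(self.streams):
            if base <= line <= base + self.streams[base]:
                # Hit inside an existing stream: deepen it and prefetch the
                # lines newly inside the (deeper) window.
                depth = min(self.streams[base] + 1, MAX_DEPTH)
                self.streams[base] = depth
                for l in range(line + 1, base + depth + 1):
                    self.prefetched.add(l)
                return "hit"
        self.streams[line] = 1   # miss: start a new stream of depth 1
        self.prefetched.add(line + 1)
        return "miss"

p = StreamPrefetcher()
r1 = p.load(0)    # miss: new stream at line 0, line 1 prefetched
r2 = p.load(64)   # hit on line 1: depth grows, line 2 prefetched
```

The adaptive depth is the key point: sequential access patterns earn progressively deeper prefetching, while one-off loads cost only a single speculative line.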

  8. Rocket Engine Numerical Simulator (RENS)

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.

    1997-01-01

    Work is being done at three universities to help today's NASA engineers use the knowledge and experience of their Apollo-era predecessors in designing liquid rocket engines. Ground-breaking work is being done in important subject areas to create a prototype of the most important functions for the Rocket Engine Numerical Simulator (RENS). The goal of RENS is to develop an interactive, real-time application that engineers can utilize for comprehensive preliminary propulsion system design functions. RENS will employ computer science and artificial intelligence research in knowledge acquisition, computer code parallelization and objectification, expert system architecture design, and object-oriented programming. In 1995, a 3-year grant from the NASA Lewis Research Center was awarded to Dr. Douglas Moreman and Dr. John Dyer of Southern University at Baton Rouge, Louisiana, to begin acquiring knowledge in liquid rocket propulsion systems. Resources of the University of West Florida in Pensacola were enlisted to begin the process of eliciting knowledge from senior NASA engineers who are recognized experts in liquid rocket engine propulsion systems. Dr. John Coffey of the University of West Florida is utilizing his expertise in interviewing and concept mapping techniques to encode, classify, and integrate information obtained through personal interviews. The expertise extracted from the NASA engineers has been put into concept maps with supporting textual, audio, graphic, and video material. A fundamental concept map was delivered by the end of the first year of work, and the development of maps containing increasing amounts of information is continuing. In 1996, the Southern University/University of West Florida team conducted a 4-day group interview with a panel of five experts to discuss failures of the RL10 rocket engine in conjunction with the Centaur launch vehicle. 
The discussion was recorded on video and audio tape. Transcriptions of the entire proceedings and an abbreviated video presentation of the discussion highlights are under development. Also in 1996, two additional 3-year grants were awarded to conduct parallel efforts that would complement the work being done by Southern University and the University of West Florida. Dr. Prem Bhalla of Jackson State University in Jackson, Mississippi, is developing the architectural framework for RENS. By employing Rational Rose and Booch Object-Oriented Programming (OOP) technology, Dr. Bhalla is developing the basic structure of RENS by identifying and encoding propulsion system components, their individual characteristics, and their cross-functionality and dependencies. Dr. Ruknet Cezzar of Hampton University, located in Hampton, Virginia, began working on the parallelization and objectification of rocket engine analysis and design codes. Dr. Cezzar will use the Turbo C++ OOP language to translate important liquid rocket engine computer codes from FORTRAN and permit their inclusion into the RENS framework being developed at Jackson State University. The Southern University/University of West Florida grant was extended by 1 year to coordinate the conclusion of all three efforts in 1999.

  9. Optimising ICT Effectiveness in Instruction and Learning: Multilevel Transformation Theory and a Pilot Project in Secondary Education

    ERIC Educational Resources Information Center

    Mooij, Ton

    2004-01-01

    Specific combinations of educational and ICT conditions, including computer use, may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how best to achieve this optimisation. A theoretical…

  10. Estimation of apparent rate coefficients for radionuclides interacting with marine sediments from Novaya Zemlya.

    PubMed

    Børretzen, P; Salbu, B

    2000-10-30

    To assess the impact of radionuclides entering the marine environment from dumped nuclear waste, information on the physico-chemical forms of radionuclides and their mobility in seawater-sediment systems is essential. Due to interactions with sediment components, sediments may act as a sink, reducing the mobility of radionuclides in seawater. Due to remobilisation, however, contaminated sediments may also act as a potential source of radionuclides to the water phase. In the present work, time-dependent interactions of low molecular mass (LMM, i.e. species < 10 kDa) radionuclides with sediments from the Stepovogo Fjord, Novaya Zemlya and their influence on the distribution coefficients (Kd values) have been studied in tracer experiments using 109Cd2+ and 60Co2+ as gamma tracers. Sorption of the LMM tracers occurred rapidly and the estimated equilibrium Kd(eq)-values for 109Cd and 60Co were 500 and 20000 ml/g, respectively. Remobilisation of 109Cd and 60Co from contaminated sediment fractions as a function of contact time was studied using sequential extraction procedures. Due to redistribution, the reversibly bound fraction of the gamma tracers decreased with time, while the irreversibly (or slowly reversibly) associated fraction of the gamma tracers increased. Two different three-compartment models, one consecutive and one parallel, were applied to describe the time-dependent interaction of the LMM tracers with operationally defined reversible and irreversible (or slowly reversible) sediment fractions. The interactions between these fractions were described using first order differential equations. By fitting the models to the experimental data, apparent rate constants were obtained using numerical optimisation software. 
The model optimisations showed that the interactions of LMM 60Co were well described by the consecutive model, while the parallel model was more suitable for describing the interactions of LMM 109Cd with the sediments, when the squared sums of residuals were compared. The rate of sorption into the irreversibly (or slowly reversibly) associated fraction was greater than the rate of desorption from the reversibly bound fraction (i.e. k3 > k2) for both radionuclides. Thus, the Novaya Zemlya sediments are expected to act as a sink for the radionuclides under oxic conditions, and transport to the water phase should mainly be attributed to resuspended particles.
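    The consecutive three-compartment model referred to above (solution, reversibly bound, irreversibly bound, with first-order rate constants) can be sketched with a simple Euler integration. The rate constants here are invented for illustration, not the paper's fitted values:

```python
# Consecutive three-compartment model with first-order kinetics:
#   S (solution) --k1--> R (reversibly bound) --k3--> I (irreversibly bound)
#              <--k2--
# solved by forward-Euler stepping. All rate constants and times are
# invented example values; a real fit would adjust k1..k3 against data.

def simulate(k1, k2, k3, t_end=10.0, dt=0.001):
    """Fractions in (solution, reversible, irreversible) at t_end."""
    s, r, i = 1.0, 0.0, 0.0      # all tracer starts in solution
    for _ in range(int(t_end / dt)):
        ds = -k1 * s + k2 * r
        dr = k1 * s - (k2 + k3) * r
        di = k3 * r
        s, r, i = s + ds * dt, r + dr * dt, i + di * dt
    return s, r, i

# With k3 > k2 the sediment acts as a sink: tracer accumulates in the
# irreversibly bound compartment.
s, r, i = simulate(k1=1.0, k2=0.1, k3=0.3)
```

Fitting, as in the paper, then amounts to minimising the squared residuals between simulated and measured compartment fractions over the rate constants.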

  11. Semiannual report, 1 April - 30 September 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software for parallel computers. Research in these areas is discussed.

  12. Assessment of Self-Efficacy in Systems Engineering as an Indicator of Competency Level Achievement

    DTIC Science & Technology

    2014-06-01

    [Record contains only text fragments:] A contents entry, "C. Research on Self-Efficacy in Information Technology: A Parallel to Systems Engineering" (with subsection "1. Stakeholder Analysis"); an abbreviations list: EV (expectancy value theory), FA (factor analysis), IT (information technology), KSA (knowledge, skills, abilities), MIS (management information systems), NPS; and a finding that one survey item in particular reflected statistically significant pre- and post-survey results at p < .001, namely the student's ability to pick a technology for

  13. Dynamic Imbalance Would Counter Offcenter Thrust

    NASA Technical Reports Server (NTRS)

    Mccanna, Jason

    1994-01-01

    Dynamic imbalance generated by offcenter thrust on rotating body eliminated by shifting some of mass of body to generate opposing dynamic imbalance. Technique proposed originally for spacecraft including massive crew module connected via long, lightweight intermediate structure to massive engine module, such that artificial gravitation in crew module generated by rotating spacecraft around axis parallel to thrust generated by engine. Also applicable to dynamic balancing of rotating terrestrial equipment to which offcenter forces applied.

  14. A Higher-Order Trapezoidal Vector Vortex Panel for Subsonic Flow.

    DTIC Science & Technology

    1980-12-01

    [Record contains only text fragments:] A thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science, by Ronald E. Luther, B.S., Capt USAF, Graduate Aeronautical Engineering, December 1980; approved for public release. The method also permits analysis of cranked leading and/or trailing edges. The root edge, tip edge and all chordwise boundaries are parallel to the x-axis

  15. Feedback control methods for drug dosage optimisation. Concepts, classification and clinical application.

    PubMed

    Vozeh, S; Steimer, J L

    1985-01-01

    The concept of feedback control methods for drug dosage optimisation is described from the viewpoint of control theory. The control system consists of 5 parts: (a) patient (the controlled process); (b) response (the measured feedback); (c) model (the mathematical description of the process); (d) adaptor (to update the parameters); and (e) controller (to determine optimum dosing strategy). In addition to the conventional distinction between open-loop and closed-loop control systems, a classification is proposed for dosage optimisation techniques which distinguishes between tight-loop and loose-loop methods depending on whether physician's interaction is absent or included as part of the control step. Unlike engineering problems where the process can usually be controlled by fully automated devices, therapeutic situations often require that the physician be included in the decision-making process to determine the 'optimal' dosing strategy. Tight-loop and loose-loop methods can be further divided into adaptive and non-adaptive, depending on the presence of the adaptor. The main application areas of tight-loop feedback control methods are general anaesthesia, control of blood pressure, and insulin delivery devices. Loose-loop feedback methods have been used for oral anticoagulation and in therapeutic drug monitoring. The methodology, advantages and limitations of the different approaches are reviewed. A general feature common to all application areas could be observed: to perform well under routine clinical conditions, which are characterised by large interpatient variability and sometimes also intrapatient changes, control systems should be adaptive. Apart from application in routine drug treatment, feedback control methods represent an important research tool. They can be applied for the investigation of pathophysiological and pharmacodynamic processes. A most promising application is the evaluation of the relationship between an intermediate response (e.g. 
drug level), which is often used as feedback for dosage adjustment, and the final therapeutic goal.
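    The five-part control loop described above can be caricatured in a few lines. The one-compartment model, the damped parameter update, and all values below are invented for illustration:

```python
# Toy adaptive feedback dosing loop in the sense of the review's five parts:
# the patient is the controlled process, the measured level is the feedback,
# a one-compartment steady-state model is the process description, `adapt`
# is the adaptor, and `next_dose` is the controller. Everything here is a
# hypothetical illustration, not a clinical algorithm.

TARGET = 10.0  # desired steady-state drug level (arbitrary units)

def predicted_level(dose, clearance):
    """Steady-state level of the simple one-compartment model."""
    return dose / clearance

def adapt(clearance_est, dose, measured):
    """Adaptor: damped update of the parameter from the measured feedback."""
    observed_clearance = dose / measured
    return 0.5 * clearance_est + 0.5 * observed_clearance

def next_dose(clearance_est):
    """Controller: choose the dose the model predicts will hit TARGET."""
    return TARGET * clearance_est

# One cycle: the initial guess is wrong (true clearance 2.0, guess 1.0).
true_clearance = 2.0
est = 1.0
dose = next_dose(est)                             # under-doses the patient
measured = predicted_level(dose, true_clearance)  # the patient's actual level
est = adapt(est, dose, measured)                  # estimate moves toward 2.0
dose = next_dose(est)                             # next dose corrected upward
```

In a loose-loop method the final `next_dose` value would be a recommendation presented to the physician rather than an automatically administered dose.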

  16. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    NASA Astrophysics Data System (ADS)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study of an n-heptane combustion event and the associated soot formation process in a constant-volume combustion chamber is reported. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code, and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from those in the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of the optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observation. The variations of the spatial soot distribution and of the soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low- and high-density conditions, are also reproduced.

  17. Laser polishing of 3D printed mesoscale components

    NASA Astrophysics Data System (ADS)

    Bhaduri, Debajyoti; Penchev, Pavel; Batal, Afif; Dimov, Stefan; Soo, Sein Leung; Sten, Stella; Harrysson, Urban; Zhang, Zhenxue; Dong, Hanshan

    2017-05-01

    Laser polishing of various engineered materials such as glass, silica, steel, nickel and titanium alloys, has attracted considerable interest in the last 20 years due to its superior flexibility, operating speed and capability for localised surface treatment compared to conventional mechanical based methods. The paper initially reports results from process optimisation experiments aimed at investigating the influence of laser fluence and pulse overlap parameters on resulting workpiece surface roughness following laser polishing of planar 3D printed stainless steel (SS316L) specimens. A maximum reduction in roughness of over 94% (from ∼3.8 to ∼0.2 μm Sa) was achieved at the optimised settings (fluence of 9 J/cm2 and overlap factors of 95% and 88-91% along beam scanning and step-over directions respectively). Subsequent analysis using both X-ray photoelectron spectroscopy (XPS) and glow discharge optical emission spectroscopy (GDOES) confirmed the presence of surface oxide layers (predominantly consisting of Fe and Cr phases) up to a depth of ∼0.5 μm when laser polishing was performed under normal atmospheric conditions. Conversely, formation of oxide layers was negligible when operating in an inert argon gas environment. The microhardness of the polished specimens was primarily influenced by the input thermal energy, with greater sub-surface hardness (up to ∼60%) recorded in the samples processed with higher energy density. Additionally, all of the polished surfaces were free of the scratch marks, pits, holes, lumps and irregularities that were prevalent on the as-received stainless steel samples. The optimised laser polishing technology was consequently implemented for serial finishing of structured 3D printed mesoscale SS316L components. This led to substantial reductions in areal Sa and St parameters by 75% (0.489-0.126 μm) and 90% (17.71-1.21 μm) respectively, without compromising the geometrical accuracy of the native 3D printed samples.

  18. Review of sample preparation strategies for MS-based metabolomic studies in industrial biotechnology.

    PubMed

    Causon, Tim J; Hann, Stephan

    2016-09-28

    Fermentation and cell culture biotechnology in the form of so-called "cell factories" now plays an increasingly significant role in the production of both large (e.g. proteins, biopharmaceuticals) and small organic molecules for a wide variety of applications. However, the associated metabolic engineering optimisation processes, relying on genetic modification of the organisms used in cell factories or alteration of production conditions, remain a challenging undertaking for improving the final yield and quality of cell factory products. In addition to genomic, transcriptomic and proteomic workflows, analytical metabolomics continues to play a critical role in studying detailed aspects of critical pathways (e.g. via targeted quantification of metabolites), in the identification of biosynthetic intermediates, and also in phenotype differentiation and the elucidation of previously unknown pathways (e.g. via non-targeted strategies). However, the diversity of primary and secondary metabolites and the broad concentration ranges encompassed during typical biotechnological processes mean that simultaneous extraction and robust analytical determination of all parts of the metabolome of interest is effectively impossible. As the integration of metabolome data with transcriptome and proteome data is an essential goal of both targeted and non-targeted methods addressing production optimisation goals, additional sample preparation steps beyond the necessary sampling, quenching and extraction protocols, including clean-up, analyte enrichment, and derivatisation, are important considerations for some classes of metabolites, especially those present in low concentrations or exhibiting poor stability. This contribution critically assesses the potential of current sample preparation strategies applied in metabolomic studies of industrially relevant cell factory organisms using mass spectrometry-based platforms primarily coupled to liquid-phase sample introduction (i.e. flow injection, liquid chromatography, or capillary electrophoresis). Particular focus is placed on the selectivity and degree of enrichment attainable, as well as demands of speed, absolute quantification, robustness and, ultimately, consideration of fully-integrated bioanalytical solutions to optimise sample handling and throughput. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Simulation/Emulation Techniques: Compressing Schedules With Parallel (HW/SW) Development

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark L.; Hoang, June

    2014-01-01

    NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's Kedalion engineering analysis lab has been validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture. Kedalion has validated many of the Orion HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics solution space, inserting new techniques and skills into the Multi-Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, Commercial-off-the-shelf (COTS) products, early rapid prototyping, in-house expertise and tools, and extensive use of simulators and emulators, NASA has achieved cost-effective paradigms that are currently serving the Orion program effectively. Elements of long-lead custom hardware on the Orion program have necessitated early use of simulators and emulators in advance of deliverable hardware to achieve parallel design and development on a compressed schedule.

  20. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    This research program deals with the application of high-performance computing methods to the analysis of complete jet engines. We initiated this program by applying the two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. We then focused attention on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion that results from these structural displacements. This is treated by a new arbitrary Lagrangian-Eulerian (ALE) technique that models the fluid mesh motion as that of a fictitious mass-spring network. New partitioned analysis procedures to treat this coupled three-component problem are developed. These procedures involve delayed corrections and subcycling. Preliminary results on the stability, accuracy, and MPP computational efficiency are reported.
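    The fictitious mass-spring treatment of fluid mesh motion can be illustrated with a deliberately simplified sketch (not the paper's actual three-component coupling): with equal-stiffness lineal springs, each free mesh node relaxes toward the average of its neighbours while nodes on the moving structural boundary are held fixed. The function name and the 1-D chain below are hypothetical.

```python
import numpy as np

def relax_mesh(nodes, neighbors, fixed, iters=200):
    """Fictitious spring network: each free node is pulled toward the
    equilibrium of equal-stiffness lineal springs to its neighbours,
    which reduces to repeated neighbour averaging (Gauss-Seidel style)."""
    nodes = nodes.copy()
    for _ in range(iters):
        for i, nbrs in neighbors.items():
            if i not in fixed:
                nodes[i] = nodes[nbrs].mean(axis=0)
    return nodes

# 1-D chain of 5 nodes; move the right boundary and let the interior follow
x = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
nbr = {i: [i - 1, i + 1] for i in range(1, 4)}
x[4] = 5.0                      # structural displacement of the boundary node
x = relax_mesh(x, nbr, fixed={0, 4})
# interior nodes redistribute uniformly between the fixed ends: 1.25, 2.5, 3.75
```

    In a real ALE solver the spring stiffnesses are usually distance-dependent to prevent element inversion; uniform stiffness keeps the sketch minimal.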

  1. Reliability of clinical impact grading by healthcare professionals of common prescribing error and optimisation cases in critical care patients.

    PubMed

    Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H

    2017-04-01

    To identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded the severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between- and within-profession rater reliability and modal clinical impact grading. Between- and within-profession rater reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-profession variability highlights the importance of multidisciplinary perspectives in the assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  2. Further two-dimensional code development for Stirling space engine components

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir; Tew, Roy C.; Dudenhoefer, James E.

    1990-01-01

    The development of multidimensional models of Stirling engine components is described. Two-dimensional parallel-plate models of an engine regenerator and a cooler were used to study heat transfer under conditions of laminar, incompressible oscillating flow. Substantial differences in the nature of the temperature variations over the cycle were observed for the cooler as contrasted with the regenerator. When the two-dimensional cooler model was used to calculate a heat transfer coefficient, it yielded a very different result from that calculated using steady-flow correlations. Simulation results for the regenerator and the cooler are presented.

  3. Combustion Control System Design of Diesel Engine via ASPR based Output Feedback Control Strategy with a PFC

    NASA Astrophysics Data System (ADS)

    Mizumoto, Ikuro; Tsunematsu, Junpei; Fujii, Seiya

    2016-09-01

    In this paper, a design method for an output feedback control system with a simple feedforward input is proposed for a combustion model of a diesel engine, based on the almost strictly positive realness (ASPR-ness) of the controlled system. A parallel feedforward compensator (PFC) design scheme that renders the resulting augmented controlled system ASPR is also proposed in order to design a stable output feedback control system for the considered combustion model. The effectiveness of the proposed method is confirmed through numerical simulations.

  4. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-02-01

    The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating-point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.

  5. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80

    NASA Technical Reports Server (NTRS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-01-01

    The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating-point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.

  6. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.

  7. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established, combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
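    The Latin hypercube sampling mentioned here can be illustrated with a plain (non-optimised) numpy sketch: each dimension is split into n equal strata, one point is drawn per stratum, and the stratum orderings are shuffled independently per dimension. The function name and sample sizes are illustrative, not from the paper.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Stratified sampling on the unit hypercube: each dimension is split
    into n_samples equal strata and exactly one point falls in each."""
    rng = np.random.default_rng(rng)
    # one uniform draw inside each stratum, per dimension
    u = rng.uniform(size=(n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # independently shuffle the stratum order in every dimension
    for d in range(n_dims):
        rng.shuffle(strata[:, d])
    return strata

samples = latin_hypercube(100, 4, rng=42)   # 100 points in 4 dimensions
```

    Normally distributed samples, as used in the paper, would then be obtained by pushing each column through the inverse normal CDF; "optimal" LHS additionally maximises a space-filling criterion over many candidate shufflings.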

  8. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of an external disturbance and a communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the problem in the presence of both the disturbance and the communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
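    A stripped-down sketch of the underlying distributed optimisation setting (omitting the paper's disturbance compensation and communication delay): each agent holds a private quadratic cost and combines a consensus step over a ring topology with a local gradient step, so the network settles near the minimiser of the summed cost. All values below are illustrative.

```python
import numpy as np

# each agent's private cost: f_i(x) = (x - a_i)^2; the network minimises
# sum_i f_i, whose unique optimum is the mean of the a_i (here 3.0)
a = np.array([1.0, 4.0, 2.0, 5.0])
n = len(a)

# ring communication topology (each agent talks to two neighbours)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0

x = np.zeros(n)        # each agent's local estimate
alpha, eps = 0.01, 0.1  # gradient and consensus step sizes
for _ in range(5000):
    grad = 2 * (x - a)                      # local gradient, private to agent i
    consensus = W @ x - W.sum(axis=1) * x   # pull toward neighbours' estimates
    x = x + eps * consensus - alpha * grad
# all agents end up close to the global optimum mean(a) = 3.0
```

    With fixed step sizes the agents agree only up to an O(alpha/eps) residual; diminishing step sizes (or the paper's more elaborate dynamics) remove that bias.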

  9. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

    Dynamic optimisation problems with characteristic times, which arise widely in many areas, are one of the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving these problems. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomain so as to avoid complex nonlinear integrals. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail against reported literature methods. The results show the effectiveness of the proposed method.
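    The key property behind expressing the terminal state as a linear combination of values at the LG points is Gauss-Legendre quadrature exactness: an n-point rule integrates polynomials up to degree 2n-1 exactly. A minimal numpy illustration of that property (not the paper's method itself):

```python
import numpy as np

# Legendre-Gauss (LG) collocation points and quadrature weights on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(3)

# a 3-point LG rule is exact for polynomials of degree <= 5, which is why
# pseudospectral methods can recover the state at the end of a subdomain as
# a weighted linear combination of values at the LG points
integral = np.dot(weights, nodes**4)   # exact value of ∫_{-1}^{1} x^4 dx = 2/5
```

    Mapping a subdomain [t_k, t_{k+1}] onto [-1, 1] by an affine change of variable then turns the state integral on that subdomain into the same weighted sum, scaled by half the subdomain length.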

  10. Medicines optimisation: priorities and challenges.

    PubMed

    Kaufman, Gerri

    2016-03-23

    Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.

  11. Parallel steady state studies on a milliliter scale accelerate fed-batch bioprocess design for recombinant protein production with Escherichia coli.

    PubMed

    Schmideder, Andreas; Cremer, Johannes H; Weuster-Botz, Dirk

    2016-11-01

    In general, fed-batch processes are applied for recombinant protein production with Escherichia coli (E. coli). However, state-of-the-art methods for identifying suitable reaction conditions suffer from severe drawbacks, i.e. direct transfer of process information from parallel batch studies is often defective, and sequential fed-batch studies are time-consuming and cost-intensive. In this study, continuously operated stirred-tank reactors on a milliliter scale were applied to identify suitable reaction conditions for fed-batch processes. Isopropyl β-d-1-thiogalactopyranoside (IPTG) induction strategies were varied in parallel-operated stirred-tank bioreactors to study the effects on the continuous production of the recombinant protein photoactivatable mCherry (PAmCherry) with E. coli. Best-performing induction strategies were transferred from the continuous processes on a milliliter scale to liter-scale fed-batch processes. Inducing recombinant protein expression by dynamically increasing the IPTG concentration to 100 µM led to an increase in the product concentration of 21% (8.4 g L-1) compared to an implemented high-performance production process with the most frequently applied induction strategy of a single addition of 1000 µM IPTG. Thus, identifying feasible reaction conditions for fed-batch processes in parallel continuous studies on a milliliter scale was shown to be a powerful, novel method to accelerate bioprocess design in a cost-reducing manner. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1426-1435, 2016. © 2016 American Institute of Chemical Engineers.

  12. Multi-cylinder hot gas engine

    DOEpatents

    Corey, John A.

    1985-01-01

    A multi-cylinder hot gas engine having an equal angle, V-shaped engine block in which two banks of parallel, equal length, equally sized cylinders are formed together with annular regenerator/cooler units surrounding each cylinder, and wherein the pistons are connected to a single crankshaft. The hot gas engine further includes an annular heater head disposed around a central circular combustor volume having a new balanced-flow hot-working-fluid manifold assembly that provides optimum balanced flow of the working fluid through the heater head working fluid passageways which are connected between each of the cylinders and their respective associated annular regenerator units. This balanced flow provides even heater head temperatures and, therefore, maximum average working fluid temperature for best operating efficiency with the use of a single crankshaft V-shaped engine block.

  13. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
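    Consensus clustering methods such as CC typically work from an agreement (co-association) matrix over the base clusterings: the fraction of runs in which each pair of points lands in the same cluster. A minimal numpy sketch of that matrix, with hypothetical label vectors (MOCC's actual Agreement Separation criterion and multi-optimisation framework are not reproduced here):

```python
import numpy as np

def co_association(labelings):
    """Fraction of base clusterings in which each pair of points shares a
    cluster -- the agreement structure that consensus methods optimise over.
    Labels are arbitrary per run; only co-membership matters."""
    labelings = np.asarray(labelings)          # shape (n_runs, n_points)
    n_runs, n_points = labelings.shape
    M = np.zeros((n_points, n_points))
    for labels in labelings:
        M += labels[:, None] == labels[None, :]
    return M / n_runs

# three base clusterings of five points (hypothetical label vectors)
runs = [[0, 0, 1, 1, 1],
        [0, 0, 0, 1, 1],
        [1, 1, 0, 0, 0]]
M = co_association(runs)
# points 0 and 1 co-cluster in every run, so M[0, 1] == 1.0
```

    A final consensus partition can then be obtained by clustering M itself, e.g. by thresholding it or feeding 1 - M to a hierarchical method as a distance matrix.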

  14. Optimised collision avoidance for an ultra-close rendezvous with a failed satellite based on the Gauss pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue

    2016-11-01

    This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in an ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated based on the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimisation solution of the approaching problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.

  15. Developing software to use parallel processing effectively. Final report, June-December 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Center, J.

    1988-10-01

    This report describes the difficulties involved in writing efficient parallel programs and describes the hardware and software support currently available for generating software that utilizes parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing since electronic circuitry is coming up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power (far in excess of the maximum estimated 3 Gflops achievable by single-processor computers). For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer, such as the Cray.

  16. LFRic: Building a new Unified Model

    NASA Astrophysics Data System (ADS)

    Melvin, Thomas; Mullerworth, Steve; Ford, Rupert; Maynard, Chris; Hobson, Mike

    2017-04-01

    The LFRic project, named for Lewis Fry Richardson, aims to develop a replacement for the Met Office Unified Model in order to meet the challenges that will be presented by the next generation of exascale supercomputers. This project, a collaboration between the Met Office, STFC Daresbury and the University of Manchester, builds on the earlier GungHo project to redesign the dynamical core, in partnership with NERC. The new atmospheric model aims to retain the performance of the current ENDGame dynamical core and associated subgrid physics, while also enabling far greater scalability and flexibility to accommodate future supercomputer architectures. The design of the model revolves around the principle of 'separation of concerns', whereby the natural science aspects of the code can be developed without worrying about the underlying architecture, while machine-dependent optimisations can be carried out at a high level. These principles are put into practice through the development of an autogenerated Parallel Systems software layer (known as the PSy layer) using a domain-specific compiler called PSyclone. The prototype model includes a re-write of the dynamical core using a mixed finite element method, in which different function spaces are used to represent the various fields. It is able to run in parallel with MPI and OpenMP and has been tested on over 200,000 cores. In this talk an overview of both the natural science and computational science implementations of the model will be presented.

  17. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimisation loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.

  18. Remote direct memory access

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2012-12-11

    Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.

  19. 46 CFR 111.05-13 - Grounding connection.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Grounding connection. 111.05-13 Section 111.05-13 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... power sources operating in parallel in the system. ...

  20. 46 CFR 111.05-13 - Grounding connection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Grounding connection. 111.05-13 Section 111.05-13 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... power sources operating in parallel in the system. ...

  1. 46 CFR 111.05-13 - Grounding connection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Grounding connection. 111.05-13 Section 111.05-13 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... power sources operating in parallel in the system. ...

  2. 46 CFR 111.05-13 - Grounding connection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Grounding connection. 111.05-13 Section 111.05-13 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... power sources operating in parallel in the system. ...

  3. Improving target coverage and organ-at-risk sparing in intensity-modulated radiotherapy for cervical oesophageal cancer using a simple optimisation method.

    PubMed

    Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi

    2015-01-01

    To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.

  4. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. 
The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
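The offline-approximation workflow just described — Latin hypercube sampling of the design space, then functional approximation of the sampled responses — can be sketched in plain Python. This is an illustrative sketch only: the bounds and force function below are placeholders, and a simple inverse-distance interpolator stands in for the Kriging model used in the study.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """LHS: one random point in each of n_samples equal strata per axis."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)  # decorrelate the axes
        cols.append(pts)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

def idw_surrogate(sites, values, p=2.0):
    """Inverse-distance-weighted interpolator (a stand-in for Kriging)."""
    def predict(x):
        num = den = 0.0
        for s, v in zip(sites, values):
            d2 = sum((a - b) ** 2 for a, b in zip(x, s))
            if d2 == 0.0:
                return v  # exact hit on a sampled site
            w = d2 ** (-p / 2.0)
            num, den = num + w * v, den + w
        return num / den
    return predict

# Four independent design DOFs, as in the study; the "force" is a placeholder
# for the CFD-derived aerodynamic force at each mapped test site.
bounds = [(0.0, 1.0)] * 4
sites = latin_hypercube(20, bounds)
force = lambda x: sum(x)
model = idw_surrogate(sites, [force(s) for s in sites])
```

The surrogate reproduces the training sites exactly and interpolates between them, so the expensive flow solver need only be run at the mapped test sites.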

  5. Simulating and analyzing engineering parameters of Kyushu Earthquake, Japan, 1997, by empirical Green function method

    NASA Astrophysics Data System (ADS)

    Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei

    2017-03-01

Earthquake engineering parameters are very important in the engineering field, especially in anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two direction components. The broadband frequency range of the ground motion simulation is 0.1 to 20 Hz. By comparing observed and synthetic parameters, we analyzed the distribution characteristics of the earthquake engineering parameters. The simulated waveform shows high similarity with the observed waveform. We found the following. (1) Near-field PGA attenuates rapidly in all directions, with strip-like radiation patterns in the fault-parallel component, while the radiation pattern of the fault-normal component is circular; PGV shows good agreement between observed and synthetic records, but has different distribution characteristics in the different components. (2) Rupture direction and terrain have a large influence on the 90% significant duration. (3) Arias intensity attenuates with increasing epicentral distance, and observed values agree well with synthetic values. (4) The predominant period differs markedly across parts of Kyushu in the fault-normal component; it is strongly affected by site conditions. (5) Most parameters provide good reference values where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which indicates a good result according to Olsen's classification criterion, although not all parameters fit well. Given these synthetic ground motion parameters, seismic hazard analysis and earthquake disaster analysis can be conducted in future urban planning.

  6. Study of solid rocket motors for a space shuttle booster. Volume 2, book 1: Analysis and design

    NASA Technical Reports Server (NTRS)

    1972-01-01

An analysis of the factors which determined the selection of the solid rocket propellant engines for the space shuttle booster is presented. The 156-inch-diameter, parallel-burn engine was selected because of its transportability, cost effectiveness, and reliability. Other factors which caused favorable consideration are: (1) recovery and reuse are feasible and offer substantial cost savings, (2) abort can be easily accomplished, and (3) ecological effects are acceptable.

  7. Static Performance of a Wing-Mounted Thrust Reverser Concept

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.; Yetter, Jeffrey A.

    1998-01-01

An experimental investigation was conducted in the Jet-Exit Test Facility at NASA Langley Research Center to study the static aerodynamic performance of a wing-mounted thrust reverser concept applicable to subsonic transport aircraft. This innovative engine-powered thrust reverser system is designed to utilize wing-mounted flow deflectors to produce aircraft deceleration forces. Testing was conducted using a 7.9%-scale exhaust system model with a fan-to-core bypass ratio of approximately 9.0, a supercritical left-hand wing section attached via a pylon, and wing-mounted flow deflectors attached to the wing section. Geometric variations of key design parameters investigated for the wing-mounted thrust reverser concept included flow deflector angle and chord length, deflector edge fences, and the yaw mount angle of the deflector system (normal to the engine centerline or parallel to the wing trailing edge). All tests were conducted with no external flow, and high pressure air was used to simulate core and fan engine exhaust flows. Test results indicate that the wing-mounted thrust reverser concept can achieve overall thrust reverser effectiveness levels competitive with (parallel mount), or better than (normal mount), a conventional cascade thrust reverser system. By removing the thrust reverser system from the nacelle, the wing-mounted concept offers the nacelle designer more options for improving nacelle aerodynamics and propulsion-airframe integration, simplifying nacelle structural designs, reducing nacelle weight, and improving engine maintenance access.

  8. Engineered plant biomass feedstock particles

    DOEpatents

    Dooley, James H [Federal Way, WA; Lanning, David N [Federal Way, WA; Broderick, Thomas F [Lake Forest Park, WA

    2012-04-17

A new class of plant biomass feedstock particles characterized by consistent piece size and shape uniformity, high skeletal surface area, and good flow properties. The particles of plant biomass material having fibers aligned in a grain are characterized by a length dimension (L) aligned substantially parallel to the grain and defining a substantially uniform distance along the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L. In particular, the L×H dimensions define a pair of substantially parallel side surfaces characterized by substantially intact longitudinally arrayed fibers, the W×H dimensions define a pair of substantially parallel end surfaces characterized by crosscut fibers and end checking between fibers, and the L×W dimensions define a pair of substantially parallel top and bottom surfaces. The L×W surfaces of particles with L/H dimension ratios of 4:1 or less are further elaborated by surface checking between longitudinally arrayed fibers. The length dimension L is preferably aligned within 30° of parallel to the grain, and more preferably within 10° of parallel to the grain. The plant biomass material is preferably selected from among wood, agricultural crop residues, plantation grasses, hemp, bagasse, and bamboo.

  9. PyPele Rewritten To Use MPI

    NASA Technical Reports Server (NTRS)

    Hockney, George; Lee, Seungwon

    2008-01-01

A computer program known as PyPele, originally written as a Python-language extension module of a C++ program, has been rewritten in pure Python. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission-design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell, a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses the Message Passing Interface (MPI) [an unofficial de facto standard, language-independent application programming interface for message passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without needing to rewrite those programs.
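The embarrassingly parallel pattern PyPele supports — every rank runs the same script, takes its share of independent tasks, and the results are gathered at the end — can be sketched as below. The ranks are simulated serially here; with mpi4py the rank and size would come from `MPI.COMM_WORLD` as noted in the comments. This is an illustrative pattern, not PyPele's actual source.

```python
def partition(tasks, rank, size):
    """Round-robin assignment of independent tasks to one rank."""
    return [t for i, t in enumerate(tasks) if i % size == rank]

def run_rank(rank, size, tasks, work):
    # With mpi4py this would be:
    #   from mpi4py import MPI
    #   comm = MPI.COMM_WORLD; rank, size = comm.Get_rank(), comm.Get_size()
    # and the per-rank results would be collected with comm.gather(...).
    return [work(t) for t in partition(tasks, rank, size)]

tasks = list(range(8))      # e.g. 8 independent mission-analysis cases
work = lambda t: t * t      # placeholder for the per-case computation
size = 4                    # number of simulated ranks
gathered = [run_rank(r, size, tasks, work) for r in range(size)]
results = sorted(v for chunk in gathered for v in chunk)
```

Because the tasks share no state, no inter-rank communication is needed until the final gather — which is exactly what makes such workloads "embarrassingly" parallel.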

  10. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Baxter, Doug

    1988-01-01

    The class of problems that can be effectively compiled by parallelizing compilers is discussed. This is accomplished with the doconsider construct which would allow these compilers to parallelize many problems in which substantial loop-level parallelism is available but cannot be detected by standard compile-time analysis. We describe and experimentally analyze mechanisms used to parallelize the work required for these types of loops. In each of these methods, a new loop structure is produced by modifying the loop to be parallelized. We also present the rules by which these loop transformations may be automated in order that they be included in language compilers. The main application area of the research involves problems in scientific computations and engineering. The workload used in our experiment includes a mixture of real problems as well as synthetically generated inputs. From our extensive tests on the Encore Multimax/320, we have reached the conclusion that for the types of workloads we have investigated, self-execution almost always performs better than pre-scheduling. Further, the improvement in performance that accrues as a result of global topological sorting of indices as opposed to the less expensive local sorting, is not very significant in the case of self-execution.
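The run-time scheme described above — inspect the loop's index arrays at run time, then group iterations into dependence-free "wavefronts" that can execute concurrently — can be illustrated with a small inspector. Only flow (read-after-write) dependences are tracked in this sketch; anti- and output dependences, and the doconsider construct itself, are omitted.

```python
def wavefronts(reads, writes):
    """Group loop iterations into dependence levels (wavefronts).

    reads[i] / writes[i] list the abstract locations iteration i touches;
    iterations placed in the same wavefront may safely run in parallel."""
    last_writer = {}
    level = [0] * len(writes)
    for i in range(len(writes)):
        # i must follow the most recent earlier writers of what it reads
        deps = [last_writer[r] for r in reads[i] if r in last_writer]
        level[i] = 1 + max((level[j] for j in deps), default=-1)
        for w in writes[i]:
            last_writer[w] = i
    fronts = {}
    for i, lvl in enumerate(level):
        fronts.setdefault(lvl, []).append(i)
    return [fronts[lvl] for lvl in sorted(fronts)]

# Iteration 1 reads what 0 wrote, 2 reads what 1 wrote, 3 reads what 0 wrote:
schedule = wavefronts(reads=[[], [0], [1], [0]],
                      writes=[[0], [1], [2], [3]])
```

Here iterations 1 and 3 land in the same wavefront because neither depends on the other; a pre-scheduling executor would assign each wavefront's iterations to processors, while self-execution would let processors claim ready iterations dynamically.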

  11. Parallel methodology to capture cyclic variability in motored engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2016-07-28

Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to decompose this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations to the initial pressure field and the boundary pressure improves the predictions. This new approach is shown to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
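A toy version of this decomposition — replacing a long serial chain of consecutive cycles with independent, perturbed single-cycle runs that can execute in parallel — might look as follows. The one-line "cycle" model and the perturbation magnitude are placeholders for illustration, not the paper's LES setup.

```python
import random
import statistics

def one_cycle(u0, rng):
    """Placeholder cycle model: relax toward a mean flow, plus turbulence."""
    return 0.7 * u0 + 0.3 * 10.0 + rng.gauss(0.0, 0.5)

def serial_ccv(n_cycles, seed=1):
    """Conventional approach: n_cycles consecutive cycles, one after another."""
    rng, u, out = random.Random(seed), 10.0, []
    for _ in range(n_cycles):
        u = one_cycle(u, rng)
        out.append(u)
    return out

def parallel_ccv(n_cycles, turb_intensity=0.6, seed=1):
    """Proposed approach: independent single cycles with perturbed initial
    velocity fields; each member could run on its own set of processors."""
    out = []
    for k in range(n_cycles):
        rng = random.Random(seed + k)
        u0 = 10.0 + rng.gauss(0.0, turb_intensity)  # perturbed initial field
        out.append(one_cycle(u0, rng))
    return out

serial, ensemble = serial_ccv(400), parallel_ccv(400)
```

Both estimators recover the same cycle-mean statistics, but the ensemble members have no serial dependence, so wall-clock time drops by roughly the cycle count when they run concurrently.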

  12. Compute as Fast as the Engineers Can Think! ULTRAFAST COMPUTING TEAM FINAL REPORT

    NASA Technical Reports Server (NTRS)

Biedron, R. T.; Mehrotra, P.; Nelson, M. L.; Preston, M. L.; Rehder, J. J.; Rogers, J. L.; Rudy, D. H.; Sobieski, J.; Storaasli, O. O.

    1999-01-01

    This report documents findings and recommendations by the Ultrafast Computing Team (UCT). In the period 10-12/98, UCT reviewed design case scenarios for a supersonic transport and a reusable launch vehicle to derive computing requirements necessary for support of a design process with efficiency so radically improved that human thought rather than the computer paces the process. Assessment of the present computing capability against the above requirements indicated a need for further improvement in computing speed by several orders of magnitude to reduce time to solution from tens of hours to seconds in major applications. Evaluation of the trends in computer technology revealed a potential to attain the postulated improvement by further increases of single processor performance combined with massively parallel processing in a heterogeneous environment. However, utilization of massively parallel processing to its full capability will require redevelopment of the engineering analysis and optimization methods, including invention of new paradigms. To that end UCT recommends initiation of a new activity at LaRC called Computational Engineering for development of new methods and tools geared to the new computer architectures in disciplines, their coordination, and validation and benefit demonstration through applications.

  13. On parallel hybrid-electric propulsion system for unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hung, J. Y.; Gonzalez, L. F.

    2012-05-01

This paper presents a review of existing and current developments and the analysis of Hybrid-Electric Propulsion Systems (HEPS) for small fixed-wing Unmanned Aerial Vehicles (UAVs). Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the space available and the mass budget on the UAV, achieving the operational goals is often a delicate balance against the onboard energy available (i.e. fuel). One technology with potential in this area is the use of HEPS. In this paper, information on the state-of-the-art technology in this field of research is provided. A description and simulation of a parallel HEPS for a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy is given. Simulation models of the components in a HEPS were designed in the MATLAB Simulink environment. An IOL analysis of a UAV piston engine was used to determine the most efficient points of operation for this engine. The results show that a UAV equipped with this HEPS configuration is capable of achieving a fuel saving of 6.5%, compared to the engine-only configuration.

  14. Parallelization of Rocket Engine Simulator Software (P.R.E.S.S.)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1999-01-01

The Parallelization of Rocket Engine Simulator Software (PRESS) project is part of a collaborative effort with Southern University at Baton Rouge (SUBR), University of West Florida (UWF), and Jackson State University (JSU). The project started on October 19, 1995, and, after a three-year period corresponding to project phases and fiscal-year funding by NASA Lewis Research Center (now Glenn Research Center), ended on October 18, 1998. A one-year no-cost extension period was granted on June 7, 1998, until October 19, 1999. The aim of this extension period was to carry out further research to complete the work and lay the groundwork for subsequent research in the area of aerospace engine design optimization software tools. Previous progress has been reported in great detail in the respective interim and final research progress reports, seven in all. While the purpose of this report is to be a final summary and an evaluative view of the entire work since the first-year funding, the following is a quick recap of the most important sections of the interim report dated April 30, 1999.

  15. Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project

    DTIC Science & Technology

    2005-05-01

thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised...absorber. And good optimisation codes are required in order to achieve the best possible absorber designs. In this report, the results from a...through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of

  16. From SED HI concept to Pleiades FM detection unit measurements

    NASA Astrophysics Data System (ADS)

    Renard, Christophe; Dantes, Didier; Neveu, Claude; Lamard, Jean-Luc; Oudinot, Matthieu; Materne, Alex

    2017-11-01

The first flight model of the PLEIADES high resolution instrument, under Thales Alenia Space development on behalf of CNES, is currently in its integration and test phases. Based on the SED HI detection unit concept, the PLEIADES detection unit has been fully qualified before integration at telescope level. The main radiometric performances have been measured on the engineering and first flight models, and this paper presents the results obtained on both models. After a review of the SED HI concept and the design and performances of the main elements (charge coupled detectors, focal plane and video processing unit), the detection unit radiometric performances are presented and compared to the instrument specifications for the panchromatic and multispectral bands. The performances treated are the following: video signal characteristics; dark signal level and dark signal non-uniformity; photo-response non-uniformity; non-linearity and differential non-linearity; and temporal and spatial noises with regard to system definitions. The PLEIADES detection unit allows tuning of different functions: reference and sampling time positioning, anti-blooming level, gain value, and TDI line number. These parameters are presented with their associated optimisation criteria for achieving system radiometric performances, along with their sensitivities on radiometric performances. All the results of the measurements performed by Thales Alenia Space on the PLEIADES detection units demonstrate the high potential of the SED HI concept for Earth high resolution observation systems, allowing optimised performances at instrument and satellite levels.

  17. LMC: Logarithmantic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
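A minimal adaptive Metropolis-Hastings loop of the kind LMC implements can be sketched as follows. This is an illustrative sampler, not LMC's API, and the crude width adaptation below would be frozen after burn-in in a production sampler to preserve detailed balance.

```python
import math
import random
import statistics

def adaptive_mh(logpost, x0, n_steps, seed=0):
    """1-D Metropolis-Hastings with a Gaussian proposal whose width is
    nudged toward a ~35% acceptance rate every 100 steps."""
    rng = random.Random(seed)
    x, lp, width = x0, logpost(x0), 1.0
    chain, accepted = [], 0
    for i in range(1, n_steps + 1):
        xp = x + rng.gauss(0.0, width)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:   # Metropolis accept/reject
            x, lp = xp, lpp
            accepted += 1
        chain.append(x)
        if i % 100 == 0:                        # adapt proposal width
            width *= 1.1 if accepted / i > 0.35 else 0.9
    return chain

# Target: a standard normal log-density (up to an additive constant).
chain = adaptive_mh(lambda x: -0.5 * x * x, x0=3.0, n_steps=20000)
```

The expensive-likelihood use case in the abstract corresponds to `logpost` calling out to third-party software, with many such chains (or walkers) running across nodes and exchanging proposal-tuning information via MPI or files.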

  18. Attachment of lead wires to thin film thermocouples mounted on high temperature materials using the parallel gap welding process

    NASA Technical Reports Server (NTRS)

    Holanda, Raymond; Kim, Walter S.; Pencil, Eric; Groth, Mary; Danzey, Gerald A.

    1990-01-01

    Parallel gap resistance welding was used to attach lead wires to sputtered thin film sensors. Ranges of optimum welding parameters to produce an acceptable weld were determined. The thin film sensors were Pt13Rh/Pt thermocouples; they were mounted on substrates of MCrAlY-coated superalloys, aluminum oxide, silicon carbide and silicon nitride. The entire sensor system is designed to be used on aircraft engine parts. These sensor systems, including the thin-film-to-lead-wire connectors, were tested to 1000 C.

  19. System software for the finite element machine

    NASA Technical Reports Server (NTRS)

    Crockett, T. W.; Knott, J. D.

    1985-01-01

    The Finite Element Machine is an experimental parallel computer developed at Langley Research Center to investigate the application of concurrent processing to structural engineering analysis. This report describes system-level software which has been developed to facilitate use of the machine by applications researchers. The overall software design is outlined, and several important parallel processing issues are discussed in detail, including processor management, communication, synchronization, and input/output. Based on experience using the system, the hardware architecture and software design are critiqued, and areas for further work are suggested.

  20. Communicating remote sensing concepts in an interdisciplinary environment

    NASA Technical Reports Server (NTRS)

    Chung, R.

    1981-01-01

    Although remote sensing is currently multidisciplinary in its applications, many of its terms come from the engineering sciences, particularly from the field of pattern recognition. Scholars from fields such as the social sciences, botany, and biology, may experience initial difficulty with remote sensing terminology, even though parallel concepts exist in their own fields. Some parallel concepts and terminologies from nonengineering fields, which might enhance the understanding of remote sensing concepts in an interdisciplinary situation are identified. Feedbacks which this analogue strategy might have on remote sensing itself are explored.

  1. How can systems engineering inform the methods of programme evaluation in health professions education?

    PubMed

    Rojas, David; Grierson, Lawrence; Mylopoulos, Maria; Trbovich, Patricia; Bagli, Darius; Brydges, Ryan

    2018-04-01

    We evaluate programmes in health professions education (HPE) to determine their effectiveness and value. Programme evaluation has evolved from use of reductionist frameworks to those addressing the complex interactions between programme factors. Researchers in HPE have recently suggested a 'holistic programme evaluation' aiming to better describe and understand the implications of 'emergent processes and outcomes'. We propose a programme evaluation framework informed by principles and tools from systems engineering. Systems engineers conceptualise complexity and emergent elements in unique ways that may complement and extend contemporary programme evaluations in HPE. We demonstrate how the abstract decomposition space (ADS), an engineering knowledge elicitation tool, provides the foundation for a systems engineering informed programme evaluation designed to capture both planned and emergent programme elements. We translate the ADS tool to use education-oriented language, and describe how evaluators can use it to create a programme-specific ADS through iterative refinement. We provide a conceptualisation of emergent elements and an equation that evaluators can use to identify the emergent elements in their programme. Using our framework, evaluators can analyse programmes not as isolated units with planned processes and planned outcomes, but as unfolding, complex interactive systems that will exhibit emergent processes and emergent outcomes. Subsequent analysis of these emergent elements will inform the evaluator as they seek to optimise and improve the programme. Our proposed systems engineering informed programme evaluation framework provides principles and tools for analysing the implications of planned and emergent elements, as well as their potential interactions. We acknowledge that our framework is preliminary and will require application and constant refinement. 
We suggest that our framework will also advance our understanding of the construct of 'emergence' in HPE research. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  2. Economical launching and accelerating control strategy for a single-shaft parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Song, Jian; Li, Liang; Li, Shengbo; Cao, Dongpu

    2016-08-01

This paper presents an economical launching and accelerating mode, comprising four ordered phases: pure electrical driving, clutch engagement and engine start-up, engine active charging, and engine driving, which suits the alternating conditions of typical city-bus driving scenarios and improves the fuel economy of a hybrid electric bus (HEB). By utilizing the fast response of the electric motor (EM), an adaptive controller for the EM is designed to realize the power demand during the pure electrical driving, engine starting and engine active charging modes. Concurrently, the smoothness issue induced by the sequential mode transitions is solved with a coordinated control logic for engine, EM and clutch. Simulation and experimental results show that the proposed launching and accelerating mode and its control methods are effective in improving fuel economy and ensuring drivability during the fast transitions between the operating modes of the HEB.

  3. A Method for Decentralised Optimisation in Networks

    NASA Astrophysics Data System (ADS)

    Saramäki, Jari

    2005-06-01

We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and searches for better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With a proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
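The gossip-style procedure can be sketched in a few lines of Python; the ring neighbourhood, trial distribution and cost function below are illustrative choices, not those of the paper.

```python
import random

def gossip_optimise(cost, propose, n_agents=20, rounds=200, seed=0):
    """Each agent improves its own candidate by Monte Carlo trials and
    adopts a neighbour's candidate whenever the neighbour's is better."""
    rng = random.Random(seed)
    agents = [propose(None, rng) for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            trial = propose(agents[i], rng)            # local Monte Carlo trial
            if cost(trial) < cost(agents[i]):
                agents[i] = trial
            j = (i + rng.choice((-1, 1))) % n_agents   # gossip with a ring neighbour
            if cost(agents[j]) < cost(agents[i]):
                agents[i] = agents[j]
    return agents

cost = lambda x: (x - 2.5) ** 2   # toy objective with minimum at 2.5
propose = lambda x, rng: (rng.uniform(-10, 10) if x is None
                          else x + rng.gauss(0.0, 0.1))
final = gossip_optimise(cost, propose)
```

No agent sees the whole network: improvements found anywhere diffuse along the ring, which is the decentralised spreading behaviour the abstract describes.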

  4. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication: communication and control updates occur only at discrete instants when a predefined triggering condition is satisfied. Compared with time-driven distributed optimisation algorithms, the proposed algorithm therefore has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm drives the system states to the solution of the problem exponentially fast, and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
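A discrete-time sketch of a zero-gradient-sum iteration with event-triggered broadcasts, for scalar quadratic local costs f_i(x) = (a_i/2)(x - c_i)^2 on an undirected ring. The fixed broadcast threshold here is a simplification of the paper's Lyapunov-based triggering condition, and the graph, costs and step size are illustrative assumptions.

```python
def zgs_event_triggered(a, c, steps=4000, dt=0.01, threshold=1e-3):
    """Zero-gradient-sum flow: start each agent at its local minimiser c_i,
    move so that the weighted sum a_i * x_i is conserved, and rebroadcast a
    state only when it drifts past the threshold (the 'event')."""
    n = len(a)
    x = list(c)            # ZGS initialisation at the local minimisers
    xb = list(x)           # last broadcast values seen by neighbours
    broadcasts = 0
    for _ in range(steps):
        new = []
        for i in range(n):
            nbrs = ((i - 1) % n, (i + 1) % n)
            flow = sum(xb[j] - xb[i] for j in nbrs)
            new.append(x[i] + dt * flow / a[i])
        x = new
        for i in range(n):
            if abs(x[i] - xb[i]) > threshold:  # triggering condition
                xb[i] = x[i]
                broadcasts += 1
    return x, broadcasts

a = [1.0, 2.0, 1.0, 4.0]          # local curvatures
c = [0.0, 4.0, 2.0, 6.0]          # local minimisers
x, n_broadcasts = zgs_event_triggered(a, c)
opt = sum(ai * ci for ai, ci in zip(a, c)) / sum(a)  # global minimiser
```

Because each update conserves the weighted sum of states, the consensus value is the minimiser of the summed cost, and the broadcast count stays below the one-message-per-agent-per-step cost of a time-driven scheme.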

  5. Deep sequencing methods for protein engineering and design.

    PubMed

    Wrenbeck, Emily E; Faber, Matthew S; Whitehead, Timothy A

    2017-08-01

    The advent of next-generation sequencing (NGS) has revolutionized protein science, and the development of complementary methods enabling NGS-driven protein engineering have followed. In general, these experiments address the functional consequences of thousands of protein variants in a massively parallel manner using genotype-phenotype linked high-throughput functional screens followed by DNA counting via deep sequencing. We highlight the use of information rich datasets to engineer protein molecular recognition. Examples include the creation of multiple dual-affinity Fabs targeting structurally dissimilar epitopes and engineering of a broad germline-targeted anti-HIV-1 immunogen. Additionally, we highlight the generation of enzyme fitness landscapes for conducting fundamental studies of protein behavior and evolution. We conclude with discussion of technological advances. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. An Exact Efficiency Formula for Holographic Heat Engines

    DOE PAGES

    Johnson, Clifford

    2016-03-31

Further consideration is given to the efficiency of a class of black hole heat engines that perform mechanical work via the pdV terms present in the First Law of extended gravitational thermodynamics. It is noted that, when the engine cycle is a rectangle with sides parallel to the (p,V) axes, the efficiency can be written simply in terms of the mass of the black hole evaluated at the corners. Since an arbitrary cycle can be approximated to any desired accuracy by a tiling of rectangles, a general geometrical algorithm for computing the efficiency of such a cycle follows. Finally, a simple generalization of the algorithm renders it applicable to broader classes of heat engine, even beyond the black hole context.

  7. Computational structural mechanics for engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1989-01-01

The computational structural mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects of formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance, durability, and life of engine structures. It is structured mainly to supplement, complement, and, whenever possible, replace costly experimental efforts which are unavoidable during engineering research and development programs. Specific objectives include investigating the unique advantages of parallel and multiprocessor computing for reformulating and solving structural mechanics problems and for formulating and solving multidisciplinary mechanics problems, and developing integrated structural system computational simulators for predicting structural performance, evaluating newly developed methods, and identifying and prioritizing improved or missing methods. Herein the CSM program is summarized with emphasis on the Engine Structures Computational Simulator (ESCS). Typical results obtained using ESCS are described to illustrate its versatility.

  8. Mind Games: Game Engines as an Architecture for Intuitive Physics.

    PubMed

    Ullman, Tomer D; Spelke, Elizabeth; Battaglia, Peter; Tenenbaum, Joshua B

    2017-09-01

    We explore the hypothesis that many intuitive physical inferences are based on a mental physics engine that is analogous in many ways to the machine physics engines used in building interactive video games. We describe the key features of game physics engines and their parallels in human mental representation, focusing especially on the intuitive physics of young infants where the hypothesis helps to unify many classic and otherwise puzzling phenomena, and may provide the basis for a computational account of how the physical knowledge of infants develops. This hypothesis also explains several 'physics illusions', and helps to inform the development of artificial intelligence (AI) systems with more human-like common sense. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Understanding the Role of Heat Recirculation in Enhancing the Speed of Premixed Laminar Flames in a Parallel Plate Micro-Combustor

    DTIC Science & Technology

    2009-01-01


  10. Suicide and the Internet: Changes in the accessibility of suicide-related information between 2007 and 2014.

    PubMed

    Biddle, Lucy; Derges, Jane; Mars, Becky; Heron, Jon; Donovan, Jenny L; Potokar, John; Piper, Martyn; Wyllie, Clare; Gunnell, David

    2016-01-15

Following ongoing concerns about cyber-suicide, we investigate changes between 2007 and 2014 in material likely to be accessed by suicidal individuals searching for methods of suicide. Twelve search terms relating to suicide methods were applied to four search engines, and the top ten hits from each were categorised and analysed for content. The frequency of each category of site across all searches, using particular search terms and engines, was counted. Key changes were: growth of blogs and discussion forums (from 3% of hits in 2007 to 18.5% of hits in 2014); and an increase in hits linking to general information sites, especially factual sites that detail and evaluate suicide methods (from 9% in 2007 to 21.7% in 2014). Hits for dedicated suicide sites increased (from 19% to 23%), while formal help sites became less visible (from 13% to 6.5%). Overall, 54% of hits contained information about new high-lethality methods. We did not search for help sites, so we cannot assess the balance of suicide-promoting versus suicide-preventing sites available online; social media was beyond the scope of this study. Working with ISPs and search engines would help optimise support sites. Better site moderation and implementation of suicide reporting guidelines should be encouraged. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Torque coordinating robust control of shifting process for dry dual clutch transmission equipped in a hybrid car

    NASA Astrophysics Data System (ADS)

    Zhao, Z.-G.; Chen, H.-J.; Yang, Y.-Y.; He, L.

    2015-09-01

    For a hybrid car equipped with dual clutch transmission (DCT), the coordination control problems of clutches and power sources are investigated while taking full advantage of the integrated starter generator motor's fast response speed and high accuracy (speed and torque). First, a dynamic model of the shifting process is established, the vehicle acceleration is quantified according to the intentions of the driver, and the torque transmitted by clutches is calculated based on the designed disengaging principle during the torque phase. Next, a robust H∞ controller is designed to ensure speed synchronisation despite the existence of model uncertainties, measurement noise, and engine torque lag. The engine torque lag and measurement noise are used as external disturbances to initially modify the output torque of the power source. Additionally, during the torque switch phase, the torque of the power sources is smoothly transitioned to the driver's demanded torque. Finally, the torque of the power sources is further distributed based on the optimisation of system efficiency, and the throttle opening of the engine is constrained to avoid sharp torque variations. The simulation results verify that the proposed control strategies effectively address the problem of coordinating control of clutches and power sources, establishing a foundation for the application of DCT in hybrid cars.

  12. Exploiting the genetic and biochemical capacities of bacteria for the remediation of heavy metal pollution.

    PubMed

    Valls, Marc; de Lorenzo, Víctor

    2002-11-01

    The threat of heavy metal pollution to public health and wildlife has led to an increased interest in developing systems that can remove or neutralise its toxic effects in soil, sediments and wastewater. Unlike organic contaminants, which can be degraded to harmless chemical species, heavy metals cannot be destroyed. Remediating the pollution they cause can therefore only be envisioned as their immobilisation in a non-bioavailable form, or their re-speciation into less toxic forms. While these approaches do not solve the problem altogether, they do help to protect afflicted sites from noxious effects and isolate the contaminants as a contained and sometimes recyclable residue. This review outlines the most important bacterial phenotypes and properties that are (or could be) instrumental in heavy metal bioremediation, along with what is known of their genetic and biochemical background. A variety of instances are discussed in which valuable properties already present in certain strains can be combined or improved through state-of-the-art genetic engineering. In other cases, knowledge of metal-related reactions catalysed by some bacteria allows optimisation of the desired process by altering the physicochemical conditions of the contaminated area. The combination of genetic engineering of the bacterial catalysts with judicious eco-engineering of the polluted sites will be of paramount importance in future bioremediation strategies.

  13. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on Intel Xeon multi-core CPU (by using OpenMP and OpenCL) and NVIDIA Tesla many-core GPU (by using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested based on the Griewank benchmark function. Comparison results indicate the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results, albeit with the most complex source code. The parallel SCE-UA has bright prospects to be applied in real-world applications.
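The fitness-evaluation step that dominates SCE-UA's cost is embarrassingly parallel, since each candidate point is scored independently. A minimal sketch of that step in Python (an illustration, not the authors' OpenMP/OpenCL/CUDA code), using the same Griewank benchmark:

```python
import math
from multiprocessing import Pool

def griewank(x):
    """Griewank benchmark: global minimum f(0, ..., 0) = 0."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x):
        p *= math.cos(xi / math.sqrt(i + 1))
    return 1.0 + s - p

def evaluate_population(pop, workers=4):
    # Each complex member is evaluated independently, so the
    # population can be farmed out to a pool of workers.
    with Pool(workers) as pool:
        return pool.map(griewank, pop)

if __name__ == "__main__":
    pop = [[0.0] * 10, [1.0] * 10, [100.0] * 10]
    print(evaluate_population(pop))
```

The evolution and shuffling phases of SCE-UA remain serial in this sketch; the papers' GPU versions parallelise deeper, but the independent-evaluation idea is the same.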

  14. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee their appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology grid. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
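The integer-encoding idea can be sketched as follows; the grid value and gene bounds here are illustrative assumptions, not values from the paper. Because each gene is an integer multiple of the fabrication grid, every decoded W/L size is valid by construction and no rounding-off step is needed:

```python
import random

GRID_UM = 0.18  # assumed fabrication grid in micrometres (illustrative)

def decode(genes):
    """Map integer genes to MOSFET W/L sizes in micrometres."""
    return [g * GRID_UM for g in genes]

def random_individual(rng, n_sizes, lo=1, hi=50):
    # Integer genes drawn from a reduced range; in the paper the
    # gm/Id method supplies tighter per-device bounds than these.
    return [rng.randint(lo, hi) for _ in range(n_sizes)]

rng = random.Random(42)
individual = random_individual(rng, 4)
sizes = decode(individual)
```

Crossover and mutation then operate directly on the integer genes, so every offspring also decodes to an on-grid sizing.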

  15. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064

  16. Optimisation of active suspension control inputs for improved vehicle handling performance

    NASA Astrophysics Data System (ADS)

    Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor

    2016-11-01

Active suspension is commonly considered under the framework of vertical vehicle dynamics control aimed at improvements in ride comfort. This paper uses a collocation-type control variable optimisation tool to investigate to what extent the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied solely to FAS actuator configurations and three types of double-lane-change manoeuvres. The obtained optimisation results are used to gain insight into the different control mechanisms that FAS uses to improve handling performance in terms of path-following error reduction. For the same manoeuvres, the FAS performance is compared with that of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether they can use their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) to further improve handling performance.

  17. DryLab® optimised two-dimensional high performance liquid chromatography for differentiation of ephedrine and pseudoephedrine based methamphetamine samples.

    PubMed

    Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A

    2014-11-01

    In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC where C18 columns were used in both dimensions taking advantage of the selectivity difference of methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar chemical compounds as it eliminates potentially hours of trial and error injections to identify the optimised experimental conditions. After only four screening injections the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present the potential synthetic pathway can be categorized.

  18. A Bayesian formulation of behavioral control.

    PubMed

    Huys, Quentin J M; Dayan, Peter

    2009-12-01

    Helplessness, a belief that the world is not subject to behavioral control, has long been central to our understanding of depression, and has influenced cognitive theories, animal models and behavioral treatments. However, despite its importance, there is no fully accepted definition of helplessness or behavioral control in psychology or psychiatry, and the formal treatments in engineering appear to capture only limited aspects of the intuitive concepts. Here, we formalize controllability in terms of characteristics of prior distributions over affectively charged environments. We explore the relevance of this notion of control to reinforcement learning methods of optimising behavior in such environments and consider how apparently maladaptive beliefs can result from normative inference processes. These results are discussed with reference to depression and animal models thereof.

  19. An animal welfare perspective on animal testing of GMO crops.

    PubMed

    Kolar, Roman; Rusche, Brigitte

    2008-01-01

The public discussion on the introduction of agro-genetic engineering focuses mainly on economic, ecological and human health aspects. Often neglected is the fact that laboratory animals must suffer before either humans or the environment are affected. Numerous animal experiments are conducted for the toxicity testing and authorisation of genetically modified plants in the European Union. These are ethically questionable, because death and suffering of the animals for purely commercial purposes are accepted. Recent political initiatives to further increase animal testing for GMO crops must therefore be regarded highly critically. Based on concrete examples, this article demonstrates that animal experiments cannot, in principle, provide the expected protection of users and consumers, despite all efforts to standardise, optimise or extend them.

  20. 3D printing process of oxidized nanocellulose and gelatin scaffold.

    PubMed

    Xu, Xiaodong; Zhou, Jiping; Jiang, Yani; Zhang, Qi; Shi, Hongcan; Liu, Dongfang

    2018-08-01

For tissue engineering applications, scaffolds need a porous structure to support cell proliferation/differentiation and vascularisation, together with sufficient mechanical strength for the specific tissue. Here we report a study of the 3D printing process for composite materials based on oxidized nanocellulose and gelatin. The process was optimised by measuring the rheological properties of different batches of materials after different crosslinking times, simulating the pneumatic extrusion process and 3D scaffold fabrication with SolidWorks Flow Simulation, observing the porous structure by SEM, measuring pressure-pull performance, and performing in vitro cytotoxicity and cell morphology experiments. The printed materials are highly porous scaffolds with good mechanical properties.

  1. A low cost mid-infrared sensor for on line contamination monitoring of lubricating oils in marine engines

    NASA Astrophysics Data System (ADS)

    Ben Mohammadi, L.; Kullmann, F.; Holzki, M.; Sigloch, S.; Klotzbuecher, T.; Spiesen, J.; Tommingas, T.; Weismann, P.; Kimber, G.

    2010-04-01

The chemical and physical condition of oils in marine engines must be monitored to ensure optimum performance of the engine and to avoid damage from degraded oil not adequately lubricating the engine. Routine monitoring requires expensive laboratory testing and highly skilled analysts. This work describes the adaptation and implementation of a mid-infrared (MIR) sensor module for continuous oil condition monitoring in two-stroke and four-stroke diesel engines. The developed sensor module will help to reduce the costs of oil analysis by eliminating the need to collect and send samples to a laboratory. The online MIR sensor module measures the contamination of oil with water and soot, as well as the degradation indicated by the TBN (Total Base Number) value. For the analysis of water, TBN, and soot in marine engine oils, four spectral regions of interest have been identified. The optical absorption in these bands, which correlates with the contaminations, is measured simultaneously by using a four-field thermopile detector combined with appropriate bandpass filters. The MIR absorption was recorded in transmission mode using a flow-through cell with an appropriate path length. Since no spectrometer is required, the sensor, including the light source, the flow-through cell, and the detector, can be realised at low cost and in a very compact manner. The optical configuration of the sensor, with a minimal number of components and signal-intensity optimisation at the four-field detector, was designed using non-sequential ray-tracing simulation. The calibration model used was robust enough to accurately predict soot, water, and TBN values for two-stroke and four-stroke engine oils. The sensor device is designed for direct installation on the host engine or machine, thereby becoming an integral part of the lubrication system. It can also be used as a portable stand-alone system for machine fluid analysis in the field.
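As a rough illustration of the band-absorption principle (a generic Beer-Lambert sketch, not the sensor's actual calibration model), each detector field yields an absorbance in one MIR band, from which contamination values can be predicted by a model fitted on reference oils. The coefficients below are hypothetical:

```python
import math

def absorbance(intensity, reference_intensity):
    # Beer-Lambert: A = -log10(I / I0) for one MIR band.
    return -math.log10(intensity / reference_intensity)

def predict(absorbances, coeffs, intercept):
    # Hypothetical linear calibration: y = b0 + sum(b_i * A_i);
    # a real model would be fitted against laboratory assays.
    return intercept + sum(b * a for b, a in zip(coeffs, absorbances))

# Four bands, matching the four-field thermopile detector.
bands = [absorbance(i, 10.0) for i in (8.0, 6.0, 9.0, 5.0)]
soot_estimate = predict(bands, [0.5, 0.2, 0.1, 0.3], 0.05)
```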

  2. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
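The self-scheduling pattern the paper describes can be sketched in a few lines. This toy uses Python threads and a dummy per-case job purely for illustration; the paper's implementation dispatches complete serial runs (one angle of attack, or one WOPWOP observer location, per worker) across parallel nodes:

```python
from concurrent.futures import ThreadPoolExecutor

def serial_job(case):
    # Stand-in for one complete serial run: a panel code would
    # solve its linear system here for one angle of attack, and
    # WOPWOP would compute the noise at one observer location.
    return case, case * case

def run_all(cases, workers=4):
    # Idle workers pull the next unfinished case, so fast and
    # slow cases balance automatically: self-scheduling.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(ex.map(serial_job, cases))
```

Because each case is independent, speed-up is limited mainly by the longest single case and any load imbalance at the tail of the queue.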

  3. Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks

    DTIC Science & Technology

    2015-04-01

Witold Waldman and Manfred ... A method for minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate. It is based ... Aerospace

  4. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    NASA Astrophysics Data System (ADS)

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.

  5. An analytical study of hybrid ejector/internal combustion engine-driven heat pumps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, R.W.

    1988-01-01

Because ejectors can combine high reliability with low maintenance cost in a package requiring little capital investment, they may provide attractive heat pumping capability in situations where the importance of their inefficiencies is minimized. One such concept, a hybrid system in which an ejector driven by engine reject heat is used to increase the performance of an internal combustion engine-driven heat pump, was analyzed by modifying an existing ejector heat pump model and combining it with generic compressor and internal combustion engine models. Under the model assumptions for nominal cooling mode conditions, the results showed that hybrid systems could provide substantial performance augmentation: up to a 17% increase in system coefficient of performance for a parallel arrangement of an enhanced ejector with the engine-driven compressor. 4 refs., 4 figs., 4 tabs.

  6. Optimisation techniques in vaginal cuff brachytherapy.

    PubMed

    Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A

    2009-11-01

    The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir(192)) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. Keeping the Vk dose higher than 95% and the Rg dose less than 85% of the prescribed dose was intended. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p<0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution with a 99+/-7.4% mean dose within the target volume, requiring time-position optimisation.

  7. Lewis Structures Technology, 1988. Volume 2: Structural Mechanics

    NASA Technical Reports Server (NTRS)

    1988-01-01

Lewis Structures Div. performs and disseminates results of research conducted in support of aerospace engine structures. These results have a wide range of applicability to practitioners of structural engineering mechanics beyond the aerospace arena. The engineering community was familiarized with the depth and range of research performed by the division and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive evaluation, constitutive models and experimental capabilities, dynamic systems, fatigue and damage, wind turbines, hot section technology (HOST), aeroelasticity, structural mechanics codes, computational methods for dynamics, structural optimization, applications of structural dynamics, and structural mechanics computer codes.

  8. Computation of Engine Noise Propagation and Scattering Off an Aircraft

    NASA Technical Reports Server (NTRS)

    Xu, J.; Stanescu, D.; Hussaini, M. Y.; Farassat, F.

    2003-01-01

    The paper presents a comparison of experimental noise data measured in flight on a two-engine business jet aircraft with Kulite microphones placed on the suction surface of the wing with computational results. Both a time-domain discontinuous Galerkin spectral method and a frequency-domain spectral element method are used to simulate the radiation of the dominant spinning mode from the engine and its reflection and scattering by the fuselage and the wing. Both methods are implemented in computer codes that use the distributed memory model to make use of large parallel architectures. The results show that trends of the noise field are well predicted by both methods.

  9. Effect of preventive (beta blocker) treatment, behavioural migraine management, or their combination on outcomes of optimised acute treatment in frequent migraine: randomised controlled trial.

    PubMed

    Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina

    2010-09-29

To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days) during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.2 to -3.5), but not the addition of β blocker alone (-2.1 migraines/30 days, -1.9 to -2.2) or behavioural migraine management alone (-2.2 migraines/30 days, -2.0 to -2.4), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -1.9 to -2.2). 
For a clinically significant (≥50%) reduction in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.

  10. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, its solution had only fractionally lower uncertainty reduction than the GA's, at only a quarter of the computational resources used by the smallest specified GA configuration. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA terminated with a sub-optimal solution, that solution was similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances in which the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. 
These results suggest that, for the network design problem, resources would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
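The incremental optimisation routine amounts to a greedy build-up of the network: stations are added one at a time, each chosen as the candidate that best improves the objective given the stations already fixed. A toy sketch with a stand-in coverage objective (the paper's objective is the posterior flux-uncertainty reduction, not coverage); as the abstract notes, such a greedy routine can miss the global optimum:

```python
def incremental_design(candidates, k, score):
    """Greedily add the station that most improves the objective."""
    network, remaining = [], list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda c: score(network + [c]))
        network.append(best)
        remaining.remove(best)
    return network

# Stand-in objective: number of grid cells covered by the network.
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5}, "D": {1, 5, 6}}
cells_covered = lambda net: len(set().union(*(coverage[s] for s in net)))
picked = incremental_design(list(coverage), 2, cells_covered)
```

Each of the k rounds evaluates at most one candidate set per remaining station, which is why the IO routine needs far fewer objective evaluations than a GA population evolved over many generations.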

  11. Rivastigmine in apathetic but dementia and depression-free patients with Parkinson's disease: a double-blind, placebo-controlled, randomised clinical trial.

    PubMed

    Devos, David; Moreau, Caroline; Maltête, David; Lefaucheur, Romain; Kreisler, Alexandre; Eusebio, Alexandre; Defer, Gilles; Ouk, Thavarak; Azulay, Jean-Philippe; Krystkowiak, Pierre; Witjas, Tatiana; Delliaux, Marie; Destée, Alain; Duhamel, Alain; Bordet, Régis; Defebvre, Luc; Dujardin, Kathy

    2014-06-01

Even with optimal dopaminergic treatments, many patients with Parkinson's disease (PD) are frequently incapacitated by apathy prior to the development of dementia. We sought to establish whether rivastigmine's ability to inhibit acetyl- and butyrylcholinesterases could relieve the symptoms of apathy in dementia-free, non-depressed patients with advanced PD. We performed a multicentre, parallel, double-blind, placebo-controlled, randomised clinical trial (Protocol ID: 2008-002578-36; clinicaltrials.gov reference: NCT00767091) in patients with PD with moderate to severe apathy (despite optimised dopaminergic treatment) and without dementia. Patients from five French university hospitals were randomly assigned 1:1 to rivastigmine (transdermal patch of 9.5 mg/day) or placebo for 6 months. The primary efficacy criterion was the change over time in the Lille Apathy Rating Scale (LARS) score. 101 consecutive patients were screened, 31 were eligible and 16 and 14 participants were randomised into the rivastigmine and placebo groups, respectively. Compared with placebo, rivastigmine improved the LARS score (from -11.5 (-15/-7) at baseline to -20 (-25/-12) after treatment; F(1, 25)=5.2; p=0.031; adjusted effect size: -0.9). Rivastigmine also improved the caregiver burden and instrumental activities of daily living but failed to improve quality of life. No severe adverse events occurred in the rivastigmine group. Rivastigmine may represent a new therapeutic option for moderate to severe apathy in advanced PD patients with optimised dopaminergic treatment and without depression or dementia. These findings require confirmation in a larger clinical trial. Our results also confirmed that the presence of apathy can herald a pre-dementia state in PD. Clinicaltrials.gov reference: NCT00767091.

  12. Whole-brain high in-plane resolution fMRI using accelerated EPIK for enhanced characterisation of functional areas at 3T

    PubMed Central

    Yun, Seong Dae

    2017-01-01

    The relatively high imaging speed of EPI has led to its widespread use in dynamic MRI studies such as functional MRI. An approach to improve the performance of EPI, EPI with Keyhole (EPIK), has been previously presented and its use in fMRI was verified at 1.5T as well as 3T. The method has been proven to achieve a higher temporal resolution and smaller image distortions when compared to single-shot EPI. Furthermore, the performance of EPIK in the detection of functional signals was shown to be comparable to that of EPI. For these reasons, we were motivated to employ EPIK here for high-resolution imaging. The method was optimised to offer the highest possible in-plane resolution and slice coverage under the given imaging constraints: fixed TR/TE, FOV and acceleration factors for parallel imaging and partial Fourier techniques. The performance of EPIK was evaluated in direct comparison to the optimised protocol obtained from EPI. The two imaging methods were applied to visual fMRI experiments involving sixteen subjects. The results showed that enhanced spatial resolution with a whole-brain coverage was achieved by EPIK (1.00 mm × 1.00 mm; 32 slices) when compared to EPI (1.25 mm × 1.25 mm; 28 slices). As a consequence, enhanced characterisation of functional areas has been demonstrated in EPIK particularly for relatively small brain regions such as the lateral geniculate nucleus (LGN) and superior colliculus (SC); overall, a significantly increased t-value and activation area were observed from EPIK data. Lastly, the use of EPIK for fMRI was validated with the simulation of different types of data reconstruction methods. PMID:28945780

  13. Analysis and Evaluation of German Attainments and Research in the Liquid Rocket Engine Field. Volume 8. Rocket Engine Control and Safety Circuits

    DTIC Science & Technology

    1951-02-01

    the pressure switch (16) is activated. This causes the electrical circuit to open valve (11) and start the igniter (17). The nitrogen pressure...activates the pressure switch (11) at approximately 7 psi before it flows through the Injector (9) into the chamber...precluded. Accordingly, pressure switch (11) is inserted in the system in parallel (electrically) with the flow indicator (17), and the circuit may

  14. Thermal Management as a Force Multiplier within the Research, Development, and Engineering Command (RDECOM)

    DTIC Science & Technology

    2012-08-01

    pp. 4–9. 46. Ye, Liang; Tong, Ming Wei; Zeng, Xin. Design and Analysis of Multiple Parallel-pass Condensers. International Journal of Refrigeration...we mean energy that has low availability to do work (low exergy). The closer a system is to the condition of its surroundings in terms of...vehicle with a gasoline internal combustion engine loses 40% of its fuel energy through the exhaust gas, which is still at a relatively high
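
    The exergy point in this snippet can be made concrete with the Carnot factor: heat Q available at temperature T, against surroundings at T0, can yield at most Q·(1 − T0/T) of work. The sketch below uses illustrative numbers (the 900 K exhaust temperature and 300 K ambient are assumptions, not values from the source):

```python
def carnot_availability(q_joules, t_hot_k, t0_k):
    """Upper bound on the work recoverable from heat q_joules supplied
    at temperature t_hot_k with surroundings at t0_k, via the Carnot
    factor (1 - T0/T). Illustrative sketch only; a real exhaust stream
    needs the full flow-exergy balance."""
    return q_joules * (1.0 - t0_k / t_hot_k)

# 40% of fuel energy leaving as exhaust heat at ~900 K against a
# 300 K ambient: at most about two thirds of that heat is
# work-available (~66.7 J per 100 J of exhaust heat).
w_max = carnot_availability(100.0, 900.0, 300.0)
```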

  15. Research in applied mathematics, numerical analysis, and computer science

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  16. Optimal tracers for parallel labeling experiments and 13C metabolic flux analysis: A new precision and synergy scoring system.

    PubMed

    Crown, Scott B; Long, Christopher P; Antoniewicz, Maciek R

    2016-11-01

    13C-Metabolic flux analysis (13C-MFA) is a widely used approach in metabolic engineering for quantifying intracellular metabolic fluxes. The precision of fluxes determined by 13C-MFA depends largely on the choice of isotopic tracers and the specific set of labeling measurements. A recent advance in the field is the use of parallel labeling experiments for improved flux precision and accuracy. However, as of today, no systematic methods exist for identifying optimal tracers for parallel labeling experiments. In this contribution, we have addressed this problem by introducing a new scoring system and evaluating thousands of different isotopic tracer schemes. Based on this extensive analysis we have identified optimal tracers for 13C-MFA. The best single tracers were doubly 13C-labeled glucose tracers, including [1,6-13C]glucose, [5,6-13C]glucose and [1,2-13C]glucose, which consistently produced the highest flux precision independent of the metabolic flux map (here, 100 random flux maps were evaluated). Moreover, we demonstrate that pure glucose tracers perform better overall than mixtures of glucose tracers. For parallel labeling experiments the optimal isotopic tracers were [1,6-13C]glucose and [1,2-13C]glucose. Combined analysis of [1,6-13C]glucose and [1,2-13C]glucose labeling data improved the flux precision score by nearly 20-fold compared to the widely used tracer mixture of 80% [1-13C]glucose + 20% [U-13C]glucose.
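
    The scoring idea described in this abstract can be sketched under a common simplification: approximate the flux covariance of a tracer scheme by the Cramér-Rao bound built from its measurement Jacobian, and let the information matrices of parallel experiments add before inversion. Every name and matrix below is a hypothetical illustration, not the authors' actual scoring system:

```python
import numpy as np

def precision_score(J, sigma=1.0):
    """Hypothetical flux-precision score for one tracer scheme.
    J maps free fluxes to labeling measurements; the flux covariance is
    approximated by the Cramer-Rao bound sigma^2 * (J^T J)^-1, and the
    score is the inverse of the summed flux standard deviations, so
    tighter confidence intervals give a higher score."""
    cov = sigma**2 * np.linalg.inv(J.T @ J)
    return 1.0 / np.sum(np.sqrt(np.diag(cov)))

def parallel_score(jacobians, sigma=1.0):
    """Score for a parallel labeling experiment: the information
    matrices of the individual tracers add before inversion."""
    info = sum(J.T @ J for J in jacobians) / sigma**2
    cov = np.linalg.inv(info)
    return 1.0 / np.sum(np.sqrt(np.diag(cov)))

# Two complementary (made-up) schemes: each constrains one flux well.
J1 = np.array([[1.0, 0.0], [0.0, 0.1]])
J2 = np.array([[0.1, 0.0], [0.0, 1.0]])
```

    Run together, the two schemes constrain both fluxes, so the parallel score exceeds either single score, mirroring the kind of precision gain the abstract reports for the [1,6-13C]/[1,2-13C] glucose pair.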

  17. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be utilized collaboratively or independently according to the needs or phases of an algorithm, optimizing algorithm processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
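
    The torus network mentioned above links each node to its six nearest neighbours, with wrap-around in every dimension so no node sits on an edge. A minimal sketch of that neighbour computation (the coordinate tuples are illustrative, not the machine's actual addressing scheme):

```python
def torus_neighbors(coord, dims):
    """Six nearest neighbours of a node in a 3-D torus with wrap-around
    links in each dimension. `coord` is the node's (x, y, z) position
    and `dims` the torus extents; illustrative sketch only."""
    x, y, z = coord
    X, Y, Z = dims
    return [
        ((x - 1) % X, y, z), ((x + 1) % X, y, z),
        (x, (y - 1) % Y, z), (x, (y + 1) % Y, z),
        (x, y, (z - 1) % Z), (x, y, (z + 1) % Z),
    ]
```

    On a 4x4x4 torus the corner node (0, 0, 0) wraps to (3, 0, 0) rather than falling off an edge, which keeps every node's degree at six and point-to-point routes short.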

  18. Program Correctness, Verification and Testing for Exascale (Corvette)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Koushik; Iancu, Costin; Demmel, James W

    The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low overhead automated and precise detection of concurrency bugs at scale. 2. Using low overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.
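
    Reproducible floating-point execution (item 4 in the list above) is hard because floating-point addition is not associative: the same parallel reduction can return different answers depending on the order in which partial sums are combined. A minimal sketch of the effect (illustrative only, not project code):

```python
def sum_in_order(values, order):
    """Sum `values` in the given index order -- a stand-in for the
    nondeterministic combination order of a parallel reduction."""
    total = 0.0
    for i in order:
        total += values[i]
    return total

# 1e16 + 1.0 rounds back to 1e16 in double precision, so the result
# depends on whether the large terms cancel before or after the small
# term is added.
vals = [1e16, -1e16, 1.0]
a = sum_in_order(vals, [0, 1, 2])  # (1e16 - 1e16) + 1.0 -> 1.0
b = sum_in_order(vals, [0, 2, 1])  # (1e16 + 1.0) - 1e16 -> 0.0
```

    Fixing the reduction order (or using compensated summation) removes this nondeterminism, at some cost in parallel flexibility.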

  19. [Master course in biomedical engineering].

    PubMed

    Jobbágy, Akos; Benyó, Zoltán; Monos, Emil

    2009-11-22

    The Bologna Declaration aims at harmonizing the European higher education structure. In accordance with the Declaration, biomedical engineering will be offered as a master's (MSc) course in Hungary as well, from 2009. Since 1995, a biomedical engineering course has been held in cooperation between three universities: Semmelweis University, Budapest Veterinary University, and Budapest University of Technology and Economics. One of the latter's faculties, the Faculty of Electrical Engineering and Informatics, has been responsible for the course. Students could start their biomedical engineering studies - usually in parallel with their first degree course - after they had collected at least 180 ECTS credits. Consequently, the biomedical engineering course could have been considered a master's course even before the Bologna Declaration. Students had to collect 130 ECTS credits during the six-semester course. This is equivalent to four semesters of full-time study, because during the first three semesters the curriculum required students to gain only one third of the usual ECTS credits. The paper gives a survey of the new biomedical engineering master's course, briefly summing up the subjects in the curriculum as well.

  20. The limits to biocatalysis: pushing the envelope.

    PubMed

    Sheldon, Roger A; Brady, Dean

    2018-06-12

    In the period 1985 to 1995 applications of biocatalysis, driven by the need for more sustainable manufacture of chemicals and catalytic, (enantio)selective methods for the synthesis of pharmaceutical intermediates, largely involved the available hydrolases. This was followed, in the next two decades, by revolutionary developments in protein engineering and directed evolution for the optimisation of enzyme function and performance that totally changed the biocatalysis landscape. In the same period, metabolic engineering and synthetic biology revolutionised the use of whole cell biocatalysis in the synthesis of commodity chemicals by fermentation. In particular, developments in the enzymatic enantioselective synthesis of chiral alcohols and amines are highlighted. Progress in enzyme immobilisation facilitated applications under harsh industrial conditions, such as in organic solvents. The emergence of biocatalytic or chemoenzymatic cascade processes, often with co-immobilised enzymes, has enabled telescoping of multi-step processes. Discovering and inventing new biocatalytic processes, based on (meta)genomic sequencing, evolving enzyme promiscuity, chemomimetic biocatalysis, artificial metalloenzymes, and the introduction of non-canonical amino acids into proteins, are pushing back the limits of biocatalysis function. Finally, the integral role of biocatalysis in developing a biobased carbon-neutral economy is discussed.
