Sample records for core optimization problem

  1. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

    To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) is developed. This code is applicable to all types and geometries of PWR core structures, with an unlimited number of fuel assembly (FA) types in the inventory. To this end, an innovative genetic algorithm is developed by modifying the classical representation of the genotype, and in-core fuel management heuristic rules are introduced into GARCO. The core re-load design optimization has two parts: loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem because of its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this separation does not reflect the true optimal solution. GARCO-PSU solves LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)
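
    The record above describes a permutation-encoded genetic algorithm over fuel-assembly arrangements. As a rough illustration of that kind of GA loop (not GARCO-PSU itself), the sketch below assumes a hypothetical assembly inventory and a placeholder fitness function standing in for the core-physics evaluation.

    ```python
    # Minimal sketch of a permutation-encoded GA loop for loading-pattern search.
    # Not GARCO-PSU: the inventory, fitness model and selection scheme are placeholders.
    import random

    INVENTORY = list(range(20))          # hypothetical fuel-assembly IDs
    N_POSITIONS = 20                     # core positions to fill

    def fitness(pattern):
        # Placeholder for a core-physics evaluation (e.g. cycle length minus
        # penalties for violated safety constraints such as peak power).
        return -sum(abs(a - b) for a, b in zip(pattern, pattern[1:]))

    def crossover(p1, p2):
        # Order crossover keeps each assembly exactly once (permutation genotype).
        cut = random.randrange(1, N_POSITIONS)
        head = p1[:cut]
        return head + [a for a in p2 if a not in head]

    def mutate(pattern, rate=0.1):
        if random.random() < rate:
            i, j = random.sample(range(N_POSITIONS), 2)
            pattern[i], pattern[j] = pattern[j], pattern[i]
        return pattern

    population = [random.sample(INVENTORY, N_POSITIONS) for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]        # elitist selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(40)]
        population = parents + children

    best = max(population, key=fitness)
    print("best pattern:", best, "fitness:", fitness(best))
    ```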

  2. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

    The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve the LP optimization problem for both the PWR and the Boiling Water Reactor (BWR). The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using the evolutionary operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Core Optimization) code, is developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm is changed, so as to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis and preliminary results are shown for the VVER-1000 reactor hexagonal geometry core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed by SKODA Inc. to analyze the VVER. The SIMULATE-3 code, which is an advanced two-group nodal code, is used to analyze the TMI-1.

  3. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics, and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multi-core and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
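
    The three-dimensional design space described above can be pictured as a search over discrete configurations. The sketch below enumerates a hypothetical space of memory layouts, compiler flags and loop schedules and scores each configuration with a runtime-times-energy objective; the parameter values and the measure() stub are assumptions for illustration, and this is not the OpenTuner API.

    ```python
    # Schematic exhaustive search over a tiny tuning space (layout, flag, schedule).
    # measure() is a stand-in for building/running the kernel and reading runtime
    # and energy counters; all parameter values here are hypothetical.
    import itertools, random

    SPACE = {
        "layout":   ["array-of-structs", "struct-of-arrays"],
        "opt_flag": ["-O2", "-O3", "-Ofast"],
        "schedule": ["static", "dynamic,64", "guided"],
    }

    def measure(config):
        # Placeholder measurement: returns (runtime_seconds, energy_joules).
        random.seed(hash(tuple(sorted(config.items()))))
        return random.uniform(1.0, 2.0), random.uniform(50.0, 100.0)

    best = None
    for values in itertools.product(*SPACE.values()):
        config = dict(zip(SPACE.keys(), values))
        runtime, energy = measure(config)
        score = runtime * energy          # a simple energy-delay style objective
        if best is None or score < best[0]:
            best = (score, config, runtime, energy)

    print("best config:", best[1], "runtime:", round(best[2], 3), "energy:", round(best[3], 1))
    ```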

  4. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    DOE PAGES

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
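
    Schematically, and in generic notation rather than that of the paper, the coupling can be written as a constrained minimization of the state mismatch on the overlap region:

    ```latex
    % u_a: atomistic state on \Omega_a, u_c: continuum state on \Omega_c,
    % \Omega_o: overlap region, F_a, F_c: the respective force-balance operators.
    \[
    \begin{aligned}
    \min_{u_a,\,u_c}\quad & \tfrac{1}{2}\,\lVert u_a - u_c \rVert^2_{\Omega_o} \\
    \text{subject to}\quad & F_a(u_a) = 0 \ \text{in } \Omega_a,\\
                           & F_c(u_c) = 0 \ \text{in } \Omega_c .
    \end{aligned}
    \]
    ```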

  5. Parallelization of combinatorial search when solving knapsack optimization problem on computing systems based on multicore processors

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This paper deals with a model of the knapsack optimization problem and a method for solving it based on directed combinatorial search in the Boolean space. The author's specialized mathematical model for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the cores of a multi-core processor, are also discussed. The paper provides an example of decomposing the search zone into several search spheres and distributing them across the cores of a quad-core processor. Finally, the author gives a formula estimating the theoretical maximum computational acceleration that can be achieved by parallelizing the search zone into search spheres on an unlimited number of processor cores.
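
    The decomposition idea can be illustrated as follows: fixing the first K item bits partitions the Boolean search space into 2^K sub-spaces ("search spheres") that can be searched independently on separate cores. The sketch below uses made-up item data and Python's multiprocessing pool; it illustrates the scheme, not the author's implementation.

    ```python
    # Sketch: fix the first K item bits to split the Boolean space into 2**K
    # "search spheres", then search each sphere in a separate worker process.
    # Item weights/values and the instance size are made up for illustration.
    from itertools import product
    from multiprocessing import Pool

    WEIGHTS = [12, 7, 11, 8, 9, 6, 5, 14, 3, 10]
    VALUES  = [24, 13, 23, 15, 16, 7, 5, 30, 6, 19]
    CAPACITY = 40
    K = 3                                   # fixed bits -> 2**K spheres

    def search_sphere(prefix):
        best_value, best_x = -1, None
        for tail in product((0, 1), repeat=len(WEIGHTS) - K):
            x = prefix + tail
            weight = sum(w for w, xi in zip(WEIGHTS, x) if xi)
            value = sum(v for v, xi in zip(VALUES, x) if xi)
            if weight <= CAPACITY and value > best_value:
                best_value, best_x = value, x
        return best_value, best_x

    if __name__ == "__main__":
        spheres = list(product((0, 1), repeat=K))
        with Pool() as pool:                # one sphere per available worker
            results = pool.map(search_sphere, spheres)
        print("optimum:", max(results, key=lambda r: r[0]))
        # Whatever the core count, the speedup of this decomposition alone
        # cannot exceed the number of spheres, 2**K.
    ```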

  6. Combinatorial optimization games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, X.; Ibaraki, Toshihide; Nagamochi, Hiroshi

    1997-06-01

    We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties required for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if the core is nonempty, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.
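
    For reference, the core mentioned in this record is the standard cooperative-game solution concept; in the profit-game convention it is the set of imputations that no coalition can improve upon on its own. (The integer-programming characterization is the paper's result and is only paraphrased in the abstract above.)

    ```latex
    % Core of a cooperative game (N, v), profit-game convention: the payoff
    % vector x distributes v(N) exactly and no coalition S can do better alone.
    \[
    \mathrm{core}(N, v) \;=\;
    \Bigl\{\, x \in \mathbb{R}^{N} \;:\;
      \textstyle\sum_{i \in N} x_i = v(N), \quad
      \textstyle\sum_{i \in S} x_i \ \ge\ v(S) \;\; \forall\, S \subseteq N
    \,\Bigr\}
    \]
    ```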

  7. Prediction and Optimization of Key Performance Indicators in the Production of Stator Core Using a GA-NN Approach

    NASA Astrophysics Data System (ADS)

    Rajora, M.; Zou, P.; Xu, W.; Jin, L.; Chen, W.; Liang, S. Y.

    2017-12-01

    With the rapidly changing demands of the manufacturing market, intelligent techniques are being used to solve engineering problems due to their ability to handle nonlinear complex problems. For example, the conventional production of stator cores relies on experienced engineers to make an initial plan for the number of compensation sheets to be added to achieve uniform pressure distribution throughout the laminations. Additionally, these engineers must use their experience to revise the initial plans based on the measurements made during the production of the stator core. However, this method yields inconsistent results, as humans are incapable of storing and analysing large amounts of data. In this article, first, a Neural Network (NN), trained using a hybrid Levenberg-Marquardt (LM) - Genetic Algorithm (GA), is developed to assist the engineers with the decision-making process. Next, the trained NN is used as a fitness function in an optimization algorithm to find the optimal values of the initial compensation sheet plan, with the aim of minimizing the required revisions during the production of the stator core.
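
    The surrogate idea in this record can be sketched as follows: fit a small regression network to historical (process measurements → KPI) data, then search candidate compensation-sheet plans against the trained network. A generic scikit-learn trainer stands in for the paper's LM-GA hybrid, and the data and variable ranges below are synthetic assumptions.

    ```python
    # Sketch of a neural-network surrogate used as the fitness function of a
    # simple search. The data are synthetic and the trainer is generic
    # scikit-learn, not the paper's Levenberg-Marquardt / GA hybrid.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 5, size=(200, 4))        # e.g. sheet counts at 4 locations
    y = -((X - 2.5) ** 2).sum(axis=1)           # synthetic KPI, best near 2.5 each

    surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    surrogate.fit(X, y)

    # Crude random search over candidate plans, scored by the surrogate.
    candidates = rng.uniform(0, 5, size=(5000, 4))
    best = candidates[np.argmax(surrogate.predict(candidates))]
    print("suggested compensation-sheet plan:", np.round(best, 2))
    ```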

  8. Luminescence and efficiency optimization of InGaN/GaN core-shell nanowire LEDs by numerical modelling

    NASA Astrophysics Data System (ADS)

    Römer, Friedhard; Deppner, Marcus; Andreev, Zhelio; Kölper, Christopher; Sabathil, Matthias; Strassburg, Martin; Ledig, Johannes; Li, Shunfeng; Waag, Andreas; Witzigmann, Bernd

    2012-02-01

    We present a computational study on the anisotropic luminescence and the efficiency of a core-shell type nanowire LED based on GaN with InGaN active quantum wells. The physical simulator used for analyzing this device integrates a multidimensional drift-diffusion transport solver and a k·p Schrödinger problem solver for quantization effects and luminescence. The solution of both problems is coupled to achieve self-consistency. Using this solver we investigate the effect of dimensions, design of quantum wells, and current injection on the efficiency and luminescence of the core-shell nanowire LED. The anisotropy of the luminescence and re-absorption is analyzed with respect to the external efficiency of the LED. From the results we derive strategies for design optimization.

  9. CMS Readiness for Multi-Core Workload Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  10. CMS readiness for multi-core workload scheduling

    NASA Astrophysics Data System (ADS)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  11. Large-scale linear programs in planning and prediction.

    DOT National Transportation Integrated Search

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  12. Five Skills Psychiatrists Should Have in Order to Provide Patients with Optimal Ethical Care

    PubMed Central

    2011-01-01

    Analyses of empirical research and ethical problems require different skills and approaches. This article presents five core skills psychiatrists need to be able to address ethical problems optimally. These include their being able to recognize ethical conflicts and distinguish them from empirical questions, apply all morally relevant values, and know good from bad ethical arguments. Clinical examples of each are provided. PMID:21487542

  13. IceChrono v1: a probabilistic model to compute a common and optimal chronology for several ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, Frédéric

    2015-04-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores is essential to interpret the paleo records that they contain, but it is a complicated problem since it involves different dating methods. Here I present IceChrono v1, a new probabilistic model to combine different kinds of chronological information to obtain a common and optimized chronology for several ice cores, as well as its uncertainty. It is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the vertical thinning function. The chronological information used is: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and gas dated horizons, ice and gas dated depth intervals, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air), and stratigraphic links in between ice cores (ice-ice, air-air or mixed ice-air and air-ice links). The optimization problem is formulated as a least squares problem, that is, all probability densities are assumed Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono is similar in scope to the Datice model, but differs from it from the mathematical, numerical and programming points of view. I apply IceChrono to an AICC2012-like experiment and find results similar to those of Datice within a few centuries, which is a confirmation of both the IceChrono and Datice codes. IceChrono v1 is freely available under the GPL v3 open source license.
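
    As a toy illustration of the estimation scheme (stacking Gaussian model and observation residuals and minimizing the sum of squares with Levenberg-Marquardt), the sketch below fits a deliberately crude one-core age-depth model; the numbers and the model form are invented and bear no relation to IceChrono's actual physics.

    ```python
    # Toy least-squares inversion in the spirit described above: residuals from
    # dated horizons plus a weak prior, solved with scipy's Levenberg-Marquardt.
    # The age-depth model and all numbers are invented for illustration.
    import numpy as np
    from scipy.optimize import least_squares

    depth_horizons = np.array([100.0, 300.0, 700.0])     # dated horizons (m)
    age_horizons   = np.array([1000.0, 3200.0, 7600.0])  # their ages (yr)

    def age_model(params, depth):
        accumulation, thinning = params
        return depth * thinning / accumulation            # crude age-depth relation

    def residuals(params):
        obs = (age_model(params, depth_horizons) - age_horizons) / 100.0  # sigma ~ 100 yr
        prior = (params - np.array([0.1, 1.0])) / 0.5                     # weak prior
        return np.concatenate([obs, prior])

    fit = least_squares(residuals, x0=[0.1, 1.0], method="lm")
    print("estimated (accumulation, thinning):", fit.x)
    ```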

  14. GVIPS Models and Software

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.

    2007-01-01

    Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.

  15. Optimization of the coherence function estimation for multi-core central processing unit

    NASA Astrophysics Data System (ADS)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit for optimizing the coherence function evaluation arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed for its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural and compiler optimizations, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization have been significantly improved, showing a high degree of parallelism of the constructed computation. The developed software underwent state registration and will be used as a part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
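
    The quantity being optimized is the magnitude-squared coherence between sensor channels. The sketch below computes it with SciPy's Welch-based estimator and farms independent channel pairs out to worker processes; the signals are synthetic, and this stands in for (rather than reproduces) the hand-optimized FFT code of the paper.

    ```python
    # Coherence between channel pairs, with pairs distributed over a process pool.
    # Signals are synthetic; scipy.signal.coherence replaces the hand-tuned FFT code.
    import numpy as np
    from multiprocessing import Pool
    from scipy.signal import coherence

    FS = 10_000.0                                   # sample rate, Hz
    np.random.seed(0)
    t = np.arange(0, 1.0, 1.0 / FS)
    common = np.sin(2 * np.pi * 250 * t)            # shared component (e.g. leak noise)
    channels = [common + 0.5 * np.random.randn(t.size) for _ in range(4)]

    def pair_coherence(pair):
        i, j = pair
        f, cxy = coherence(channels[i], channels[j], fs=FS, nperseg=1024)
        return i, j, f[np.argmax(cxy)], cxy.max()

    if __name__ == "__main__":
        pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
        with Pool() as pool:
            for i, j, f_peak, c_peak in pool.map(pair_coherence, pairs):
                print(f"channels {i}-{j}: peak coherence {c_peak:.2f} at {f_peak:.0f} Hz")
    ```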

  16. Optimal and Autonomous Control Using Reinforcement Learning: A Survey.

    PubMed

    Kiumarsi, Bahare; Vamvoudakis, Kyriakos G; Modares, Hamidreza; Lewis, Frank L

    2018-06-01

    This paper reviews the current state of the art on reinforcement learning (RL)-based feedback control solutions to optimal regulation and tracking of single and multiagent systems. Existing RL solutions to both optimal and control problems, as well as graphical games, will be reviewed. RL methods learn the solution to optimal control and game problems online and using measured data along the system trajectories. We discuss Q-learning and the integral RL algorithm as core algorithms for discrete-time (DT) and continuous-time (CT) systems, respectively. Moreover, we discuss a new direction of off-policy RL for both CT and DT systems. Finally, we review several applications.
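
    As a reminder of the discrete-time core algorithm the survey builds on, here is a minimal tabular Q-learning loop on a made-up deterministic chain; the applications covered by the survey learn along measured trajectories of the controlled system rather than a toy environment like this one.

    ```python
    # Minimal tabular Q-learning on a toy chain MDP (illustration only).
    import random

    N_STATES, ACTIONS = 5, (-1, +1)          # states 0..4, move left or right
    GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

    def step(s, a):
        s2 = min(max(s + a, 0), N_STATES - 1)
        return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for episode in range(500):
        s = 0
        for _ in range(20):
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r = step(s, a)
            # Q-learning update: bootstrap from the greedy value of the next state.
            Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
            s = s2

    print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
    ```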

  17. Design and multi-physics optimization of rotary MRF brakes

    NASA Astrophysics Data System (ADS)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase. As a result, the execution speed becomes too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor. The distribution of tasks and the transfer of information to the next party are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. The wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method showed better performance than conventional PSO and provided small, lightweight, high-impedance rotary MRF brake designs.
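
    A generic sketch of the subpopulation idea follows: several small swarms search in parallel and occasionally pass their best solution on, loosely mirroring the hunting-party division of labor described above. The objective is a simple test function, not the magnetic or thermal model of the brake, and all parameter values are assumptions.

    ```python
    # Generic particle swarm split into subpopulations with periodic sharing
    # of the global best. The objective is a placeholder, not a brake model.
    import random

    DIM, N_SUBPOPS, PARTICLES, ITERS = 4, 3, 10, 100

    def objective(x):                       # placeholder multi-physics cost
        return sum(xi * xi for xi in x)

    def new_particle():
        pos = [random.uniform(-5, 5) for _ in range(DIM)]
        return {"x": pos[:], "v": [0.0] * DIM, "best": pos[:], "best_f": objective(pos)}

    subpops = [[new_particle() for _ in range(PARTICLES)] for _ in range(N_SUBPOPS)]
    global_best, global_f = subpops[0][0]["x"][:], subpops[0][0]["best_f"]

    for it in range(ITERS):
        for swarm in subpops:
            local_best = min(swarm, key=lambda p: p["best_f"])
            for p in swarm:
                for d in range(DIM):
                    p["v"][d] = (0.7 * p["v"][d]
                                 + 1.5 * random.random() * (p["best"][d] - p["x"][d])
                                 + 1.5 * random.random() * (local_best["best"][d] - p["x"][d]))
                    p["x"][d] += p["v"][d]
                f = objective(p["x"])
                if f < p["best_f"]:
                    p["best"], p["best_f"] = p["x"][:], f
                if f < global_f:
                    global_best, global_f = p["x"][:], f
        if it % 20 == 0:                    # occasional information transfer
            for swarm in subpops:
                swarm[0]["best"], swarm[0]["best_f"] = global_best[:], global_f

    print("best found:", [round(v, 3) for v in global_best], "f =", round(global_f, 6))
    ```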

  18. A gradient system solution to Potts mean field equations and its electronic implementation.

    PubMed

    Urahama, K; Ueno, S

    1993-03-01

    A gradient system solution method is presented for solving Potts mean field equations for combinatorial optimization problems subject to winner-take-all constraints. In the proposed method the optimum solution is sought using gradient descent differential equations whose trajectory is confined within the feasible solution space of the optimization problem. This gradient system is proven theoretically to always produce a legal local optimum solution of combinatorial optimization problems. An elementary analog electronic circuit implementing the presented method is designed on the basis of current-mode subthreshold MOS technologies. The core constituent of the circuit is the winner-take-all circuit developed by Lazzaro et al. Correct functioning of the presented circuit is exemplified with simulations of circuits implementing the scheme for solving shortest path problems.
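
    The mean-field counterpart of the circuit can be imitated numerically: each group of Potts variables is kept on the probability simplex by a softmax (winner-take-all) update while an energy with an occupancy penalty is descended. The toy assignment cost and penalty weight below are assumptions; this mimics the idea, not the analog implementation.

    ```python
    # Toy Potts mean-field relaxation under winner-take-all constraints: every
    # row of V is a probability vector, so the trajectory stays feasible while
    # the assignment energy is descended. The cost matrix is random.
    import numpy as np

    rng = np.random.default_rng(1)
    C = rng.uniform(0, 1, size=(5, 5))        # cost of assigning item i to slot a
    T = 0.1                                   # "temperature" of the mean-field model
    PENALTY = 2.0                             # discourage two items in the same slot

    V = np.full((5, 5), 1.0 / 5)              # start at the uniform (feasible) point
    for _ in range(200):
        # Local field: gradient of cost plus column-occupancy penalty w.r.t. V.
        U = C + PENALTY * (V.sum(axis=0, keepdims=True) - V)
        # Winner-take-all (softmax) update keeps every row on the simplex.
        expU = np.exp(-U / T)
        V = expU / expU.sum(axis=1, keepdims=True)

    assignment = V.argmax(axis=1)
    print("assignment:", assignment, "cost:", C[np.arange(5), assignment].sum())
    ```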

  19. A Comparison Study of Stochastic- and Guaranteed- Service Approaches on Safety Stock Optimization for Multi Serial Systems

    NASA Astrophysics Data System (ADS)

    Li, Peng; Wu, Di

    2018-01-01

    Two competing approaches have been developed over the years for multi-echelon inventory system optimization: the stochastic-service approach (SSA) and the guaranteed-service approach (GSA). Although they solve the same inventory policy optimization problem at their core, they make different assumptions with regard to the role of safety stock. This paper provides a detailed comparison of the two approaches by considering operating flexibility costs in the optimization of (R, Q) policies for a continuous review serial inventory system. The results indicate that the GSA model is more efficient in solving the complicated inventory problem in terms of computation time, and that the cost difference between the two approaches is quite small.

  20. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1987-01-01

    Optimization Techniques Applied to Passive Measures for In-Orbit Spacecraft Survivability is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impact problems at the lowest weight and cost.

  1. Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli

    2018-01-01

    In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are developing toward multi-domain and larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces additional problems, including selecting the optimal multicast domain sequence and deciding which domains the core nodes belong to. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks; mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvement in network resource occupation and multicast tree setup latency compared with conventional algorithms that were proposed for a single-domain network environment.

  2. A common and optimized age scale for Antarctic ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, F.; Veres, D.; Landais, A.; Bazin, L.; Lemieux-Dudon, B.; Toye Mahamadou Kele, H.; Wolff, E.; Martinerie, P.

    2012-04-01

    Dating ice cores is a complex problem because 1) there is an age shift between the gas bubbles and the surrounding ice, 2) there are many different ice cores which can be synchronized with various proxies, and 3) there are many methods to date the ice and the gas bubbles, each with advantages and drawbacks. These methods fall into the following categories: 1) Ice flow (for the ice) and firn densification modelling (for the gas bubbles); 2) Comparison of ice core proxies with insolation variations (so-called orbital tuning methods); 3) Comparison of ice core proxies with other well-dated archives; 4) Identification of well-dated horizons, such as tephra layers or geomagnetic anomalies. Recently, a new dating tool has been developed (DATICE, Lemieux-Dudon et al., 2010) to take all the different dating information into account and produce a common and optimal chronology for ice cores with estimated confidence intervals. In this talk we will review the different dating information for Antarctic ice cores and show how the DATICE tool can be applied.

  3. Fuel management optimization using genetic algorithms and expert knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1996-09-01

    The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.

  4. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.

    PubMed

    Mohsen, Abdulqader M

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and a mutation operator provide jumping ability and global convergence, while local search can speed up the convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, a mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator are used to increase the population diversity of the ants from time to time, and local search is used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.
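
    A compact sketch of the hybrid scheme follows: ant-colony tour construction with pheromone evaporation and deposit, plus a simulated-annealing-style acceptance of random 2-swaps to preserve diversity. The city coordinates and parameter values are made up, and this illustrates the combination rather than the paper's implementation.

    ```python
    # Hybrid ACO + SA-style mutation for a tiny random TSP instance.
    import math, random

    random.seed(0)
    CITIES = [(random.random(), random.random()) for _ in range(15)]
    N = len(CITIES)

    def dist(i, j):
        return math.hypot(CITIES[i][0] - CITIES[j][0], CITIES[i][1] - CITIES[j][1])

    def length(tour):
        return sum(dist(tour[k], tour[(k + 1) % N]) for k in range(N))

    tau = [[1.0] * N for _ in range(N)]            # pheromone matrix
    best_tour, best_len, temp = None, float("inf"), 1.0

    for it in range(200):
        for ant in range(10):
            tour, unvisited = [0], set(range(1, N))
            while unvisited:                        # probabilistic tour construction
                i = tour[-1]
                weights = [tau[i][j] * (1.0 / dist(i, j)) ** 2 for j in unvisited]
                tour.append(random.choices(list(unvisited), weights=weights)[0])
                unvisited.remove(tour[-1])
            # SA-flavoured mutation: accept a random 2-swap if better, or with
            # Boltzmann probability if worse, to keep the population diverse.
            a, b = random.sample(range(N), 2)
            cand = tour[:]
            cand[a], cand[b] = cand[b], cand[a]
            if length(cand) < length(tour) or \
               random.random() < math.exp((length(tour) - length(cand)) / temp):
                tour = cand
            L = length(tour)
            for k in range(N):                     # pheromone deposit
                tau[tour[k]][tour[(k + 1) % N]] += 1.0 / L
            if L < best_len:
                best_tour, best_len = tour, L
        tau = [[0.9 * t for t in row] for row in tau]   # evaporation
        temp *= 0.98                                    # annealing schedule

    print("best tour length:", round(best_len, 3))
    ```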

  5. IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, F.; Bazin, L.; Capron, E.; Landais, A.; Lemieux-Dudon, B.; Masson-Delmotte, V.

    2015-05-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age-scale uncertainty are essential to interpret the climate and environmental records that they contain. It is, however, a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the lock-in depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model are models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice- and air-dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links in between ice cores. The optimization is formulated as a least squares problem, implying that all densities of probabilities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono follows an approach similar to that of the Datice model which was recently used to produce the AICC2012 (Antarctic ice core chronology) for four Antarctic ice cores and one Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals with known durations, correlated observations, observations as air intervals with known durations and observations as mixed ice-air stratigraphic links. IceChrono1 is freely available under the General Public License v3 open source license.

  6. Research on the integration of teaching content of core courses in Agro-ecological environmental specialties of higher vocational colleges

    NASA Astrophysics Data System (ADS)

    Chen, Juan; Ma, Guosheng

    2018-02-01

    Curriculum is the means to cultivate higher vocational talents. On the basis of analyzing the core curriculum problems of curriculum reform and Agro-ecological environmental specialties in higher vocational colleges, this paper puts forward the optimization and integration measures of 6 core courses, including “Eco-environment Repair Technology”, “Agro-environmental Management Plan”, “Environmental Engineering Design”, “Environmental Pest Management Technology”, “Agro-chemical Pollution Control Technology”, “Agro-environmental Testing and Analysis”. It integrates the vocational qualification certificate education and professional induction certificate training items, and enhances the adaptability, skills and professionalism of professional core curriculum.

  7. Evaluating Multi-core Architectures through Accelerating the Three-Dimensional Lax–Wendroff Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2014-07-18

    Wave propagation forward modeling is a widely used computational method in oil and gas exploration. The iterative stencil loops in such problems have broad applications in scientific computing. However, executing such loops can be highly time-consuming, which greatly limits the application's performance and power efficiency. In this paper, we accelerate the forward modeling technique on the latest multi-core and many-core architectures such as Intel Sandy Bridge CPUs, the NVIDIA Fermi C2070 GPU, the NVIDIA Kepler K20x GPU, and the Intel Xeon Phi co-processor. For the GPU platforms, we propose two parallel strategies to explore the performance optimization opportunities for our stencil kernels. For Sandy Bridge CPUs and MIC, we also employ various optimization techniques in order to achieve the best performance.

  8. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation.

    PubMed

    Spoorenberg, Sophie L W; Reijneveld, Sijmen A; Middel, Berrie; Uittenbroek, Ronald J; Kremer, Hubertus P H; Wynia, Klaske

    2015-01-01

    The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. A Delphi study was performed in order to reach consensus (≥70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF). The Delphi panel comprised 41 older adults, medical and non-medical experts. Content validity of the set was tested in a cross-sectional study including 267 older adults identified as frail or having complex care needs. Consensus was reached for 30 ICF categories in the Delphi study (fourteen Body functions, ten Activities and Participation and six Environmental Factors categories). Content validity of the set was high: the prevalence of all the problems was >10%, except for d530 Toileting. The most frequently reported problems were b710 Mobility of joint functions (70%), b152 Emotional functions (65%) and b455 Exercise tolerance functions (62%). No categories had missing values. The final Geriatric ICF Core Set is a comprehensive and valid set of 29 ICF categories, reflecting the most relevant health-related problems among community-living older adults without dementia. This Core Set may contribute to optimal care provision and support of the older population. Implications for Rehabilitation The Geriatric ICF Core Set may provide a practical tool for gaining an understanding of the relevant health-related problems of community-living older adults without dementia. The Geriatric ICF Core Set may be used in primary care practice as an assessment tool in order to tailor care and support to the needs of older adults. The Geriatric ICF Core Set may be suitable for use in multidisciplinary teams in integrated care settings, since it is based on a broad range of problems in functioning. Professionals should pay special attention to health problems related to mobility and emotional functioning since these are the most prevalent problems in community-living older adults.

  9. Multi-objective optimal design of sandwich panels using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Xiaomei; Jiang, Yiping; Pueh Lee, Heow

    2017-10-01

    In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.

  10. IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, Frédéric; Bazin, Lucie; Capron, Emilie; Landais, Amaëlle; Lemieux-Dudon, Bénédicte; Masson-Delmotte, Valérie

    2016-04-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age scale uncertainty are essential to interpret the climate and environmental records that they contain. It is however a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model are: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and air dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links in between ice cores. The optimization is formulated as a least squares problem, implying that all densities of probabilities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono follows an approach similar to that of the Datice model which was recently used to produce the AICC2012 chronology for 4 Antarctic ice cores and 1 Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming point of views. The capabilities of IceChrono is demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both IceChrono and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals with known durations, correlated observations, observations as gas intervals with known durations and observations as mixed ice-air stratigraphic links. IceChrono1 is freely available under the GPL v3 open source license.

  11. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP

    PubMed Central

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and a mutation operator provide jumping ability and global convergence, while local search can speed up the convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, a mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator are used to increase the population diversity of the ants from time to time, and local search is used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality. PMID:27999590

  12. Optimization design of toroidal core for magnetic energy harvesting near power line by considering saturation effect

    NASA Astrophysics Data System (ADS)

    Park, Bumjin; Kim, Dongwook; Park, Jaehyoung; Kim, Kibeom; Koo, Jay; Park, HyunHo; Ahn, Seungyoung

    2018-05-01

    Recently, magnetic energy harvesting technologies have been actively studied for the self-sustainable operation of applications around power lines. However, magnetic energy harvesting around power lines has the problem of magnetic saturation, which can degrade the power performance of the harvester. In this paper, an optimal design of a toroidal core for magnetic energy harvesters is proposed with consideration of magnetic saturation near power lines. Using the permeability-H curve and Ampere's circuital law, the optimum dimensional parameters needed to generate the induced voltage were analyzed via calculation and simulation. To reflect a real environment, we consider the nonlinear characteristic of the magnetic core material and supply current through a 3-phase distribution panel used in industry. The effectiveness of the proposed design methodology is verified by experiments in a power distribution panel; the harvester produces 60.9 V from a power line current of 60 A at 60 Hz.

  13. Adversarial Geospatial Abduction Problems

    DTIC Science & Technology

    2011-01-01

    [Abstract garbled in extraction. Recoverable fragments: #GCD is shown to be #P-complete; there is no fully-polynomial random approximation scheme for #GCD unless NP equals a stated class (truncated in the source); a set L∗ is used to form δ-core constraints for finding a δ-core optimal explanation. The remaining fragments are pseudocode.]

  14. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Different from a traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism, which can decrease both the thread scheduling overhead and the number of thread stacks, is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized, but the energy cost is also optimized in LiveOS, and this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% compared to a single-core WSN system. Memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make the multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  15. Parallel Monotonic Basin Hopping for Low Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    McCarty, Steven L.; McGuire, Melissa L.

    2018-01-01

    Monotonic Basin Hopping has been shown to be an effective method of solving low thrust trajectory optimization problems. This paper outlines an extension to the common serial implementation by parallelizing it over any number of available compute cores. The Parallel Monotonic Basin Hopping algorithm described herein is shown to be an effective way to more quickly locate feasible solutions and improve locally optimal solutions in an automated way, without requiring a feasible initial guess. The increased speed achieved through parallelization enables the algorithm to be applied to more complex problems that would otherwise be impractical for a serial implementation. Low thrust cislunar transfers and a hybrid Mars example case demonstrate the effectiveness of the algorithm. Finally, a preliminary scaling study quantifies the expected decrease in solve time compared to a serial implementation.
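
    The parallel scheme can be sketched as follows: several workers independently perturb the incumbent and run a local optimization, and only improving results are accepted (the monotonic rule). The objective below is a generic multimodal test function, not a low-thrust trajectory model, and the local solver choice is an assumption.

    ```python
    # Parallel monotonic basin hopping sketch: workers perturb the incumbent,
    # run a local optimisation, and only improvements are accepted.
    import numpy as np
    from multiprocessing import Pool
    from scipy.optimize import minimize

    def objective(x):
        return np.sum(x ** 2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x)))  # multimodal test

    def hop(args):
        x_incumbent, seed = args
        rng = np.random.default_rng(seed)
        x0 = x_incumbent + rng.normal(scale=0.5, size=x_incumbent.size)  # perturb
        res = minimize(objective, x0, method="L-BFGS-B")                  # local solve
        return res.fun, res.x

    if __name__ == "__main__":
        x_best = np.random.default_rng(0).uniform(-4, 4, size=4)
        f_best = objective(x_best)
        with Pool(4) as pool:
            for outer in range(25):
                hops = pool.map(hop, [(x_best, outer * 100 + k) for k in range(4)])
                f_new, x_new = min(hops, key=lambda h: h[0])
                if f_new < f_best:                     # monotonic acceptance
                    f_best, x_best = f_new, x_new
        print("best value:", round(float(f_best), 6), "at", np.round(x_best, 3))
    ```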

  16. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    PubMed

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.

  17. Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model

    NASA Astrophysics Data System (ADS)

    Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John

    The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time-scales while maintaining atomic scale resolution. However, the governing equation of the XPFC model is an integral-partial-differential-equation (IPDE), which poses challenges in implementation onto high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. The performance benchmarking on the Stampede supercomputer indicates near linear strong and weak scaling for both multigrid and transfer time between multigrid and FFT modules up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to achieve 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL Many-Core Architecture. This optimizes the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.

  18. Exploring biorthonormal transformations of pair-correlation functions in atomic structure variational calculations

    NASA Astrophysics Data System (ADS)

    Verdebout, S.; Jönsson, P.; Gaigalas, G.; Godefroid, M.; Froese Fischer, C.

    2010-04-01

    Multiconfiguration expansions frequently target valence correlation and correlation between valence electrons and the outermost core electrons. Correlation within the core is often neglected. A large orbital basis is needed to saturate both the valence and core-valence correlation effects. This in turn leads to huge numbers of configuration state functions (CSFs), many of which are unimportant. To avoid the problems inherent to the use of a single common orthonormal orbital basis for all correlation effects in the multiconfiguration Hartree-Fock (MCHF) method, we propose to optimize independent MCHF pair-correlation functions (PCFs), bringing their own orthonormal one-electron basis. Each PCF is generated by allowing single- and double-excitations from a multireference (MR) function. This computational scheme has the advantage of using targeted and optimally localized orbital sets for each PCF. These pair-correlation functions are coupled together and with each component of the MR space through a low dimension generalized eigenvalue problem. Nonorthogonal orbital sets being involved, the interaction and overlap matrices are built using biorthonormal transformation of the coupled basis sets followed by a counter-transformation of the PCF expansions. Applied to the ground state of beryllium, the new method gives total energies that are lower than the ones from traditional complete active space (CAS)-MCHF calculations using large orbital active sets. It is fair to say that we now have the possibility to account for, in a balanced way, correlation deep down in the atomic core in variational calculations.

  19. Developing a Shuffled Complex-Self Adaptive Hybrid Evolution (SC-SAHEL) Framework for Water Resources Management and Water-Energy System Optimization

    NASA Astrophysics Data System (ADS)

    Rahnamay Naeini, M.; Sadegh, M.; AghaKouchak, A.; Hsu, K. L.; Sorooshian, S.; Yang, T.

    2017-12-01

    Meta-heuristic optimization algorithms have gained a great deal of attention in a wide variety of fields. The simplicity and flexibility of these algorithms, along with their robustness, make them attractive tools for solving optimization problems. Different optimization methods, however, have algorithm-specific strengths and limitations. The performance of each individual algorithm obeys the "No-Free-Lunch" theorem, which means that no single algorithm can consistently outperform the others across all possible optimization problems. From the user's perspective, it is a tedious process to compare, validate, and select the best-performing algorithm for a specific problem or a set of test cases. In this study, we introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme and allows users to select the most suitable algorithm tailored to the problem at hand. The concept of SC-SAHEL is to execute different EAs as separate parallel search cores and let all participating EAs compete during the course of the search. The newly developed SC-SAHEL algorithm is designed to automatically select the best-performing algorithm for the given optimization problem. This algorithm is rigorously effective in finding the global optimum for several strenuous benchmark test functions, and computationally efficient compared to individual EAs. We benchmark the proposed SC-SAHEL algorithm over 29 conceptual test functions and two real-world case studies: one hydropower reservoir model and one hydrological model (SAC-SMA). Results show that the proposed framework outperforms individual EAs in an absolute majority of the test problems, and can provide results competitive with the fittest EA while giving more comprehensive information during the search. The proposed framework is also flexible for merging additional EAs, boundary-handling techniques, and sampling schemes, and has good potential to be used in the optimal operation and management of water-energy systems.

  20. Optimization of the Brillouin operator on the KNL architecture

    NASA Astrophysics Data System (ADS)

    Dürr, Stephan

    2018-03-01

    Experiences with optimizing the matrix-times-vector application of the Brillouin operator on the Intel KNL processor are reported. Without adjustments to the memory layout, performance figures of 360 Gflop/s in single and 270 Gflop/s in double precision are observed. This is with Nc = 3 colors, Nv = 12 right-hand-sides, Nthr = 256 threads, on lattices of size 32^3 × 64, using exclusively OMP pragmas. Interestingly, the same routine performs quite well on Intel Core i7 architectures, too. Some observations on the much harder Wilson fermion matrix-times-vector optimization problem are added.

  1. Hybrid Optimization Parallel Search PACKage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.

  2. Optimization of Wireless Power Transfer Systems Enhanced by Passive Elements and Metasurfaces

    NASA Astrophysics Data System (ADS)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-10-01

    This paper presents a rigorous optimization technique for wireless power transfer (WPT) systems enhanced by passive elements, ranging from simple reflectors and intermediate relays all the way to general electromagnetic guiding and focusing structures, such as metasurfaces and metamaterials. At its core is a convex semidefinite relaxation formulation of the otherwise nonconvex optimization problem, whose tightness and optimality can be confirmed by a simple test of its solutions. The resulting method is rigorous, versatile, and general -- it does not rely on any assumptions. As shown in various examples, it is able to efficiently and reliably optimize such WPT systems in order to find their physical limitations on performance and optimal operating parameters, and to inspect their working principles, even for a large number of active transmitters and passive elements.

  3. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology

    PubMed Central

    Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.

    2017-01-01

    Background We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving (by more than 60%) the performance with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). Conclusions These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences, such as metabolic engineering, synthetic biology, and drug scheduling. PMID:28813442

  4. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology.

    PubMed

    Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R

    2017-01-01

    We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving (by more than 60%) the performance with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling.

  5. SN-38 loading capacity of hydrophobic polymer blend nanoparticles: formulation, optimization and efficacy evaluation.

    PubMed

    Dimchevska, Simona; Geskovski, Nikola; Petruševski, Gjorgji; Chacorovska, Marina; Popeski-Dimovski, Riste; Ugarkovic, Sonja; Goracinova, Katerina

    2017-03-01

    One of the most important problems in the nanoencapsulation of extremely hydrophobic drugs is poor drug loading due to rapid drug crystallization outside the polymer core. The effort to use nanoprecipitation, as a simple one-step procedure with good reproducibility, together with FDA-approved polymers such as poly(lactic-co-glycolic acid) (PLGA) and polycaprolactone (PCL), only aggravates this issue. Considering that drug loading is one of the key defining characteristics, in this study we attempted to examine whether a nanoparticle (NP) core composed of two hydrophobic polymers provides increased drug loading for 7-Ethyl-10-hydroxy-camptothecin (SN-38), relative to NPs prepared using the individual polymers. A D-optimal design was applied to optimize the PLGA/PCL ratio in the polymer blend and the mode of addition of the amphiphilic copolymer Lutrol® F127 in order to maximize SN-38 loading and obtain NPs with an acceptable size for passive tumor targeting. Drug/polymer and polymer/polymer interaction analysis pointed to a high degree of compatibility and miscibility between the two hydrophobic polymers, providing a core configuration with higher drug-loading capacity. Toxicity studies outlined the biocompatibility of the blank NPs. Increased in vitro efficacy of drug-loaded NPs compared to the free drug was confirmed by growth inhibition studies using the SW-480 cell line. Additionally, the optimized NP formulation showed a very promising blood circulation profile with an elimination half-life of 7.4 h.

  6. Optimization of Land Use Suitability for Agriculture Using Integrated Geospatial Model and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.

    2012-08-01

    In this study, a geospatial model for land use allocation was developed from the viewpoint of simulating biological autonomous adaptability to the environment and infrastructural preference. The model was based on a multi-agent genetic algorithm and was customized to accommodate the constraint set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve practical multi-objective spatial optimization allocation problems of land use in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability evaluation of the land. The second task was to determine the fitness function for the genetic algorithm. The third task was to optimize the land use map using economic benefits. The results indicated that the proposed model performs much better in solving complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision-making.
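
    To make the genetic-algorithm machinery above concrete, the following is a minimal, self-contained sketch, not the authors' multi-agent model; the suitability scores, quota, population size, and mutation rate are all hypothetical. Each chromosome assigns one of several land-use classes to each parcel, and fitness rewards economic suitability while softly penalizing violations of an assumed per-class area quota.

    ```python
    # Minimal GA sketch for a land-use allocation problem (illustrative assumptions only).
    import random

    N_PARCELS, N_CLASSES = 100, 4        # hypothetical problem size
    random.seed(1)
    # suitability[p][c]: assumed economic score of assigning land-use class c to parcel p
    suitability = [[random.random() for _ in range(N_CLASSES)] for _ in range(N_PARCELS)]
    QUOTA = N_PARCELS // N_CLASSES + 10  # assumed per-class area limit (in parcels)

    def fitness(chromosome):
        score = sum(suitability[p][c] for p, c in enumerate(chromosome))
        for c in range(N_CLASSES):       # soft penalty for exceeding the area quota
            score -= 0.5 * max(0, chromosome.count(c) - QUOTA)
        return score

    def tournament(pop):
        return max(random.sample(pop, 3), key=fitness)

    def crossover(a, b):
        cut = random.randrange(1, N_PARCELS)
        return a[:cut] + b[cut:]

    def mutate(chrom, rate=0.01):
        return [random.randrange(N_CLASSES) if random.random() < rate else c for c in chrom]

    pop = [[random.randrange(N_CLASSES) for _ in range(N_PARCELS)] for _ in range(40)]
    for _ in range(200):                 # generations
        pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(len(pop))]
    best = max(pop, key=fitness)
    print("best fitness:", round(fitness(best), 2))
    ```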

  7. Decision theory, reinforcement learning, and the brain.

    PubMed

    Dayan, Peter; Daw, Nathaniel D

    2008-12-01

    Decision making is a core competence for animals and humans acting and surviving in environments they only partially comprehend, gaining rewards and punishments for their troubles. Decision-theoretic concepts permeate experiments and computational models in ethology, psychology, and neuroscience. Here, we review a well-known, coherent Bayesian approach to decision making, showing how it unifies issues in Markovian decision problems, signal detection psychophysics, sequential sampling, and optimal exploration and discuss paradigmatic psychological and neural examples of each problem. We discuss computational issues concerning what subjects know about their task and how ambitious they are in seeking optimal solutions; we address algorithmic topics concerning model-based and model-free methods for making choices; and we highlight key aspects of the neural implementation of decision making.

  8. Surgery scheduling optimization considering real life constraints and comprehensive operation cost of operating room.

    PubMed

    Xiang, Wei; Li, Chong

    2015-01-01

    The Operating Room (OR) is the core sector of hospital expenditure, and its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both under-utilization and over-utilization. A nested Ant Colony Optimization (nested-ACO) algorithm incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. Ten days of manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. The comparison shows the advantage of the nested-ACO in several measurements: OR-related time, nurse-related time, variation in the resources' working time, and the end time. The nested-ACO, which considers real-life operation constraints such as the difference between the first and following cases, surgery priorities, and fixed nurses in the pre-/post-operative stages, is proposed to solve the surgery scheduling optimization problem. The results clearly show the benefit of using the nested-ACO in enhancing OR management efficiency and minimizing the comprehensive operation cost.
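
    The comprehensive cost notion above combines the price of idle capacity with the price of overtime. A minimal sketch of one such cost model follows; the regular opening time and the idle/overtime rates are assumptions, not the paper's values.

    ```python
    # Toy comprehensive-cost model for one operating room on one day (assumed parameters).
    def or_day_cost(busy_minutes, regular_minutes=480, idle_rate=1.0, overtime_rate=1.5):
        idle = max(0, regular_minutes - busy_minutes)      # under-utilization of regular hours
        overtime = max(0, busy_minutes - regular_minutes)  # over-utilization beyond regular hours
        return idle_rate * idle + overtime_rate * overtime

    print(or_day_cost(430), or_day_cost(520))  # 50.0 and 60.0 under these assumed rates
    ```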

  9. Parallel computation of GA search for the artery shape determinants with CFD

    NASA Astrophysics Data System (ADS)

    Himeno, M.; Noda, S.; Fukasaku, K.; Himeno, R.

    2010-06-01

    We studied which factors play an important role in determining the shape of arteries at the carotid artery bifurcation by performing multi-objective optimization with computational fluid dynamics (CFD) and a genetic algorithm (GA). The most difficult problem in doing so is reducing the turn-around time of the GA optimization, which requires 3D unsteady computation of blood flow. We devised a two-level parallel computation method with the following features: level 1, parallel CFD computation with an appropriate number of cores; level 2, parallel jobs generated by a "master" process, which quickly finds an available job queue and dispatches jobs to reduce turn-around time. As a result, the turn-around time of one GA trial, which would have taken 462 days with one core, was reduced to less than two days on the RIKEN supercomputer system RICC with 8192 cores. We performed a multi-objective optimization to minimize the maximum mean WSS and the sum of circumference for four different shapes and obtained a set of trade-off solutions for each shape. In addition, we found that the carotid bulb exhibits both the minimum local mean WSS and the minimum local radius. We confirmed that our method is effective for examining determinants of artery shapes.

  10. DOMe: A deduplication optimization method for the NewSQL database backups

    PubMed Central

    Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng

    2017-01-01

    Reducing the duplicated data of database backups is an important application scenario for data deduplication technology. NewSQL is an emerging class of database system and is now used more and more widely. NewSQL systems need to improve data reliability by periodically backing up in-memory data, which results in a lot of duplicated data. Traditional deduplication methods are not optimized for the NewSQL server system and cannot take full advantage of hardware resources to optimize deduplication performance. Recent research has pointed out that future NewSQL servers will have thousands of CPU cores, large DRAM, and huge NVRAM. Therefore, how to utilize these hardware resources to optimize the performance of data deduplication is an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To take advantage of the large number of CPU cores in the NewSQL server, DOMe parallelizes the deduplication method based on the fork-join framework. The fingerprint index, the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system and eliminates the fingerprint-index performance bottleneck of traditional deduplication methods. H-Store is used as a typical NewSQL database system to implement the DOMe method. DOMe is analyzed experimentally with two representative backup datasets. The experimental results show that: 1) DOMe can reduce the duplicated NewSQL backup data; 2) DOMe significantly improves deduplication performance by parallelizing the CDC algorithm -- when the theoretical speedup ratio of the server is 20.8, DOMe achieves a speedup of up to 18; and 3) DOMe improves deduplication throughput by 1.5 times through the pure in-memory index optimization. PMID:29049307
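
    The following is a minimal sketch of the two ingredients named above -- content-defined chunking (CDC) and a pure in-memory fingerprint index -- and not the authors' DOMe implementation; the rolling-style hash, boundary mask, and chunk-size limits are toy assumptions. Independent backup streams chunked this way could be processed in parallel (e.g. with a fork-join style task pool), mirroring the parallelization described above.

    ```python
    # Toy content-defined chunking + in-memory fingerprint index (illustrative only).
    import hashlib

    MASK = 0x0FFF                        # a chunk boundary occurs when (hash & MASK) == 0

    def chunks(data: bytes, min_size=1024, max_size=4096):
        """Split data at content-defined boundaries (toy hash, not a real Rabin fingerprint)."""
        start, h = 0, 0
        for i, byte in enumerate(data):
            h = ((h << 1) + byte) & 0xFFFFFFFF
            size = i - start + 1
            if size >= min_size and ((h & MASK) == 0 or size >= max_size):
                yield data[start:i + 1]
                start, h = i + 1, 0
        if start < len(data):
            yield data[start:]

    def deduplicate(data: bytes, index: dict):
        """Store only unseen chunks in the in-memory fingerprint index; return the recipe."""
        recipe = []
        for chunk in chunks(data):
            fp = hashlib.sha1(chunk).hexdigest()
            index.setdefault(fp, chunk)            # fingerprint -> chunk, kept in DRAM
            recipe.append(fp)
        return recipe

    index = {}
    backup1 = bytes(range(256)) * 64                            # first hypothetical backup
    backup2 = backup1[:8192] + b"changed" + backup1[8192:]      # second backup, slightly edited
    deduplicate(backup1, index)
    deduplicate(backup2, index)
    stored = sum(len(c) for c in index.values())
    print(f"raw: {len(backup1) + len(backup2)} bytes, stored after dedup: {stored} bytes")
    ```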

  11. On-the-Job Orientation of Unemployed Negro Skill Center Trainees and Their Supervisors. Final Report.

    ERIC Educational Resources Information Center

    Rosen, Hjalmar

    The problems inherent in employing hard-core unemployed Negroes and the optimal locus of on-the-job orientation to integrate such employees into the work force were the subjects of this study. It focused on young Negro females who, because of their inability to meet selection minimums for job entry, had a high potential for chronic unemployment. Among…

  12. Essays on variational approximation techniques for stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Deride Silva, Julio A.

    This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators, and a problem for creating probabilistic scenarios on renewable energies estimation. In Chapter 7 we re-visited one of the "folk theorems" in statistics, where a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied under the scope of the hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold true, and we provided sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables based on the application of an epi-spline density estimation along with a copula estimation, in order to account for partial correlations between variables.

  13. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent of the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  14. Identification of multi-criteria for supplier selection in IT project outsourcing

    NASA Astrophysics Data System (ADS)

    Fusiripong, Prashaya; Baharom, Fauziah; Yusof, Yuhanis

    2017-10-01

    In the face of increasing global business competitiveness, most organizations attempt to determine suitable external parties to support their core and non-core competencies, particularly in IT project outsourcing. IT supplier selection requires applying multiple criteria, comprising both tangible and intangible criteria, when considering the optimal IT supplier. Most studies have attempted to identify optimal criteria for selecting an IT supplier; however, such criteria cannot be considered common criteria that support the variety of IT outsourcing arrangements. Therefore, this study aimed to identify a common set of criteria to be used across the various types of IT outsourcing. The common criteria are constructed from multi-criteria and success criteria, which were collected through a comprehensive and comparative literature review. Consequently, the researchers are able to identify a common set of criteria to be adopted in the IT outsourcing supplier selection problem.

  15. A mathematical framework for yield (vs. rate) optimization in constraint-based modeling and applications in metabolic engineering.

    PubMed

    Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen

    2018-05-01

    The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
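
    One standard way to carry out the linearization mentioned above is a Charnes-Cooper-type substitution, shown here schematically in our own notation (the paper's construction may differ in details). Writing the yield as a ratio of two linear functions of the flux vector v,

    $$
    \max_{v} \ \frac{c^{\top} v}{d^{\top} v}
    \quad \text{s.t.} \quad S v = 0, \;\; l \le v \le u, \;\; d^{\top} v > 0,
    $$

    the substitution w = t v with t = 1 / (d^T v) yields the higher-dimensional linear program

    $$
    \max_{w,\,t} \ c^{\top} w
    \quad \text{s.t.} \quad S w = 0, \;\; d^{\top} w = 1, \;\; t\, l \le w \le t\, u, \;\; t \ge 0,
    $$

    and any optimal (w*, t*) with t* > 0 gives a yield-optimal flux distribution v* = w* / t*.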

  16. A minimal cost function method for optimizing the age-Depth relation of deep-sea sediment cores

    NASA Astrophysics Data System (ADS)

    Brüggemann, Wolfgang

    1992-08-01

    The question of an optimal age-depth relation for deep-sea sediment cores has been raised frequently. The data from such cores (e.g., δ18O values) are used to test the astronomical theory of ice ages as established by Milankovitch in 1938. In this work, we use a minimal cost function approach to find simultaneously an optimal age-depth relation and a linear model that optimally links solar insolation or other model input with global ice volume. Thus a general tool for the calibration of deep-sea cores to arbitrary tuning targets is presented. In this inverse modeling type approach, an objective function is minimized that penalizes: (1) the deviation of the data from the theoretical linear model (whose transfer function can be computed analytically for a given age-depth relation) and (2) the violation of a set of plausible assumptions about the model, the data and the obtained correction of a first guess age-depth function. These assumptions have been suggested before but are now quantified and incorporated explicitly into the objective function as penalty terms. We formulate an optimization problem that is solved numerically by conjugate gradient type methods. Using this direct approach, we obtain high coherences in the Milankovitch frequency bands (over 90%). Not only the data time series but also the derived correction to a first guess linear age-depth function (and therefore the sedimentation rate) itself contains significant energy in a broad frequency band around 100 kyr. The use of a sedimentation rate which varies continuously on ice age time scales results in a shift of energy from 100 kyr in the original data spectrum to 41, 23, and 19 kyr in the spectrum of the corrected data. However, a large proportion of the data variance remains unexplained, particularly in the 100 kyr frequency band, where there is no significant input by orbital forcing. The presented method is applied to a real sediment core and to the SPECMAP stack, and results are compared with those obtained in earlier investigations.

  17. Adaptive bi-level programming for optimal gene knockouts for targeted overproduction under phenotypic constraints

    PubMed Central

    2013-01-01

    Background Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely used in modern metabolic engineering. The flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies has recently been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. Methods In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock, while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Results Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies. PMID:23368729
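
    Schematically (in our notation, not the paper's), the bi-level structure described above can be written as an outer search over a knockout vector y with at most K knockouts, whose objective is evaluated at the inner MOMA solution:

    $$
    \max_{y \in \{0,1\}^{n},\ \sum_{j}(1 - y_{j}) \le K} \; c_{\text{prod}}^{\top}\, v^{*}(y)
    \qquad \text{where} \qquad
    v^{*}(y) \in \arg\min_{v} \left\{ \|v - v^{\text{wt}}\|_{2}^{2} \;:\; S v = 0,\;\; y_{j}\, l_{j} \le v_{j} \le y_{j}\, u_{j} \right\},
    $$

    with S the stoichiometric matrix, v^wt the wild-type flux distribution, l and u the flux bounds, and c_prod selecting the target product flux; the quadratic inner problem is what the adaptive piecewise linearization approximates.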

  18. Adaptive bi-level programming for optimal gene knockouts for targeted overproduction under phenotypic constraints.

    PubMed

    Ren, Shaogang; Zeng, Bo; Qian, Xiaoning

    2013-01-01

    Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely used in modern metabolic engineering. The flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies has recently been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock, while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies.

  19. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design

    PubMed Central

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-01-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589

  20. A Review on Medical Image Registration as an Optimization Problem

    PubMed Central

    Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin

    2017-01-01

    Objective: In the course of clinical treatment, a physician requires several medical imaging modalities in order to obtain accurate and complete information about a patient. Medical image registration techniques can provide richer diagnostic and treatment information to doctors and serve as a comprehensive reference source for researchers studying image registration as an optimization problem. Methods: The essence of image registration is establishing the spatial association between two or more different images and obtaining the transformation that describes their spatial relationship. For medical image registration, the process is not fixed; its core purpose is finding the transformation between different images. Result: The major steps of image registration include geometric transformation, image combination, similarity measurement, iterative optimization and interpolation. Conclusion: The contribution of this review is to sort related image registration research methods and provide a brief reference for researchers on image registration. PMID:28845149
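
    As a toy illustration of "registration as optimization" (not a method from the review; the synthetic image, the translation-only transform model, and the optimizer choice are all assumptions), the sketch below estimates a 2-D translation by minimizing a mean-squared intensity difference, combining the similarity measure, iterative optimization and interpolation steps listed above.

    ```python
    # Toy registration-as-optimization sketch: recover a 2-D translation by minimizing
    # a mean-squared-difference similarity measure with a general-purpose optimizer.
    import numpy as np
    from scipy.ndimage import shift
    from scipy.optimize import minimize

    # Synthetic smooth "fixed" image (hypothetical data) and a translated "moving" image.
    yy, xx = np.mgrid[0:64, 0:64]
    fixed = np.exp(-(((yy - 30) / 8.0) ** 2 + ((xx - 34) / 12.0) ** 2))
    true_offset = (3.5, -2.0)                                   # assumed ground-truth shift
    moving = shift(fixed, true_offset, order=1, mode="nearest")

    def dissimilarity(params):
        dy, dx = params
        # Resample the moving image with the candidate inverse shift (linear interpolation).
        resampled = shift(moving, (-dy, -dx), order=1, mode="nearest")
        return float(np.mean((fixed - resampled) ** 2))         # similarity measure (MSE)

    result = minimize(dissimilarity, x0=[0.0, 0.0], method="Nelder-Mead")
    print("estimated shift:", np.round(result.x, 2))            # typically close to (3.5, -2.0)
    ```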

  1. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, three-dimensional (3D) transport-based reference solutions are essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for k-eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  2. An Introduction to Kristof's Theorem for Solving Least-Square Optimization Problems Without Calculus.

    PubMed

    Waller, Niels

    2018-01-01

    Kristof's Theorem (Kristof, 1970) describes a matrix trace inequality that can be used to solve a wide class of least-squares optimization problems without calculus. Considering its generality, it is surprising that Kristof's Theorem is rarely used in statistics and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof's (1964, 1970) writings. In this article, I describe the underlying logic of Kristof's Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem's proof. I then show how Kristof's Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, 2017) code to perform the calculations described in the text.

  3. Adaptive building skin structures

    NASA Astrophysics Data System (ADS)

    Del Grosso, A. E.; Basso, P.

    2010-12-01

    The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts such as deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called upon to face; the search for low-energy-consumption or even energy-harvesting green buildings is one such problem. This paper first presents a review of the above problems and technologies, which shows how their solution requires a multidisciplinary approach involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which guides the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.

  4. Scout: high-performance heterogeneous computing made simple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  5. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  6. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
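
    Although the record above does not spell out the formulation, optimization-based non-negative methodologies of this kind typically replace the unconstrained Galerkin solve with a bound-constrained quadratic program over the nodal unknowns c (a schematic statement in our own notation, not necessarily the authors' exact one):

    $$
    \min_{c \in \mathbb{R}^{n}} \; \tfrac{1}{2}\, c^{\top} K c \;-\; c^{\top} f
    \quad \text{subject to} \quad c \ge 0,
    $$

    where K is the symmetric positive-definite stiffness matrix and f the load vector; the solution coincides with the Galerkin one whenever that solution is already non-negative, and large instances of exactly this kind of QP are what bound-constrained optimization solvers such as those in TAO handle in parallel on top of PETSc.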

  7. Safety and core design of large liquid-metal cooled fast breeder reactors

    NASA Astrophysics Data System (ADS)

    Qvist, Staffan Alexander

    In light of the scientific evidence for changes in the climate caused by greenhouse-gas emissions from human activities, the world is in ever more desperate need of new, inexhaustible, safe and clean primary energy sources. A viable solution to this problem is the widespread adoption of nuclear breeder reactor technology. Innovative breeder reactor concepts using liquid-metal coolants such as sodium or lead will be able to utilize the waste produced by the current light water reactor fuel cycle to power the entire world for several centuries to come. Breed & burn (B&B) type fast reactor cores can unlock the energy potential of readily available fertile material such as depleted uranium without the need for chemical reprocessing. Using B&B technology, nuclear waste generation, uranium mining needs and proliferation concerns can be greatly reduced, and after a transitional period, enrichment facilities may no longer be needed. In this dissertation, new passively operating safety systems for fast reactors cores are presented. New analysis and optimization methods for B&B core design have been developed, along with a comprehensive computer code that couples neutronics, thermal-hydraulics and structural mechanics and enables a completely automated and optimized fast reactor core design process. In addition, an experiment that expands the knowledge-base of corrosion issues of lead-based coolants in nuclear reactors was designed and built. The motivation behind the work presented in this thesis is to help facilitate the widespread adoption of safe and efficient fast reactor technology.

  8. A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.

    PubMed

    Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas

    2015-12-01

    Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end to end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
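
    In schematic form (our notation, not necessarily the paper's), the regularized problem over the generalized Stiefel manifold reads

    $$
    \min_{V \in \mathbb{R}^{n \times k}} \; \operatorname{tr}\!\left(V^{\top} A V\right) \;+\; \lambda\, R(V)
    \quad \text{subject to} \quad V^{\top} B V = I_{k},
    $$

    where A and B are the symmetric matrices defining the generalized eigenvalue problem, R is the nonsmooth regularizer encoding the side information (for example an l1 or group-sparsity penalty), and λ ≥ 0 controls its strength; with λ = 0 the minimizer is spanned by the k smallest generalized eigenvectors of (A, B), recovering the classical GEP.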

  9. TOC and TRIZ: using a dual-methodological approach to solve a forest harvesting problem

    Treesearch

    Ian Conradie

    2005-01-01

    Although cut-to-length forest harvesting with harvesters and forwarders is hardly used in some parts of the world, it has many advantages over conventional harvesting systems. Research has shown that the core reason for the low adoption of CTL in the southeastern USA is the complexity of the equipment to optimize value recovery. In this paper we delve deeper into this...

  10. Cooperative Scheduling of Imaging Observation Tasks for High-Altitude Airships Based on Propagation Algorithm

    PubMed Central

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem for imaging observation tasks on high-altitude airships is discussed. A constraint programming model is established by analyzing the main constraints; it takes the maximum task benefit and the minimum cruising distance as its two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, while the solution to the subproblem detects the key nodes that each airship needs to fly through in sequence, yielding the cruising path. First, the task set is divided using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. The paper also describes the implementation of the above algorithm, with a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparative analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  11. Validation of a coupled core-transport, pedestal-structure, current-profile and equilibrium model

    NASA Astrophysics Data System (ADS)

    Meneghini, O.

    2015-11-01

    The first workflow capable of predicting the self-consistent solution to the coupled core-transport, pedestal-structure, and equilibrium problems from first principles, together with its experimental tests, is presented. Validation with DIII-D discharges in high-confinement regimes shows that the workflow can robustly predict the kinetic profiles from the axis to the separatrix and match the experimental measurements to within their uncertainty, with no prior knowledge of the pedestal height nor any measurement of the temperature or pressure. Self-consistent coupling has proven essential to match the experimental results and to capture the non-linear physics that governs the core and pedestal solutions. In particular, stabilization of the pedestal peeling-ballooning instabilities by the global Shafranov shift, destabilization by additional edge bootstrap current, and the subsequent effect on the core plasma profiles have been clearly observed and documented. In our model, self-consistency is achieved by iterating between the TGYRO core transport solver (with NEO and TGLF for neoclassical and turbulent flux) and the pedestal structure predicted by the EPED model. A self-consistent equilibrium is calculated by EFIT, while the ONETWO transport package evolves the current profile and calculates the particle and energy sources. The capabilities of such a workflow are shown to be critical for the design of future experiments such as ITER and FNSF, which operate in a regime where the equilibrium, the pedestal, and the core transport problems are strongly coupled, and for which none of these quantities can be assumed to be known. Self-consistent core-pedestal predictions for ITER, as well as initial optimizations, will be presented. Supported by the US Department of Energy under DE-FC02-04ER54698 and DE-SC0012652.

  12. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

    To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousand of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
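
    As a small, self-contained illustration of Rayleigh quotient iteration (RQI) itself -- applied here to a dense symmetric NumPy matrix, not to the transport eigenvalue problem the record above targets -- each step re-centres the shift at the current Rayleigh quotient and solves a shifted linear system, which is also why preconditioning matters: the shifted operator becomes nearly singular as the iterate converges.

    ```python
    # Minimal Rayleigh quotient iteration sketch on a random symmetric matrix (illustrative only).
    import numpy as np

    rng = np.random.default_rng(42)
    M = rng.random((50, 50))
    A = (M + M.T) / 2                              # symmetric test matrix
    x = rng.random(50)
    x /= np.linalg.norm(x)

    rho = x @ A @ x
    for _ in range(25):
        rho = x @ A @ x                            # Rayleigh quotient = current eigenvalue estimate
        try:
            y = np.linalg.solve(A - rho * np.eye(50), x)   # shifted solve (the costly step)
        except np.linalg.LinAlgError:              # shift hit an eigenvalue exactly: converged
            break
        x = y / np.linalg.norm(y)

    evals = np.linalg.eigvalsh(A)
    print("RQI estimate:", round(float(rho), 8))
    print("nearest true eigenvalue:", round(float(evals[np.argmin(np.abs(evals - rho))]), 8))
    ```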

  13. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]

  14. Dynamic optimization of metabolic networks coupled with gene expression.

    PubMed

    Waldherr, Steffen; Oyarzún, Diego A; Bockmayr, Alexander

    2015-01-21

    The regulation of metabolic activity by tuning enzyme expression levels is crucial to sustain cellular growth in changing environments. Metabolic networks are often studied at steady state using constraint-based models and optimization techniques. However, metabolic adaptations driven by changes in gene expression cannot be analyzed by steady state models, as these do not account for temporal changes in biomass composition. Here we present a dynamic optimization framework that integrates the metabolic network with the dynamics of biomass production and composition. An approximation by a timescale separation leads to a coupled model of quasi-steady state constraints on the metabolic reactions, and differential equations for the substrate concentrations and biomass composition. We propose a dynamic optimization approach to determine reaction fluxes for this model, explicitly taking into account enzyme production costs and enzymatic capacity. In contrast to the established dynamic flux balance analysis, our approach allows predicting dynamic changes in both the metabolic fluxes and the biomass composition during metabolic adaptations. Discretization of the optimization problems leads to a linear program that can be efficiently solved. We applied our algorithm in two case studies: a minimal nutrient uptake network, and an abstraction of core metabolic processes in bacteria. In the minimal model, we show that the optimized uptake rates reproduce the empirical Monod growth for bacterial cultures. For the network of core metabolic processes, the dynamic optimization algorithm predicted commonly observed metabolic adaptations, such as a diauxic switch with a preference ranking for different nutrients, re-utilization of waste products after depletion of the original substrate, and metabolic adaptation to an impending nutrient depletion. These examples illustrate how dynamic adaptations of enzyme expression can be predicted solely from an optimization principle. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. New approach in the evaluation of a fitness program at a worksite.

    PubMed

    Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T

    1999-03-01

    The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach to this evaluation. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented using a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.
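
    A minimal numerical sketch of the "firm's problem" logic above follows; the cubic coefficients, willingness-to-pay, and per-class cost are hypothetical illustrations, not the study's estimated values.

    ```python
    # Toy profit-maximization over the number of exercise classes (all coefficients assumed).
    def health_production(n):
        # assumed cubic form with diminishing (and eventually negative) marginal returns
        return 0.9 * n + 0.05 * n**2 - 0.0015 * n**3

    def profit(n, willingness_to_pay=800.0, cost_per_class=500.0):
        # profit = valuation of the health output minus the cost of running n classes
        return willingness_to_pay * health_production(n) - cost_per_class * n

    best_n = max(range(1, 61), key=profit)
    print("optimal number of classes:", best_n, "profit:", round(profit(best_n), 1))
    ```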

  16. Particle-in-Cell laser-plasma simulation on Xeon Phi coprocessors

    NASA Astrophysics Data System (ADS)

    Surmin, I. A.; Bastrakov, S. I.; Efimenko, E. S.; Gonoskov, A. A.; Korzhimanov, A. V.; Meyerov, I. B.

    2016-05-01

    This paper concerns the development of a high-performance implementation of the Particle-in-Cell method for plasma simulation on Intel Xeon Phi coprocessors. We discuss the suitability of the method for Xeon Phi architecture and present our experience in the porting and optimization of the existing parallel Particle-in-Cell code PICADOR. Direct porting without code modification gives performance on Xeon Phi close to that of an 8-core CPU on a benchmark problem with 50 particles per cell. We demonstrate step-by-step optimization techniques, such as improving data locality, enhancing parallelization efficiency and vectorization leading to an overall 4.2 × speedup on CPU and 7.5 × on Xeon Phi compared to the baseline version. The optimized version achieves 16.9 ns per particle update on an Intel Xeon E5-2660 CPU and 9.3 ns per particle update on an Intel Xeon Phi 5110P. For a real problem of laser ion acceleration in targets with surface grating, where a large number of macroparticles per cell is required, the speedup of Xeon Phi compared to CPU is 1.6 ×.

  17. Integrated fusion simulation with self-consistent core-pedestal coupling

    DOE PAGES

    Meneghini, O.; Snyder, P. B.; Smith, S. P.; ...

    2016-04-20

    In this study, accurate prediction of fusion performance in present and future tokamaks requires taking into account the strong interplay between core transport, pedestal structure, current profile and plasma equilibrium. An integrated modeling workflow capable of calculating the steady-state self-consistent solution to this strongly-coupled problem has been developed. The workflow leverages state-of-the-art components for collisional and turbulent core transport, equilibrium and pedestal stability. Validation against DIII-D discharges shows that the workflow is capable of robustly predicting the kinetic profiles (electron and ion temperature and electron density) from the axis to the separatrix in good agreement with the experiments. An example application is presented, showing self-consistent optimization for the fusion performance of the 15 MA D-T ITER baseline scenario as functions of the pedestal density and ion effective charge Z_eff.

  18. Porous Core-Shell Nanostructures for Catalytic Applications

    NASA Astrophysics Data System (ADS)

    Ewers, Trevor David

    Porous core-shell nanostructures have recently received much attention for their enhanced thermal stability. They show great potential in the field of catalysis, as reactant gases can diffuse in and out of the porous shell while the core particle is protected from sintering, a process in which particles coalesce to form larger particles. Sintering is a large problem in industry and is the primary cause of irreversible deactivation. Despite the obvious advantages of high thermal stability, porous core-shell nanoparticles can be developed to have additional interactive properties from the combination of the core and shell together, rather than just the core particle alone. This dissertation focuses on developing new porous core-shell systems in which both the core and shell take part in catalysis. Two types of systems are explored; (1) yolk-shell nanostructures with reducible oxide shells formed using the Kirkendall effect and (2) ceramic-based porous oxide shells formed using sol-gel chemistry. Of the Kirkendall-based systems, Au FexOy and Cu CoO were synthesized and studied for catalytic applications. Additionally, ZnO was explored as a potential shelling material. Sol-gel work focused on optimizing synthetic methods to allow for coating of small gold particles, which remains a challenge today. Mixed metal oxides were explored as a shelling material to make dual catalysts in which the product of a reaction on the core particle becomes a reactant within the shell.

  19. Competitive repetition suppression (CoRe) clustering: a biologically inspired learning model with application to robust clustering.

    PubMed

    Bacciu, Davide; Starita, Antonina

    2008-11-01

    Determining a compact neural coding for a set of input stimuli is an issue that encompasses several biological memory mechanisms as well as various artificial neural network models. In particular, establishing the optimal network structure is still an open problem when dealing with unsupervised learning models. In this paper, we introduce a novel learning algorithm, named competitive repetition-suppression (CoRe) learning, inspired by a cortical memory mechanism called repetition suppression (RS). We show how such a mechanism is used, at various levels of the cerebral cortex, to generate compact neural representations of the visual stimuli. From the general CoRe learning model, we derive a clustering algorithm, named CoRe clustering, that can automatically estimate the unknown cluster number from the data without using a priori information concerning the input distribution. We illustrate how CoRe clustering, besides its biological plausibility, possesses strong theoretical properties in terms of robustness to noise and outliers, and we provide an error function describing CoRe learning dynamics. Such a description is used to analyze CoRe's relationships with state-of-the-art clustering models and to highlight CoRe's similarity to rival penalized competitive learning (RPCL), showing how CoRe extends such a model by strengthening the rival penalization estimation by means of loss functions from robust statistics.

  20. Performance Evaluation of NWChem Ab-Initio Molecular Dynamics (AIMD) Simulations on the Intel® Xeon Phi™ Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Jacquelin, Mathias; De Jong, Wibe A.

    2017-10-20

    Ab-initio Molecular Dynamics (AIMD) methods are an important class of algorithms, as they enable scientists to understand the chemistry and dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. Many-core architectures such as the Intel® Xeon Phi™ processor are an interesting and promising target for these algorithms, as they can provide the computational power that is needed to solve interesting problems in chemistry. In this paper, we describe the efforts of refactoring the existing AIMD plane-wave method of NWChem from an MPI-only implementation to a scalable, hybrid code that employs MPI and OpenMP to exploit the capabilities of current and future many-core architectures. We describe the optimizations required to get close to optimal performance for the multiplication of the tall-and-skinny matrices that form the core of the computational algorithm. We present strong scaling results on the complete AIMD simulation for a test case that simulates 256 water molecules and that strong-scales well on a cluster of 1024 nodes of Intel Xeon Phi processors. We compare the performance with that obtained on a cluster of dual-socket Intel® Xeon® E5-2698v3 processors.

  1. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    PubMed

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.
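    As a point of reference for the adjoint method mentioned above, the toy sketch below shows why the Jacobian is cheap to assemble once the forward and adjoint solves are done: each entry is a simple product of precomputed fields. It uses a small dense linear model with invented matrices, not an EIT finite element model, and is only meant to make the data flow of the adjoint approach concrete.

```python
# Toy sketch (small dense linear forward model, invented data) of building a
# Jacobian with the adjoint method, the step the paper accelerates on GPUs.
# For a forward model A(p) u = b with measurements m = L u, each Jacobian
# entry dm_i/dp_k equals -w_i^T (dA/dp_k) u, where w_i is an adjoint solution,
# so no extra forward solve is needed per parameter.
import numpy as np

def adjoint_jacobian(A, dA_dp, b, L):
    u = np.linalg.solve(A, b)            # one forward solve
    W = np.linalg.solve(A.T, L.T)        # one adjoint solve per measurement row
    # J[i, k] = -W[:, i] @ dA_dp[k] @ u
    return -np.einsum('ni,knm,m->ik', W, dA_dp, u)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, n_meas, n_par = 20, 4, 6
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    dA_dp = 0.05 * rng.standard_normal((n_par, n, n))   # dA/dp_k for each parameter
    b = rng.standard_normal(n)
    L = rng.standard_normal((n_meas, n))                 # measurement operator
    J = adjoint_jacobian(A, dA_dp, b, L)
    print(J.shape)   # (4, 6): measurements x parameters
```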

  2. Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography

    PubMed Central

    Borsic, A.; Attardo, E. A.; Halter, R. J.

    2012-01-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidth compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on 4 GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 minutes to 14 seconds. We regard this as an important step towards gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for Electrical Impedance Tomography, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the Adjoint Method. PMID:23010857

  3. Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems

    PubMed Central

    Fonseca Guerra, Gabriel A.; Furber, Steve B.

    2017-01-01

    Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map coloring problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart. PMID:29311791
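    The solver described above maps a CSP onto spiking dynamics on SpiNNaker; the sketch below strips that down to an ordinary stochastic local search in Python for a small map-coloring instance, keeping only the two ingredients the abstract highlights, noisy moves and occasional random restarts. The graph, color count and step limits are invented.

```python
# Minimal sketch of noise-driven constraint search on a map-coloring CSP.
# This is plain stochastic local search, not the spiking-network solver of
# the paper; it only illustrates "noise as a computational resource plus
# occasional random restarts".
import random

def solve_coloring(edges, n_nodes, n_colors=3, max_steps=20000, restart_every=2000):
    colors = [random.randrange(n_colors) for _ in range(n_nodes)]
    def conflicts(c):
        return [(u, v) for u, v in edges if c[u] == c[v]]
    for step in range(max_steps):
        bad = conflicts(colors)
        if not bad:
            return colors
        if step % restart_every == restart_every - 1:
            # random restart: leap to a completely new configuration
            colors = [random.randrange(n_colors) for _ in range(n_nodes)]
            continue
        u, v = random.choice(bad)
        node = random.choice((u, v))
        colors[node] = random.randrange(n_colors)      # noisy local move
    return None

if __name__ == "__main__":
    # A small planar-ish graph; three colors suffice.
    edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2), (4, 5)]
    print(solve_coloring(edges, n_nodes=6))
```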

  4. An optimization methodology for heterogeneous minor actinides transmutation

    NASA Astrophysics Data System (ADS)

    Kooyman, Timothée; Buiron, Laurent; Rimpault, Gérald

    2018-04-01

    In the case of a closed fuel cycle, minor actinides transmutation can lead to a strong reduction in spent fuel radiotoxicity and decay heat. In the heterogeneous approach, minor actinides are loaded in dedicated targets located at the core periphery so that long-lived minor actinides undergo fission and are turned into shorter-lived fission products. However, such targets require a specific design process due to high helium production in the fuel, high flux gradient at the core periphery and low power production. Additionally, the targets are generally manufactured with a high content in minor actinides in order to compensate for the low flux level at the core periphery. This leads to negative impacts on the fuel cycle in terms of neutron source and decay heat of the irradiated targets, which penalize their handling and reprocessing. In this paper, a simplified methodology for the design of targets is coupled with a method for the optimization of transmutation which takes into account both transmutation performances and fuel cycle impacts. The uncertainties and performances of this methodology are evaluated and shown to be sufficient to carry out scoping studies. An illustration is then made by considering the use of moderating material in the targets, which has a positive impact on the minor actinides consumption but a negative impact both on fuel cycle constraints (higher decay heat and neutron source) and on assembly design (higher helium production and lower fuel volume fraction). It is shown that the use of moderating material is an optimal solution of the transmutation problem with regard to consumption and fuel cycle impacts, even when taking geometrical design considerations into account.

  5. Genetic algorithms for protein threading.

    PubMed

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

    Despite many years of efforts, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": Identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major advance reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are implemented effectively. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible. But the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof cannot yet be given, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.
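    To make the fixed-length representation concrete, the toy sketch below encodes an alignment as one integer per structure position, repairs invalid genotypes, and runs a plain generational GA. The "energy" is a random matrix, not a threading potential, and all sizes and rates are invented; it only illustrates why a fixed-length string makes crossover, mutation and validation simple.

```python
# Toy sketch of a GA over a fixed-length alignment genotype, in the spirit of
# the representation described above.  The energy table is random, so this is
# not a real threading potential.
import random

L_STRUCT, L_SEQ = 8, 20          # structure positions, sequence length (toy sizes)
random.seed(0)
ENERGY = [[random.uniform(-1, 1) for _ in range(L_SEQ)] for _ in range(L_STRUCT)]

def repair(genome):
    """Force a valid, strictly increasing alignment of structure slots to residues."""
    g = sorted(set(genome))
    while len(g) < L_STRUCT:
        g.append(random.choice([i for i in range(L_SEQ) if i not in g]))
        g = sorted(set(g))
    return g[:L_STRUCT]

def fitness(genome):
    # Lower total energy means higher fitness.
    return -sum(ENERGY[pos][res] for pos, res in enumerate(genome))

def crossover(a, b):
    cut = random.randrange(1, L_STRUCT)
    return repair(a[:cut] + b[cut:])

def mutate(genome, rate=0.2):
    return repair([random.randrange(L_SEQ) if random.random() < rate else x for x in genome])

population = [repair([random.randrange(L_SEQ) for _ in range(L_STRUCT)]) for _ in range(40)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # elitist selection
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(30)]
print("best alignment:", max(population, key=fitness))
```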

  6. Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores

    PubMed Central

    Kim, Youngmin; Lee, Chan-Gun

    2017-01-01

    In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost. The use of limited-energy batteries is closely tied to the lifetime of the sensor nodes in a wireless sensor network. Efficient energy management is important for extending the lifetime of the sensor nodes. Most effort for improving power efficiency in tiny sensor nodes has focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multi-cores strongly requires attention to be given to the problem of reducing power consumption in multi-cores. In this paper, we propose an energy-efficient scheduling method for sensor nodes with a uniform multi-core processor. We extend T-Ler plane based scheduling for globally optimal scheduling of uniform multi-cores and multi-processors to enable power management using dynamic power management (DPM). In the proposed approach, a processor selection and task-to-processor mapping method is proposed to efficiently utilize dynamic power management. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
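    For readers who want to reproduce this kind of comparison on a small scale, the sketch below models a toy LP in Python with the PuLP layer and hands it to an open-source backend. GLPK is one of the solvers tested in the report and can be selected via pulp.GLPK_CMD if the glpsol binary is installed; the bundled CBC solver is used here so the snippet runs out of the box. The toy objective and constraints are invented.

```python
# Small illustration (not from the report) of driving an open-source LP solver
# from Python via the PuLP modelling layer.
import pulp

prob = pulp.LpProblem("toy_lp", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)
prob += 3 * x + 2 * y                  # objective
prob += 2 * x + y <= 18                # resource constraint 1
prob += x + 3 * y <= 24                # resource constraint 2

solver = pulp.PULP_CBC_CMD(msg=False)  # swap in pulp.GLPK_CMD(msg=False) to use GLPK
prob.solve(solver)
print(pulp.LpStatus[prob.status], pulp.value(x), pulp.value(y), pulp.value(prob.objective))
```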

  8. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
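    The outer loop of the synthesis method, a differential evolution search over bounded design variables, can be sketched independently of the finite element solver. The example below uses SciPy's differential_evolution with an invented analytic stand-in for the FE objective and a simple weighted-sum scalarization of two criteria; the real problem is multi-objective and evaluates the electromagnetic FE model instead.

```python
# Sketch of the outer design-optimization loop only: SciPy's differential
# evolution driving a cheap stand-in objective (not the FE machine model).
import numpy as np
from scipy.optimize import differential_evolution

def machine_objective(design, w_loss=0.7, w_cost=0.3):
    slot_depth, magnet_width, airgap = design
    loss = (1.0 / airgap) + 0.5 * slot_depth ** 2     # stand-in for FE-computed losses
    cost = 2.0 * magnet_width + slot_depth            # stand-in for material cost
    return w_loss * loss + w_cost * cost              # weighted-sum scalarization

bounds = [(0.5, 3.0),    # slot depth
          (0.2, 2.0),    # magnet width
          (0.05, 0.5)]   # air gap
result = differential_evolution(machine_objective, bounds, seed=1, maxiter=200, tol=1e-8)
print(result.x, result.fun)
```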

  9. Optimization benefits analysis in production process of fabrication components

    NASA Astrophysics Data System (ADS)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal product mix is important. The main problem at the parts and service department of PT. United Tractors Pandu Engineering (PT. UTPE) is optimizing the combination of fabrication component products (liner plates), which influences the profit obtained by the company. A liner plate is a fabrication component that protects the core structure of heavy-duty attachments such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. Liner plate sales from January to December 2016 fluctuated, and no direct conclusion could be drawn about the optimal production of these fabrication components. The optimal product combination can be achieved by calculating and plotting the production outputs and inputs appropriately. The method used in this study is linear programming, with primal, dual, and sensitivity analysis performed in QM software for Windows to obtain the optimal mix of fabrication components. With the optimal combination of components, PT. UTPE obtains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a total production of 71 units across product variants per month.
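    A product-mix LP of the kind described here can be written in a few lines; the sketch below uses SciPy's linprog with invented profits and capacities (not PT. UTPE data) and reads back the dual values that sensitivity analysis is based on. Signs follow SciPy's minimization convention, so the profit objective is negated.

```python
# Illustrative product-mix LP (numbers invented, not PT. UTPE data).
import numpy as np
from scipy.optimize import linprog

profit = np.array([120.0, 150.0, 90.0])          # profit per liner-plate variant (maximize)
A_ub = np.array([[2.0, 3.0, 1.0],                # cutting hours per unit
                 [1.0, 2.0, 2.0],                # welding hours per unit
                 [4.0, 3.0, 2.0]])               # steel sheets per unit
b_ub = np.array([240.0, 180.0, 400.0])           # monthly capacities

res = linprog(c=-profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("units per product:", res.x)
print("max profit:", -res.fun)
# Dual values (Lagrange multipliers) of the capacity constraints, reported by
# the HiGHS backend; their signs follow the minimization convention.
print("constraint duals:", res.ineqlin.marginals)
```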

  10. Synthesis of carbon core–shell pore structures and their performance as supercapacitors

    DOE PAGES

    Ariyanto, Teguh; Dyatkin, Boris; Zhang, Gui-Rong; ...

    2015-07-15

    High-power supercapacitors require excellent electrolyte mobility within the pore network and high electrical conductivity for maximum capacitance and efficiency. Achieving high power typically requires sacrificing energy densities, as the latter demands a high specific surface area and narrow porosity that impedes ion transport. Here, we present a novel solution for this optimization problem: a nanostructured core–shell carbonaceous material that exhibits a microporous carbon core surrounded by a mesoporous, graphitic shell. The tunable synthesis parameters yielded a structure that features either a sharp or a gradual transition between the core and shell sections. Electrochemical supercapacitor testing using organic electrolyte revealed that these novel core–shell materials outperform carbons with homogeneous pore structures. The hybrid core–shell materials showed a combination of good capacitance retention, typical for the carbon present in the shell, and high specific capacitance, typical for the core material. These materials achieved power densities in excess of 40 kW kg⁻¹ at energy densities reaching 27 Wh kg⁻¹.

  11. Optimization of GRIN lenses coupling system for twin-core fiber interconnection with single core fibers

    NASA Astrophysics Data System (ADS)

    Chen, Gongdai; Deng, Hongchang; Yuan, Libo

    2018-07-01

    Aiming at a more compact, flexible, and simpler core-to-fiber coupling approach, we demonstrate optimal combinations of two graded refractive index (GRIN) lenses for the interconnection between a twin-core single-mode fiber and two single-core single-mode fibers. The optimal two-lens combinations achieve an efficient core-to-fiber separating coupling and allow the fibers and lenses to be coaxially assembled. Finally, axial deviations and transverse displacements of the components are discussed, and the latter increases the coupling loss more significantly. The gap length between the two lenses is designed to be fine-tuned to compensate for the transverse displacement, and the good linear compensation relationship contributes to the device manufacturing. This approach has potential applications in low coupling loss and low crosstalk devices without sophisticated alignment and adjustment, and enables channel separation for multicore fibers.

  12. Wavenumber-extended high-order oscillation control finite volume schemes for multi-dimensional aeroacoustic computations

    NASA Astrophysics Data System (ADS)

    Kim, Sungtae; Lee, Soogab; Kim, Kyu Hong

    2008-04-01

    A new numerical method toward accurate and efficient aeroacoustic computations of multi-dimensional compressible flows has been developed. The core idea of the developed scheme is to unite the advantages of the wavenumber-extended optimized scheme and the M-AUSMPW+/MLP schemes by predicting a physical distribution of flow variables more accurately in multiple space dimensions. The wavenumber-extended optimization procedure for the finite volume approach, based on the conservative requirement, is newly proposed for accuracy enhancement, which is required to capture the acoustic portion of the solution in the smooth region. Furthermore, a new mechanism for distinguishing between continuous and discontinuous regions, based on the Gibbs phenomenon at discontinuities, is introduced to eliminate excessive numerical dissipation in the continuous region by restricting the application of MLP according to the decision of the distinguishing function. To investigate the effectiveness of the developed method, a sequence of benchmark simulations, such as spherical wave propagation, nonlinear wave propagation, the shock tube problem and a vortex preservation test problem, is executed. Also, through more realistic shock-vortex interaction and muzzle blast flow problems, the utility of the new method for aeroacoustic applications is verified by comparing with previous numerical and experimental results.

  13. Using primary care electronic health record data for comparative effectiveness research: experience of data quality assessment and preprocessing in The Netherlands.

    PubMed

    Huang, Yunyu; Voorham, Jaco; Haaijer-Ruskamp, Flora M

    2016-07-01

    Details of data quality and how quality issues were solved have not been reported in published comparative effectiveness studies using electronic health record data. We developed a conceptual framework of data quality assessment and preprocessing and applied it to a study comparing angiotensin-converting enzyme inhibitors with angiotensin receptor blockers on renal function decline in diabetes patients. The framework establishes a line of thought to identify and act on data issues. The core concept is to evaluate whether data are fit-for-use for research tasks. Possible quality problems are listed through specific signal detections and verified to determine whether they are true problems. Optimal solutions are selected for the identified problems. This framework can be used in observational studies to improve the validity of results.

  14. Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python

    USGS Publications Warehouse

    Laura, Jason R.; Rey, Sergio J.

    2017-01-01

    Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
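    One of the patterns discussed in this line of work, farming conditional permutations of a spatial statistic out to several CPU cores, is easy to sketch with the standard library. The example below is not PySAL code; the "statistic" is a stand-in join count along a chain, and the process count and permutation numbers are invented.

```python
# Minimal sketch of parallel Monte Carlo permutations of a spatial statistic
# using multiprocessing; the statistic here is a toy stand-in, not a real
# spatial autocorrelation measure.
import numpy as np
from multiprocessing import Pool

VALUES = None  # set in each worker via the initializer

def _init(values):
    global VALUES
    VALUES = values

def one_permutation(seed):
    """One permutation of a toy join-count-like statistic along a chain."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(VALUES)
    return float(np.sum(perm[:-1] == perm[1:]))

def permutation_test(values, n_perm=999, processes=4):
    with Pool(processes, initializer=_init, initargs=(values,)) as pool:
        sims = pool.map(one_permutation, range(n_perm))
    return np.array(sims)

if __name__ == "__main__":
    values = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 1])
    observed = float(np.sum(values[:-1] == values[1:]))
    sims = permutation_test(values)
    p = (np.sum(sims >= observed) + 1) / (len(sims) + 1)
    print("pseudo p-value:", p)
```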

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trędak, Przemysław, E-mail: przemyslaw.tredak@fuw.edu.pl; Rudnicki, Witold R.; Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, ul. Pawińskiego 5a, 02-106 Warsaw

    The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of multi-body potentials. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems. It is compared to a highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in forces computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  16. Leveraging human decision making through the optimal management of centralized resources

    NASA Astrophysics Data System (ADS)

    Hyden, Paul; McGrath, Richard G.

    2016-05-01

    Combining results from mixed integer optimization, stochastic modeling and queuing theory, we will advance the interdisciplinary problem of efficiently and effectively allocating centrally managed resources. Academia currently fails to address this, as the esoteric demands of each of these large research areas limit work across traditional boundaries. The commercial space does not currently address these challenges due to the absence of a profit metric. By constructing algorithms that explicitly use inputs across boundaries, we are able to incorporate the advantages of using human decision makers. Key improvements in the underlying algorithms are made possible by aligning decision maker goals with the feedback loops introduced between the core optimization step and the modeling of the overall stochastic process of supply and demand. A key observation is that human decision-makers must be explicitly included in the analysis for these approaches to be ultimately successful. Transformative access gives warfighters and mission owners greater understanding of global needs and allows for relationships to guide optimal resource allocation decisions. Mastery of demand processes and optimization bottlenecks reveals long-term maximum marginal utility gaps in capabilities.

  17. Rapid indirect trajectory optimization on highly parallel computing architectures

    NASA Astrophysics Data System (ADS)

    Antony, Thomas

    Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method in which complex trajectory solutions are obtained by solving a sequence of progressively difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPU), which were originally developed for 3D graphics rendering have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical long range weapon system. The techniques used to construct an initial guess from an analytic near-ballistic trajectory and the methods used to formulate the necessary conditions of optimality in a manner that is transparent to the designer are discussed. Various hypothetical mission scenarios that enforce different combinations of initial, terminal, interior point and path constraints demonstrate the rapid construction of complex trajectories without requiring any a-priori insight into the structure of the solutions. Trajectory problems of this kind were previously considered impractical to solve using indirect methods. The performance of the GPU-accelerated solver is found to be 2x--4x faster than MATLAB's bvp4c, even while running on GPU hardware that is five years behind the state-of-the-art.
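    The indirect approach solves the necessary conditions of optimality as a boundary value problem in states and costates. The sketch below does this for a classic minimum-energy double-integrator transfer using SciPy's solve_bvp rather than GPU multiple shooting; in the methodology above, a continuation loop would repeatedly re-solve such a problem while gradually tightening it toward the full constrained trajectory.

```python
# Indirect-method toy problem (not the GPU solver of the thesis): minimize the
# integral of u^2/2 for a double integrator x1' = x2, x2' = u, steering from
# (0, 0) to (1, 0) in unit time.  The optimal control is u = -lam2, and the
# costates satisfy lam1' = 0, lam2' = -lam1.
import numpy as np
from scipy.integrate import solve_bvp

def odes(t, y):
    x1, x2, lam1, lam2 = y
    u = -lam2                      # stationarity: dH/du = u + lam2 = 0
    return np.vstack([x2, u, np.zeros_like(lam1), -lam1])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
y0 = np.zeros((4, t.size))         # crude initial guess
sol = solve_bvp(odes, bc, t, y0)
print(sol.status, sol.y[0, -1], sol.y[1, -1])   # should reach x1 = 1, x2 = 0
```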

  18. Accelerating 3D Elastic Wave Equations on Knights Landing based Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Sourouri, Mohammed; Birger Raknes, Espen

    2017-04-01

    In advanced imaging methods like reverse-time migration (RTM) and full waveform inversion (FWI), the elastic wave equation (EWE) is numerically solved many times to create the seismic image or the elastic parameter model update. Thus, it is essential to optimize the solution time for solving the EWE, as this will have a major impact on the total computational cost in running RTM or FWI. From a computational point of view, applications implementing EWEs are associated with two major challenges. The first challenge is the amount of memory-bound computations involved, while the second challenge is the execution of such computations over very large datasets. So far, multi-core processors have not been able to tackle these two challenges, which eventually led to the adoption of accelerators such as Graphics Processing Units (GPUs). Compared to conventional CPUs, GPUs are densely populated with many floating-point units and fast memory, a type of architecture that has proven to map well to many scientific computations. Despite its architectural advantages, full-scale adoption of accelerators has yet to materialize. First, accelerators require a significant programming effort imposed by programming models such as CUDA or OpenCL. Second, accelerators come with a limited amount of memory, which also requires explicit data transfers between the CPU and the accelerator over the slow PCI bus. The second generation of the Xeon Phi processor, based on the Knights Landing (KNL) architecture, promises the computational capabilities of an accelerator but requires the same programming effort as traditional multi-core processors. The high computational performance is realized through many integrated cores (the number of cores, tiles and memory varies with the model) organized in tiles that are connected via a 2D mesh based interconnect. In contrast to accelerators, KNL is a self-hosted system, meaning explicit data transfers over the PCI bus are no longer required. However, like most accelerators, KNL sports a memory subsystem consisting of low-level caches and 16GB of high-bandwidth MCDRAM memory. For capacity computing, up to 400GB of conventional DDR4 memory is provided. Such a strict hierarchical memory layout means that data locality is imperative if the true potential of this product is to be harnessed. In this work, we study a series of optimizations specifically targeting KNL for our EWE based application to reduce the time to solution for the following 3D model sizes in grid points: 128³, 256³ and 512³. We compare the results with an optimized version for multi-core CPUs running on a dual-socket Xeon E5 2680v3 system using OpenMP. Our initial naive implementation on the KNL is roughly 20% faster than the multi-core version, but by using only one thread per core and careful memory placement using the memkind library, we could achieve higher speedups. Additionally, by using the MCDRAM as cache for problem sizes smaller than 16 GB, further performance improvements were unlocked. Depending on the problem size, our overall results indicate that the KNL based system is approximately 2.2x faster than the 24-core Xeon E5 2680v3 system, with only modest changes to the code.

  19. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The co-processor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers, although getting maximum performance out of the Xeon Phi requires some novel optimization techniques. Those optimization techniques are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.

  20. Passively stabilized 215-W monolithic CW LMA-fiber laser with innovative transversal mode filter

    NASA Astrophysics Data System (ADS)

    Stutzki, Fabian; Jauregui, Cesar; Voigtländer, Christian; Thomas, Jens U.; Limpert, Jens; Nolte, Stefan; Tünnermann, Andreas

    2010-02-01

    We report on the development of a high power monolithic CW fiber oscillator with an output power of 215 W in a 20 μm core diameter few-mode Large Mode Area (LMA) fiber. The key parameters for stable operation are reviewed. With these optimizations, the root mean square of the output power fluctuations can be reduced to less than 0.5% on a timescale of 20 s, which represents an improvement of more than a factor of 5 over a non-optimized fiber laser. With a real-time measurement of the mode content of the fiber laser, it can be shown that the few-mode nature of LMA fibers is the main factor behind the residual instability of our optimized fiber laser. The root of the problem is that Fiber Bragg Gratings (FBGs) written in multimode fibers exhibit a multi-peak reflection spectrum in which each resonance corresponds to a different transversal mode. This reflectivity spectrum stimulates multimode laser operation, which results in power and pointing instabilities due to gain competition between the different transversal modes. To stabilize the temporal and spatial behavior of the laser output, we propose an innovative passive in-fiber transversal mode filter based on a modified FBG Fabry-Perot structure. This structure provides different reflectivities to the different transversal modes according to the transversal distribution of their intensity profile. Furthermore, this structure can be completely written into the active fiber using fs-laser pulses. Moreover, this concept scales very well with the fiber core diameter, which implies that there is no performance loss in fibers with even larger cores. In consequence, this structure is inherently power scalable and can, therefore, be used in kW-level fiber laser systems.

  1. The development of optimal lightweight truss-core sandwich panels

    NASA Astrophysics Data System (ADS)

    Langhorst, Benjamin Robert

    Sandwich structures effectively provide lightweight stiffness and strength by sandwiching a low-density core between stiff face sheets. The performance of lightweight truss-core sandwich panels is enhanced through the design of novel truss arrangements and the development of methods by which the panels may be optimized. An introduction to sandwich panels is presented along with an overview of previous research of truss-core sandwich panels. Three alternative truss arrangements are developed and their corresponding advantages, disadvantages, and optimization routines are discussed. Finally, performance is investigated by theoretical and numerical methods, and it is shown that the relative structural efficiency of alternative truss cores varies with panel weight and load-carrying capacity. Discrete truss core sandwich panels can be designed to serve bending applications more efficiently than traditional pyramidal truss arrangements at low panel weights and load capacities. Additionally, discrete-truss cores permit the design of heterogeneous cores, which feature unit cells that vary in geometry throughout the panel according to the internal loads present at each unit cell's location. A discrete-truss core panel may be selectively strengthened to more efficiently support bending loads. Future research is proposed and additional areas for lightweight sandwich panel development are explained.

  2. Methods for optimizing the 2D reflector parameters in a pressurized water reactor (Méthodes d'optimisation des paramètres 2D du réflecteur dans un réacteur à eau pressurisée)

    NASA Astrophysics Data System (ADS)

    Clerc, Thomas

    With a third of the reactors in activity, the Pressurized Water Reactor (PWR) is today the most used reactor design in the world. This technology equips all 19 EDF power plants. PWRs fit into the category of thermal reactors, because it is mainly the thermal neutrons that contribute to the fission reaction. The pressurized light water is used both as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium, slightly enriched in uranium-235. The reflector is a region surrounding the active core, containing mostly water and stainless steel. The purpose of the reflector is to protect the vessel from radiation, and also to slow down the neutrons and reflect them back into the core. Given that the neutrons sustain the fission reaction, the study of their behavior within the core is essential to understanding how the reactor works. The neutron behavior is governed by the transport equation, which is very complex to solve numerically and requires very long calculations. This is the reason why the core codes used in this study solve simplified equations to approximate the neutron behavior in the core within an acceptable calculation time. In particular, we focus our study on the diffusion equation and approximated transport equations, such as the SPN or SN equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes an important tilt in the neutron flux at the core/reflector interface. This is why it is very important to accurately design the reflector, in order to precisely recover the neutron behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized into two energy groups and if the diffusion equation is used; it leads to the calculation of a homogeneous reflector. The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors, with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both schemes is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution obtained with an APOLLO2 calculation based on the Method Of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector for diffusion calculations, and the P-1 corrected macroscopic total cross-sections in each zone of the reflector for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed, always by optimizing the fast diffusion coefficient for each zone of the reflector. All the tools of data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined into six separate zones, corresponding to the physical structure of the reflector. There are then six control variables for the optimization algorithms. Our computational schemes are thus able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators.
    The optimization performed reduces the discrepancies between the power distribution computed with the core codes and the reference power. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not allow a proper description of its physical structure near the core/reflector interface; second, the fissile assemblies are modeled in an infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies.
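    The core of both computational schemes is a least-squares fit: adjust the reflector coefficients until the computed power distribution matches the reference one. The sketch below reproduces only that loop, with a deliberately crude one-dimensional cosine power model standing in for the core code and the APOLLO2 reference; the two "reflector coefficients", the shape model and all numbers are invented.

```python
# Toy stand-in for the optimization loop described above: tune two reflector
# "diffusion coefficients" so that a cheap 1-D power-shape model matches a
# reference distribution.  Nothing here reproduces the actual core codes.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-1.0, 1.0, 41)                    # normalized core positions

def power_shape(d_left, d_right):
    """Cosine flux shape whose width and tilt depend on the two coefficients."""
    width = 2.0 + 0.5 * (d_left + d_right)
    shift = 0.2 * (d_right - d_left)
    p = np.cos(np.pi * (x - shift) / width)
    return p / p.sum()                             # normalize like a power distribution

reference = power_shape(1.3, 0.9)                  # pretend this came from the reference code

def residuals(d):
    return power_shape(d[0], d[1]) - reference

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([0.1, 0.1], [5.0, 5.0]))
print("recovered coefficients:", fit.x)            # should approach (1.3, 0.9)
```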

  3. Shared protection based virtual network mapping in space division multiplexing optical networks

    NASA Astrophysics Data System (ADS)

    Zhang, Huibin; Wang, Wei; Zhao, Yongli; Zhang, Jie

    2018-05-01

    Space Division Multiplexing (SDM) has been introduced to improve the capacity of optical networks. In SDM optical networks, there are multiple cores/modes in each fiber link, and spectrum resources are multiplexed in both the frequency and core/mode dimensions. Enabled by network virtualization technology, one SDM optical network substrate can be shared by several virtual network operators. Similar to point-to-point connection services, virtual networks (VNs) also need a certain survivability to guard against network failures. Based on customers' heterogeneous requirements on the survivability of their virtual networks, this paper studies the shared protection based VN mapping problem and proposes a Minimum Free Frequency Slots (MFFS) mapping algorithm to improve spectrum efficiency. Simulation results show that the proposed algorithm can optimize SDM optical networks significantly in terms of blocking probability and spectrum utilization.

  4. Design of material management system of mining group based on Hadoop

    NASA Astrophysics Data System (ADS)

    Xia, Zhiyuan; Tan, Zhuoying; Qi, Kuan; Li, Wen

    2018-01-01

    Against the background of the current persistent slowdown in the mining market, improving the management level of a mining group has become key to improving the economic benefit of the mine. Based on the practical material management of a mining group, three core components of Hadoop are applied: the distributed file system HDFS, the distributed computing framework Map/Reduce and the distributed database HBase. A material management system for the mining group is constructed with these three Hadoop components and the SSH framework technology. This system strengthens collaboration between the mining group and its affiliated companies; it solves problems of traditional mining material-management systems such as inefficient management, server pressure and hardware performance deficiencies, optimizes the group's materials management, reduces management cost and increases enterprise profit.

  5. Promoting the energy structure optimization around Chinese Beijing-Tianjin area by developing biomass energy

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Sun, Du; Wang, Shi-Yu; Zhao, Feng-Qing

    2017-06-01

    In recent years, remarkable achievements in the utilization of biomass energy have been made in China. However, there are still some problems, such as an irrational industry layout, an immature market survival mechanism and a lack of core competitiveness. On the basis of investigation and research, some recommendations and strategies are proposed for the development of biomass energy around the Chinese Beijing-Tianjin area: scientific planning and precise layout of the biomass industry; rationalizing the relationship between government and enterprises and promoting the establishment of a market-oriented survival mechanism; combining the ‘supply side’ with the ‘demand side’ to optimize the product structure; extending the industrial chain to promote industry upgrading and sustainable development; and comprehensively coordinating the various types of biomass resources and extending the product chain to achieve better economic benefits.

  6. Numerical optimization of three-dimensional coils for NSTX-U

    DOE PAGES

    Lazerson, S. A.; Park, J. -K.; Logan, N.; ...

    2015-09-03

    A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. In conclusion, comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.

  7. Parallelization of the preconditioned IDR solver for modern multicore computer systems

    NASA Astrophysics Data System (ADS)

    Bessonov, O. A.; Fedoseyev, A. I.

    2012-10-01

    This paper presents the analysis, parallelization and optimization approach for the large sparse matrix solver CNSPACK for modern multicore microprocessors. CNSPACK is an advanced solver successfully used for the coupled solution of stiff problems arising in multiphysics applications such as CFD, semiconductor transport, kinetic and quantum problems. It employs an iterative IDR algorithm with ILU preconditioning (user-chosen ILU preconditioning order). CNSPACK has been successfully used during the last decade for solving problems in several application areas, including fluid dynamics and semiconductor device simulation. However, there has been a dramatic change in processor architectures and computer system organization in recent years. Due to this, performance criteria and methods have been revisited, and the solver and preconditioner have been parallelized using the OpenMP environment. Results of the successful implementation of efficient parallelization are presented for the most advanced computer systems (Intel Core i7-9xx or two-processor Xeon 55xx/56xx).
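    The structure of the solver, a Krylov iteration wrapped around an incomplete-LU preconditioner, can be sketched with SciPy's sparse tools. SciPy does not ship an IDR implementation, so BiCGStab stands in for the IDR iteration below; the 1-D convection-diffusion test matrix and the ILU drop tolerance are invented.

```python
# ILU-preconditioned Krylov solve in SciPy (BiCGStab standing in for IDR,
# which SciPy does not provide).  The test matrix is a toy 1-D
# convection-diffusion stencil, not a CNSPACK problem.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
main = 2.0 * np.ones(n)
lower = -1.2 * np.ones(n - 1)
upper = -0.8 * np.ones(n - 1)
A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)          # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)            # wrap it as a preconditioner

x, info = spla.bicgstab(A, b, M=M, maxiter=200)
print("converged" if info == 0 else f"info={info}", np.linalg.norm(A @ x - b))
```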

  8. Virtual optical network mapping and core allocation in elastic optical networks using multi-core fibers

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-11-01

    Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces some challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes for virtual node mapping, virtual link mapping and core allocation. Simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.

  9. Transmission loss optimization in acoustic sandwich panels

    NASA Astrophysics Data System (ADS)

    Makris, S. E.; Dym, C. L.; MacGregor Smith, J.

    1986-06-01

    Considering the sound transmission loss (TL) of a sandwich panel as the single objective, different optimization techniques are examined and a sophisticated computer program is used to find the optimum TL. Also, for one of the possible case studies, core optimization, closed-form expressions are given relating the TL to the core-design variables for different sets of skins. The significance of these functional relationships lies in the fact that the panel designer can bypass the necessity of using a sophisticated software package in order to assess explicitly the dependence of the TL on core thickness and density.

  10. S-Genius, a universal software platform with versatile inverse problem resolution for scatterometry

    NASA Astrophysics Data System (ADS)

    Fuard, David; Troscompt, Nicolas; El Kalyoubi, Ismael; Soulan, Sébastien; Besacier, Maxime

    2013-05-01

    S-Genius is a new universal scatterometry platform, which gathers all the LTM-CNRS know-how regarding rigorous electromagnetic computation and several inverse problem solver solutions. This software platform is built to be a user-friendly, light, swift, accurate, user-oriented scatterometry tool, compatible with any ellipsometric measurements to fit and any type of pattern. It aims to combine a set of inverse problem solver capabilities — via adapted Levenberg-Marquardt optimization, Kriging, and Neural Network solutions — that greatly improve the reliability and the velocity of the solution determination. Furthermore, as the model solution is mainly vulnerable to material optical properties, S-Genius may be coupled with an innovative determination of material refractive indices. This paper focuses a little more on the modified Levenberg-Marquardt optimization, one of the inverse problem solvers built up in parallel with the overall S-Genius software coding by the author. This modified Levenberg-Marquardt optimization corresponds to a Newton algorithm with a damping parameter adapted to the definition domains of the optimized parameters. Currently, S-Genius is technically ready for scientific collaboration, python-powered, multi-platform (Windows/Linux/macOS), multi-core, ready for 2D (infinite features along the direction perpendicular to the incident plane), conical, and 3D-feature computation, compatible with all kinds of input data from any possible ellipsometers (angle or wavelength resolved) or reflectometers, and widely used in our laboratory for resist trimming studies, etching feature characterization (such as complex stacks) or nano-imprint lithography measurements, for instance. The work on the kriging solver, the neural network solver and the material refractive index determination is done (or about to be) by other LTM members and about to be integrated into the S-Genius platform.

  11. Research on an uplink carrier sense multiple access algorithm of large indoor visible light communication networks based on an optical hard core point process.

    PubMed

    Nan, Zhufen; Chi, Xuefen

    2016-12-20

    The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.

  12. An evaluation of MPI message rate on hybrid-core processors

    DOE PAGES

    Barrett, Brian W.; Brightwell, Ron; Grant, Ryan; ...

    2014-11-01

    Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that may combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way in which compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.

  13. Minimization of the energy loss of nuclear power plants in case of partial in-core monitoring system failure

    NASA Astrophysics Data System (ADS)

    Zagrebaev, A. M.; Ramazanov, R. N.; Lunegova, E. A.

    2017-01-01

    In this paper we consider the problem of minimizing the energy loss of nuclear power plants in the case of a partial in-core monitoring system failure. The options are either to continue reactor operation at reduced power or to fully replace the neutron measurement channels, which requires shutting down the reactor and keeping a stock of detectors. This article examines the reconstruction of the energy release in the core of a nuclear reactor on the basis of the readings of the axial (height) sensors. The missing measurement information can be reconstructed by mathematical methods, so that replacement of the failed sensors can be avoided. It is suggested that a set of ‘natural’ functions be constructed, determined by means of statistical estimates obtained from archival data. The proposed procedure makes it possible to reconstruct the field even with a significant loss of measurement information. Improving the accuracy of the neutron flux density reconstruction under partial loss of measurement information minimizes the stock of necessary components and the associated losses.
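
    The reconstruction idea can be illustrated with a small Python sketch: empirical basis functions are extracted from an archive of historical axial profiles (here a synthetic archive and an SVD stand in for the statistically estimated ‘natural’ functions), and their amplitudes are fitted by least squares to the readings of the sensors that remain in service. All data and dimensions are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        z = np.linspace(0.0, 1.0, 50)                    # axial positions in the channel

        # Stand-in "archive": many historical axial profiles (synthetic shapes here).
        archive = np.array([(1.0 + 0.3 * rng.standard_normal()) * np.sin(np.pi * z)
                            + 0.2 * rng.standard_normal() * np.sin(2 * np.pi * z)
                            for _ in range(200)])

        # Empirical ("natural") basis functions: leading right singular vectors of the archive.
        _, _, vt = np.linalg.svd(archive - archive.mean(axis=0), full_matrices=False)
        basis = vt[:3]                                   # keep the first few modes
        mean = archive.mean(axis=0)

        # A new profile observed only at the sensors that survived the failure.
        true = 1.1 * np.sin(np.pi * z) + 0.15 * np.sin(2 * np.pi * z)
        alive = rng.choice(len(z), size=12, replace=False)   # indices of working sensors

        # Least-squares fit of the mode amplitudes to the surviving readings.
        coeff, *_ = np.linalg.lstsq(basis[:, alive].T, (true - mean)[alive], rcond=None)
        reconstructed = mean + coeff @ basis

        print("max reconstruction error:", np.max(np.abs(reconstructed - true)))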

  14. Analytic energy gradient of projected Hartree-Fock within projection after variation

    NASA Astrophysics Data System (ADS)

    Uejima, Motoyuki; Ten-no, Seiichiro

    2017-03-01

    We develop a geometrical optimization technique for the projection-after-variation (PAV) scheme of the recently refined projected Hartree-Fock (PHF) as a fast alternative to the variation-after-projection (VAP) approach for optimizing the structures of molecules/clusters in symmetry-adapted electronic states at the mean-field computational cost. PHF handles the nondynamic correlation effects by restoring the symmetry of a broken-symmetry single reference wavefunction and moreover enables a black-box treatment of orbital selections. Using HF orbitals instead of PHF orbitals, our approach saves the computational cost for the orbital optimization, avoiding the convergence problem that sometimes emerges in the VAP scheme. We show that PAV-PHF provides geometries comparable to those of the complete active space self-consistent field and VAP-PHF for the tested systems, namely, CH2, O3, and the [Cu2O2]2+ core, where nondynamic correlation is abundant. The proposed approach is useful for large systems mainly dominated by nondynamic correlation to find stable structures in many symmetry-adapted states.

  15. STARS: A general-purpose finite element computer program for analysis of engineering structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  16. Multi channel thermal hydraulic analysis of gas cooled fast reactor using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Drajat, R. Z.; Su'ud, Z.; Soewono, E.; Gunawan, A. Y.

    2012-05-01

    There are three analyses to be performed in the design process of a nuclear reactor, i.e., neutronic analysis, thermal hydraulic analysis and thermodynamic analysis. The focus of this article is the thermal hydraulic analysis, which plays a very important role in system efficiency and the selection of the optimal design. The analysis is performed for a Gas Cooled Fast Reactor (GFR) with helium (He) coolant. The heat from nuclear fission reactions in the reactor is distributed by conduction in the fuel elements and then delivered by heat convection to the fluid flowing in the cooling channels. Temperature changes that occur in the coolant channels cause a pressure drop at the top of the reactor core. The governing equations in each channel consist of the mass balance, momentum balance, energy balance, mass conservation and the ideal gas equation. The problem is reduced to finding the flow rates in each channel such that the pressure drops at the top of the reactor core are all equal. The problem is solved numerically with the genetic algorithm method, and the flow rates and temperature distribution in each channel are obtained.
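
    A toy Python sketch of the numerical idea is given below: a simple genetic algorithm searches for per-channel flow rates, constrained to a fixed total flow, such that the pressure drops (computed here with a made-up quadratic channel model) become equal. The resistance coefficients and GA settings are illustrative assumptions, not the paper's thermal-hydraulic model.

        import numpy as np

        rng = np.random.default_rng(1)
        k = np.array([1.0, 1.4, 0.8, 1.2])        # illustrative per-channel resistance coefficients
        TOTAL = 4.0                                # fixed total flow rate

        def pressure_drop(m):
            return k * m ** 2                      # toy model: dp_i = k_i * m_i^2

        def fitness(m):
            return -np.var(pressure_drop(m))       # equal drops => zero variance => best fitness

        def normalize(pop):
            return TOTAL * pop / pop.sum(axis=1, keepdims=True)   # enforce the total-flow constraint

        pop = normalize(rng.uniform(0.5, 1.5, size=(60, len(k))))
        for _ in range(200):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)][-30:]               # truncation selection
            a = parents[rng.integers(0, 30, 60)]
            b = parents[rng.integers(0, 30, 60)]
            children = 0.5 * (a + b)                              # arithmetic crossover
            children += rng.normal(0.0, 0.02, children.shape)     # mutation
            pop = normalize(np.clip(children, 1e-3, None))

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("flow rates:", best, "pressure drops:", pressure_drop(best))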

  17. Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU

    NASA Astrophysics Data System (ADS)

    Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.

    2016-09-01

    The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming models. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve the difficult synchronization issues that arise in computations of a multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems and is compared to the highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in forces computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  18. First principles of Hamiltonian medicine.

    PubMed

    Crespi, Bernard; Foster, Kevin; Úbeda, Francisco

    2014-05-19

    We introduce the field of Hamiltonian medicine, which centres on the roles of genetic relatedness in human health and disease. Hamiltonian medicine represents the application of basic social-evolution theory, for interactions involving kinship, to core issues in medicine such as pathogens, cancer, optimal growth and mental illness. It encompasses three domains, which involve conflict and cooperation between: (i) microbes or cancer cells, within humans, (ii) genes expressed in humans, (iii) human individuals. A set of six core principles, based on these domains and their interfaces, serves to conceptually organize the field, and contextualize illustrative examples. The primary usefulness of Hamiltonian medicine is that, like Darwinian medicine more generally, it provides novel insights into what data will be productive to collect, to address important clinical and public health problems. Our synthesis of this nascent field is intended predominantly for evolutionary and behavioural biologists who aspire to address questions directly relevant to human health and disease.

  19. Bilayer tablets of Paliperidone for Extended release osmotic drug delivery

    NASA Astrophysics Data System (ADS)

    Chowdary, K. Sunil; Napoleon, A. A.

    2017-11-01

    The purpose of this study is to develop and optimize the formulation of a paliperidone bilayer tablet core and coating that should match the in vitro performance of the trilayered innovator sample Invega. Core formulations prepared with different ratios of Polyox grades were optimized, along with the coating: (i) sub-coating build-up with hydroxyethyl cellulose (HEC) and (ii) enteric coating build-up with cellulose acetate (CA). Important influencing factors, such as different core tablet compositions and different coating solution ingredients involved in the formulation procedure, were investigated. The optimization of the formulation and process was conducted by comparing the different in vitro release behaviours of paliperidone. In vitro dissolution studies compared the innovator sample (Invega) with formulations of different release rates, and the formulation whose release pattern was closest to the innovator over the whole 24 h test was finalized.

  20. Optimization of 200 MWth and 250 MWt Ship Based Small Long Life NPP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitriyani, Dian; Su'ud, Zaki

    2010-06-22

    Design optimization of ship-based 200 MWth and 250 MWt nuclear power reactors has been performed. Neutronic and thermo-hydraulic programs in three-dimensional X-Y-Z geometry have been developed for the analysis of the ship-based nuclear power plant. A quasi-static approach is adopted to treat the seawater effect. The reactors are loop-type lead-bismuth-cooled fast reactors with nitride fuel and with a relatively large coolant pipe above the reactor core; the heat from the primary coolant system is transferred directly to the water-steam loop through steam generators. A square core type is selected and optimized. As a result of the optimization, the core outlet temperature distribution changes with the elevation angle of the reactor system, and the characteristics are discussed.

  1. Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava

    2017-01-01

    For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward the understanding of these processors and the new developments to port the Kalman filter to NVIDIA GPUs.

  2. Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; Masciovecchio, Mario; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2017-08-01

    For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward the understanding of these processors and the new developments to port the Kalman filter to NVIDIA GPUs.

  3. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems, which have only linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a globally optimal solution for the original PDECCO problem.
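
    The flavor of such a QUBO mapping can be shown with a tiny Python example: a linear objective with linear equality constraints over binary controls is folded into a single quadratic matrix via a penalty term, and the resulting QUBO is solved by exhaustive enumeration in place of an adiabatic quantum optimizer. The toy problem and the penalty weight are invented and do not reproduce the paper's PDE-based mapping.

        import itertools
        import numpy as np

        # Toy problem: choose binary controls x that minimize c.x subject to A x = b,
        # encoded as a QUBO by adding a quadratic penalty P * ||A x - b||^2.
        c = np.array([1.0, 2.0, 1.5, 0.5])
        A = np.array([[1, 1, 0, 0],
                      [0, 0, 1, 1]], float)
        b = np.array([1.0, 1.0])
        P = 10.0

        n = len(c)
        # QUBO matrix Q such that the energy is x^T Q x (x binary, so x_i^2 = x_i
        # lets the linear term sit on the diagonal).
        Q = P * (A.T @ A) + np.diag(c - 2.0 * P * (A.T @ b))

        best = min(itertools.product([0, 1], repeat=n),
                   key=lambda x: np.array(x) @ Q @ np.array(x))
        print("best binary controls:", best)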

  4. Research on NGN network control technology

    NASA Astrophysics Data System (ADS)

    Li, WenYao; Zhou, Fang; Wu, JianXue; Li, ZhiGuang

    2004-04-01

    Nowadays NGN (Next Generation Network) is a hot topic for discussion and research in the IT sector. The core NGN technology is network control technology, and the key goal of NGN is to realize network convergence and evolution. With an overlay network model centered on Softswitch technology, convergence of the circuit-switched network and the IP network is realized; with an optical transmission network centered on ASTN/ASON, convergence of the service layer (i.e., the IP layer) and optical transmission is realized. Together with the distributed nature of NGN network control technology, and considering the combination of Softswitch and ASTN/ASON control technology on the NGN platform, the question of whether IP should be the core NGN carrier platform attracts general attention; this is also an end-to-end QoS problem in NGN. Answering it has significant practical implications for equipment development, network deployment, network design and optimization, and especially for the smooth evolution of present networks towards the NGN. This is why this paper puts forward the research topic of NGN network control technology. The paper introduces the basics of NGN network control technology, proposes an NGN network control reference model, and describes a realizable NGN network structure. Based on the above, NGN network control technology is discussed from the viewpoint of function realization, and its working mechanism is analyzed.

  5. Particle swarm optimization of ascent trajectories of multistage launch vehicles

    NASA Astrophysics Data System (ADS)

    Pontani, Mauro

    2014-02-01

    Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied with the purpose of satisfying exactly the path constraint related to the maximum dynamical pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves to be capable of yielding the optimal rocket trajectory with a very satisfactory numerical accuracy.
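
    The core of the method, the three-term velocity update (inertia, cognitive and social terms with stochastic weights), can be sketched in a few lines of Python on a toy objective. The weights, swarm size and test function below are illustrative assumptions, not the ascent-trajectory problem itself.

        import numpy as np

        rng = np.random.default_rng(2)

        def objective(x):                      # toy stand-in for the objective to be minimized
            return np.sum((x - np.array([1.0, -2.0])) ** 2, axis=1)

        n, dim, w, c1, c2 = 40, 2, 0.7, 1.5, 1.5
        pos = rng.uniform(-5, 5, (n, dim))
        vel = np.zeros((n, dim))
        pbest = pos.copy()
        pbest_val = objective(pos)
        gbest = pbest[np.argmin(pbest_val)]

        for _ in range(200):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            # Three-term update: inertia + cognitive (own best) + social (swarm best).
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = objective(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]

        print("best particle:", gbest)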

  6. Preparation, process optimization and characterization of core-shell polyurethane/chitosan nanofibers as a potential platform for bioactive scaffolds.

    PubMed

    Maleknia, Laleh; Dilamian, Mandana; Pilehrood, Mohammad Kazemi; Sadeghi-Aliabadi, Hojjat; Hekmati, Amir Houshang

    2018-06-01

    In this paper, polyurethane (PU), chitosan (Cs)/polyethylene oxide (PEO), and core-shell PU/Cs nanofibers were produced at the optimal processing conditions using the electrospinning technique. Several methods including SEM, TEM, FTIR, XRD, DSC, TGA and image analysis were utilized to characterize these nanofibrous structures. SEM images exhibited that the core-shell PU/Cs nanofibers were spun without any structural imperfections at the optimized processing conditions. TEM images confirmed that the PU/Cs core-shell nanofibers were indeed formed. It seems that the inclusion of Cs/PEO in the shell did not induce significant variations in the crystallinity of the core-shell nanofibers. DSC analysis showed that the inclusion of Cs/PEO significantly increased the glass transition temperature of the composition compared to that of neat PU nanofibers. The thermal degradation of core-shell PU/Cs was similar to that of PU nanofibers, owing to the higher PU concentration compared to the other components. It was hypothesized that the core-shell PU/Cs nanofibers can be used as a potential platform for bioactive scaffolds in tissue engineering. Further biological tests should be conducted to evaluate this platform as a three-dimensional scaffold with the capability of releasing bioactive molecules in a sustained manner.

  7. [Site selection of nature reserve based on the self-learning tabu search algorithm with space-ecology set covering problem: An example from Daiyun Mountain, Southeast China].

    PubMed

    Huang, Jia Hang; Liu, Jin Fu; Lin, Zhi Wei; Zheng, Shi Qun; He, Zhong Sheng; Zhang, Hui Guang; Li, Wen Zhou

    2017-01-01

    Designing nature reserves is an effective approach to protecting biodiversity. Traditional approaches to designing nature reserves could only identify the core area for protecting the species, without specifying an appropriate land area for the reserve. Site selection approaches based on mathematical models can select part of the land from the planning area to compose the nature reserve and protect specific species or ecosystems; they are useful for alleviating the contradiction between ecological protection and development. However, existing site selection methods do not consider the ecological differences between units and suffer from a bottleneck in the computational efficiency of the optimization algorithm. In this study, we first constructed an ecological value assessment system appropriate for forest ecosystems, which was used to calculate the ecological value of Daiyun Mountain and to draw its distribution map. Then, the Ecological Set Covering Problem (ESCP) was established by integrating the ecological values, and the Space-ecology Set Covering Problem (SSCP) was generated based on the spatial compactness of the ESCP. Finally, the STS algorithm, which possesses good optimizing performance, was utilized to search for approximate optimal solutions under diverse protection targets, and an optimized solution for the reserve area of Daiyun Mountain was proposed. According to the experimental results, the differences in the spatial distribution of ecological values were obvious. The ecological value of the sites selected by ESCP was higher than that of SCP, and SSCP could aggregate sites with high ecological value based on ESCP; the level of aggregation increased with the weight of the perimeter. We suggest that the existing reserve could be expanded by about 136 km² and that the site of Tsuga longibracteata, located in the northwest of the study area, should be included. Our research aims to provide an optimization scheme for the sustainable development of the Daiyun Mountain nature reserve and the optimal allocation of land resources, and a novel idea for designing forest ecosystem nature reserves in China.
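
    The plain set covering problem underlying ESCP and SSCP can be illustrated with a small greedy Python sketch that keeps choosing the site covering the most still-unprotected features. The sites and features are invented, and neither the ecological value weights, the spatial compactness term, nor the tabu search of the paper are modeled here.

        # Candidate sites and the conservation features each one covers (illustrative data).
        covers = {
            "site1": {"sp_A", "sp_B"},
            "site2": {"sp_B", "sp_C", "sp_D"},
            "site3": {"sp_A", "sp_D"},
            "site4": {"sp_C", "sp_E"},
        }
        targets = {"sp_A", "sp_B", "sp_C", "sp_D", "sp_E"}

        chosen, uncovered = [], set(targets)
        while uncovered:
            # Greedy rule: pick the site covering the most still-unprotected features.
            site = max(covers, key=lambda s: len(covers[s] & uncovered))
            chosen.append(site)
            uncovered -= covers[site]

        print("selected reserve sites:", chosen)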

  8. SCORPIO: A Scalable Two-Phase Parallel I/O Library With Application To A Large Scale Subsurface Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Sripathi, Vamsi; Mills, Richard T

    2013-01-01

    Inefficient parallel I/O is known to be a major bottleneck among scientific applications employed on supercomputers as the number of processor cores grows into the thousands. Our prior experience indicated that parallel I/O libraries such as HDF5 that rely on MPI-IO do not scale well beyond 10K processor cores, especially on parallel file systems (like Lustre) with a single point of resource contention. Our previous optimization efforts for a massively parallel multi-phase and multi-component subsurface simulator (PFLOTRAN) led to a two-phase I/O approach at the application level where a set of designated processes participate in the I/O process by splitting the I/O operation into a communication phase and a disk I/O phase. The designated I/O processes are created by splitting the MPI global communicator into multiple sub-communicators. The root process in each sub-communicator is responsible for performing the I/O operations for the entire group and then distributing the data to the rest of the group. This approach resulted in over 25X speedup in HDF I/O read performance and 3X speedup in write performance for PFLOTRAN at over 100K processor cores on the ORNL Jaguar supercomputer. This research describes the design and development of a general purpose parallel I/O library, SCORPIO (SCalable block-ORiented Parallel I/O), that incorporates our optimized two-phase I/O approach. The library provides a simplified higher-level abstraction to the user, sitting atop existing parallel I/O libraries (such as HDF5), and implements optimized I/O access patterns that can scale to larger numbers of processors. Performance results with standard benchmark problems and PFLOTRAN indicate that our library is able to maintain the same speedups as before with the added flexibility of being applicable to a wider range of I/O intensive applications.
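
    A minimal mpi4py sketch of the two-phase pattern is shown below: the global communicator is split into sub-communicators, each root gathers its group's data in a communication phase and then performs the disk I/O for the whole group. The group count and the plain NumPy file write are placeholders, not SCORPIO's API.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        GROUPS = 4                                   # number of designated I/O processes (assumed)
        color = rank % GROUPS                        # assignment of ranks to I/O groups
        sub = comm.Split(color=color, key=rank)      # one sub-communicator per group
        sub_rank = sub.Get_rank()

        local = np.full(8, rank, dtype=np.float64)   # each rank's piece of the field

        # Communication phase: every group root gathers the data of its group.
        gathered = sub.gather(local, root=0)

        # Disk I/O phase: only the group roots touch the file system.
        if sub_rank == 0:
            block = np.concatenate(gathered)
            np.save(f"block_{color}.npy", block)     # placeholder for the HDF5 write

        sub.Free()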

  9. Optimized Diffusion of Run-and-Tumble Particles in Crowded Environments

    NASA Astrophysics Data System (ADS)

    Bertrand, Thibault; Zhao, Yongfeng; Bénichou, Olivier; Tailleur, Julien; Voituriez, Raphaël

    2018-05-01

    We study the transport of self-propelled particles in dynamic complex environments. To obtain exact results, we introduce a model of run-and-tumble particles (RTPs) moving in discrete time on a d -dimensional cubic lattice in the presence of diffusing hard-core obstacles. We derive an explicit expression for the diffusivity of the RTP, which is exact in the limit of low density of fixed obstacles. To do so, we introduce a generalization of Kac's theorem on the mean return times of Markov processes, which we expect to be relevant for a large class of lattice gas problems. Our results show the diffusivity of RTPs to be nonmonotonic in the tumbling probability for low enough obstacle mobility. These results prove the potential for the optimization of the transport of RTPs in crowded and disordered environments with applications to motile artificial and biological systems.

  10. Constant Communities in Complex Networks

    NASA Astrophysics Data System (ADS)

    Chakraborty, Tanmoy; Srinivasan, Sriram; Ganguly, Niloy; Bhowmick, Sanjukta; Mukherjee, Animesh

    2013-05-01

    Identifying community structure is a fundamental problem in network analysis. Most community detection algorithms are based on optimizing a combinatorial parameter, for example modularity. This optimization is generally NP-hard, and merely changing the vertex order can alter the assignment of vertices to communities. However, there has been little study of how vertex ordering influences the results of community detection algorithms. Here we identify and study the properties of invariant groups of vertices (constant communities) whose assignments to communities are, quite remarkably, not affected by vertex ordering. The percentage of constant communities can vary across different applications, and based on empirical results we propose metrics to evaluate these communities. Using constant communities as a pre-processing step, one can significantly reduce the variation of the results. Finally, we present a case study on a phoneme network and illustrate that constant communities, quite strikingly, form the core functional units of the larger communities.

  11. Firefly Mating Algorithm for Continuous Optimization Problems

    PubMed Central

    Ritthipakdee, Amarita; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following 2 mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm on these functions were higher than those of the other algorithms, and the proposed algorithm also required fewer iterations to reach the global optima. PMID:28808442

  12. Firefly Mating Algorithm for Continuous Optimization Problems.

    PubMed

    Ritthipakdee, Amarita; Thammano, Arit; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following 2 mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm on these functions were higher than those of the other algorithms, and the proposed algorithm also required fewer iterations to reach the global optima.

  13. The application of dynamic programming in production planning

    NASA Astrophysics Data System (ADS)

    Wu, Run

    2017-05-01

    Nowadays, with the popularity of computers, various industries and fields are widely applying computer information technology, which creates a huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand but also maximizes the benefits and generates the smallest overhead. As one of the common algorithm families, dynamic programming is used to solve problems with some form of optimal substructure. When solving problems with a large number of sub-problems that require repetitive calculation, the ordinary recursive method consumes exponential time, whereas a dynamic programming algorithm can reduce the time complexity to the polynomial level; dynamic programming is therefore very efficient compared to other approaches, reducing the computational complexity while enriching the computational results. In this paper, we expound the concept, basic elements, properties, core idea, solving steps and difficulties of the dynamic programming algorithm, and establish a dynamic programming model of the production planning problem.
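
    As a concrete illustration, the following Python sketch applies memoized dynamic programming to a small single-item production planning problem (per-period setup, production and holding costs, with demand to be met each period); each sub-problem is solved once instead of repeatedly, which is the complexity reduction discussed above. All cost data and capacities are invented.

        from functools import lru_cache

        demand = [2, 3, 2, 4]          # units required in each period (illustrative)
        MAX_PROD = 5                   # production capacity per period
        MAX_INV = 6                    # warehouse capacity
        SETUP, UNIT, HOLD = 3.0, 1.0, 0.5

        @lru_cache(maxsize=None)
        def best_cost(t, inventory):
            """Minimum cost to satisfy demand from period t onward, given starting inventory."""
            if t == len(demand):
                return 0.0
            best = float("inf")
            for produce in range(MAX_PROD + 1):
                end_inv = inventory + produce - demand[t]
                if 0 <= end_inv <= MAX_INV:                      # demand met, warehouse not exceeded
                    cost = (SETUP if produce > 0 else 0.0) + UNIT * produce + HOLD * end_inv
                    best = min(best, cost + best_cost(t + 1, end_inv))
            return best

        print("minimum total cost:", best_cost(0, 0))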

  14. Freud, Problem Solving, Ethnicity, and Race: Integrating Psychology into the Interdisciplinary Core Curriculum.

    ERIC Educational Resources Information Center

    Dunn, Dana S.

    The new core curriculum at Moravian College, in Pennsylvania, utilizes an interdisciplinary approach, integrating topics of psychology into three of the seven core courses: "Microcosm/Macrocosm"; "Quantitative Problem Solving"; and the seminar "Gender, Ethnicity, and Race." The course "Microcosm/Macrocosm"…

  15. Multi level optimization of burnable poison utilization for advanced PWR fuel management

    NASA Astrophysics Data System (ADS)

    Yilmaz, Serkan

    The objective of this study was to develop a unique methodology and a practical tool for designing a burnable poison (BP) pattern for a given PWR core. Two techniques were studied in developing this tool. First, the deterministic technique called the Modified Power Shape Forced Diffusion (MPSFD) method, followed by a fine-tuning algorithm based on some heuristic rules, was developed to achieve this goal. Second, an efficient and practical genetic algorithm (GA) tool was developed and applied successfully to the Burnable Poison (BP) placement optimization problem for a reference Three Mile Island-1 (TMI-1) core. This thesis presents the step-by-step progress in developing such a tool. The developed deterministic method appeared to perform as expected. The GA technique produced excellent BP designs. It was discovered that the Beginning of Cycle (BOC) Kinf of a BP fuel assembly (FA) design is a good filter to eliminate invalid BP designs created during the optimization process. By eliminating all BP designs having BOC Kinf above a set limit, the computational time was greatly reduced, since the evaluation process with reactor physics calculations for an invalid solution is canceled. Moreover, the GA was applied to develop the BP loading pattern to minimize the total Gadolinium (Gd) amount in the core together with the residual binding at End-of-Cycle (EOC), and to keep the maximum peak pin power during core depletion and the soluble boron concentration at BOC both below their limit values. The number of UO2/Gd2O3 pins and the Gd2O3 concentrations for each fresh fuel location in the core are the decision variables, and the total amount of Gd in the core and the maximum peak pin power during core depletion enter the fitness functions. The use of different fitness function definitions and forcing the solution movement towards the desired region in the solution space accelerated the GA runs. Special emphasis is given to minimizing the residual binding to increase core lifetime as well as minimizing the total Gd amount in the core. The GA code developed many good solutions that satisfy all of the design constraints. For these solutions, the EOC soluble boron concentration ranges from 68.9 to 97.2 ppm. It is important to note that the difference of 28.3 ppm between the best and the worst solution in the good-solutions region represents a potential of 12.5 Effective-Full-Power-Days (EFPD) savings in cycle length. As a comparison, the best BP loading design has a 97.2 ppm soluble boron concentration at EOC, while the BP loading with available vendors' U/Gd FA designs has 94.4 ppm SOB at EOC. It was estimated that the difference of 2.8 ppm reflected a potential savings of 1.25 EFPD in cycle length. Moreover, the total Gd amount was reduced by 6.89% in mass, which provided extra savings in fuel cost compared to the BP loading pattern with available vendors' U/Gd FA designs. (Abstract shortened by UMI.)

  16. Role of CT scanning in formation evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergosh, J.L.; Dibona, B.G.

    1988-01-01

    The use of the computerized tomographic (CT) scanner in formation evaluation of difficult-to-analyze core samples has moved from the research and development phase to daily, routine use in the core-analysis laboratory. The role of the CT scanner has become increasingly important as geologists try to obtain more representative core material for accurate formation evaluation. The most common problem facing the core analyst when preparing to measure petrophysical properties is the selection of representative and unaltered core samples for routine and special core testing. Recent data have shown that heterogeneous reservoir rock can be very difficult, if not impossible, to assess correctly when using standard core examination procedures, because many features, such as fractures, are not visible on the core surface. Another problem is the invasion of drilling mud into the core sample. Flushing formation oil and water from the core can greatly alter the saturation and distribution of fluids and lead to serious formation evaluation problems. Because the quality and usefulness of the core data are directly tied to proper sample selection, it has become imperative that the CT scanner be used whenever possible.

  17. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built, and the processor with the minimum cumulative earliest finish time (EFT) was chosen as the target of the first task assignment. The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm has good optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
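
    The earliest-finish-time rule that such DAG schedulers build on can be sketched in a few lines of Python: each task, taken in priority order, is assigned to the processor that lets it finish earliest given its predecessors. This is a plain list-scheduling illustration with an invented task graph and no communication costs, not the CQPSO algorithm itself.

        # Tasks of a small DAG with per-processor execution times (heterogeneous costs).
        exec_time = {               # task -> [time on P0, time on P1]
            "A": [3, 4], "B": [2, 5], "C": [4, 2], "D": [3, 3],
        }
        preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
        order = ["A", "B", "C", "D"]            # assumed priority order (topological here)

        proc_free = [0.0, 0.0]                  # time at which each processor becomes free
        finish = {}                             # task -> (processor, finish time)

        for task in order:
            ready = max((finish[p][1] for p in preds[task]), default=0.0)
            # Earliest-finish-time rule: try every processor, keep the best.
            best_p, best_ft = min(
                ((p, max(ready, proc_free[p]) + exec_time[task][p]) for p in range(2)),
                key=lambda pf: pf[1],
            )
            proc_free[best_p] = best_ft
            finish[task] = (best_p, best_ft)

        for task, (p, ft) in finish.items():
            print(f"{task} -> P{p}, finishes at {ft}")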

  18. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
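
    The underlying idea, searching for the parameter set that minimizes the model-versus-test-data discrepancy, can be illustrated with a bounded least-squares fit in Python. The "thermal model" below is a two-parameter stand-in and the bounds are invented; the actual JWST methodology is far more elaborate.

        import numpy as np
        from scipy.optimize import least_squares

        # Stand-in thermal model: steady-state temperatures as a function of two
        # uncertain parameters (a conductance and a heater power scale).
        def model(params, nodes):
            conductance, power = params
            return 40.0 + power / (conductance * (1.0 + 0.1 * nodes))

        nodes = np.arange(10)
        test_data = model([2.0, 60.0], nodes) + np.random.default_rng(3).normal(0, 0.2, 10)

        # Search for the parameter set minimizing the model-vs-test discrepancy,
        # within assumed engineering bounds on each parameter.
        fit = least_squares(lambda p: model(p, nodes) - test_data,
                            x0=[1.0, 40.0], bounds=([0.5, 10.0], [5.0, 100.0]))
        print("correlated parameters:", fit.x)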

  19. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  20. Revisiting Intel Xeon Phi optimization of Thompson cloud microphysics scheme in Weather Research and Forecasting (WRF) model

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2015-10-01

    The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. The Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.

  1. Analytical methods in the high conversion reactor core design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeggel, W.; Oldekop, W.; Axmann, J.K.

    High conversion reactor (HCR) design methods have been used at the Technical University of Braunschweig (TUBS) with the technological support of Kraftwerk Union (KWU). The present state and objectives of this cooperation between KWU and TUBS in the field of HCRs have been described using existing design models and current activities aimed at further development and validation of the codes. The hard physical and thermal-hydraulic boundary conditions of pressurized water reactor (PWR) cores with a high degree of fuel utilization result from the tight packing of the HCR fuel rods and the high fissionable plutonium content of the fuel. In terms of design, the problem will be solved with rod bundles whose fuel rods are adjusted by helical spacers to the proposed small rod pitches. These HCR properties require novel computational models for neutron physics, thermal hydraulics, and fuel rod design. By means of a survey of the codes, the analytical procedure for present-day HCR core design is presented. The design programs are currently under intensive development, as design tools with a solid, scientific foundation and with essential parameters that are widely valid and are required for a promising optimization of the HCR core. Design results and a survey of future HCR development are given. In this connection, the reoptimization of the PWR core in the direction of an HCR is considered a fascinating scientific task, with respect to both economic and safety aspects.

  2. Creation of Novel Cores for β-Secretase (BACE-1) Inhibitors: A Multiparameter Lead Generation Strategy

    PubMed Central

    2014-01-01

    In order to find optimal core structures as starting points for lead optimization, a multiparameter lead generation workflow was designed with the goal of finding BACE-1 inhibitors as a treatment for Alzheimer’s disease. De novo design of core fragments was connected with three predictive in silico models addressing target affinity, permeability, and hERG activity, in order to guide synthesis. Taking advantage of an additive SAR, the prioritized cores were decorated with a few, well-characterized substituents from known BACE-1 inhibitors in order to allow for core-to-core comparisons. Prediction methods and analyses of how physicochemical properties of the core structures correlate to in vitro data are described. The syntheses and in vitro data of the test compounds are reported in a separate paper by Ginman et al. [J. Med. Chem. 2013, 56, 4181–4205]. The affinity predictions are described in detail by Roos et al. [J. Chem. Inf. 2014, DOI: 10.1021/ci400374z]. PMID:24900855

  3. Creation of Novel Cores for β-Secretase (BACE-1) Inhibitors: A Multiparameter Lead Generation Strategy.

    PubMed

    Viklund, Jenny; Kolmodin, Karin; Nordvall, Gunnar; Swahn, Britt-Marie; Svensson, Mats; Gravenfors, Ylva; Rahm, Fredrik

    2014-04-10

    In order to find optimal core structures as starting points for lead optimization, a multiparameter lead generation workflow was designed with the goal of finding BACE-1 inhibitors as a treatment for Alzheimer's disease. De novo design of core fragments was connected with three predictive in silico models addressing target affinity, permeability, and hERG activity, in order to guide synthesis. Taking advantage of an additive SAR, the prioritized cores were decorated with a few, well-characterized substituents from known BACE-1 inhibitors in order to allow for core-to-core comparisons. Prediction methods and analyses of how physicochemical properties of the core structures correlate to in vitro data are described. The syntheses and in vitro data of the test compounds are reported in a separate paper by Ginman et al. [J. Med. Chem. 2013, 56, 4181-4205]. The affinity predictions are described in detail by Roos et al. [J. Chem. Inf. 2014, DOI: 10.1021/ci400374z].

  4. Bioprinting Using Mechanically Robust Core-Shell Cell-Laden Hydrogel Strands.

    PubMed

    Mistry, Pritesh; Aied, Ahmed; Alexander, Morgan; Shakesheff, Kevin; Bennett, Andrew; Yang, Jing

    2017-06-01

    The strand material in extrusion-based bioprinting determines the microenvironments of the embedded cells and the initial mechanical properties of the constructs. One unmet challenge is the combination of optimal biological and mechanical properties in bioprinted constructs. Here, a novel bioprinting method that utilizes core-shell cell-laden strands with a mechanically robust shell and an extracellular matrix-like core has been developed. Cells encapsulated in the strands demonstrate high cell viability and tissue-like functions during cultivation. This process of bioprinting using core-shell strands with optimal biochemical and biomechanical properties represents a new strategy for fabricating functional human tissues and organs. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Development of Lipid-Shell and Polymer Core Nanoparticles with Water-Soluble Salidroside for Anti-Cancer Therapy

    PubMed Central

    Fang, Dai-Long; Chen, Yan; Xu, Bei; Ren, Ke; He, Zhi-Yao; He, Li-Li; Lei, Yi; Fan, Chun-Mei; Song, Xiang-Rong

    2014-01-01

    Salidroside (Sal) is a potent antitumor drug with high water-solubility. The clinical application of Sal in cancer therapy has been significantly restricted by poor oral absorption and low tumor cell uptake. To solve this problem, lipid-shell and polymer-core nanoparticles (Sal-LPNPs) loaded with Sal were developed by a double emulsification method. The processing parameters, including the polymer types, organic phase, and PVA types and amounts, were systematically investigated. The obtained optimal Sal-LPNPs, composed of PLGA-PEG-PLGA triblock copolymers and lipids, had high entrapment efficiency (65%), submicron size (150 nm) and a negatively charged surface (−23 mV). DSC analysis demonstrated the successful encapsulation of Sal into LPNPs. The core-shell structure of Sal-LPNPs was verified by TEM. Sal released slowly from the LPNPs without apparent burst release. MTT assay revealed that 4T1 and PANC-1 cancer cell lines were sensitive to Sal treatment. Sal-LPNPs had significantly higher antitumor activities than free Sal in 4T1 and PANC-1 cells. The data indicate that LPNPs are a promising Sal vehicle for anti-cancer therapy and worthy of further investigation. PMID:24573250

  6. Development of lipid-shell and polymer core nanoparticles with water-soluble salidroside for anti-cancer therapy.

    PubMed

    Fang, Dai-Long; Chen, Yan; Xu, Bei; Ren, Ke; He, Zhi-Yao; He, Li-Li; Lei, Yi; Fan, Chun-Mei; Song, Xiang-Rong

    2014-02-25

    Salidroside (Sal) is a potent antitumor drug with high water-solubility. The clinical application of Sal in cancer therapy has been significantly restricted by poor oral absorption and low tumor cell uptake. To solve this problem, lipid-shell and polymer-core nanoparticles (Sal-LPNPs) loaded with Sal were developed by a double emulsification method. The processing parameters, including the polymer types, organic phase, and PVA types and amounts, were systematically investigated. The obtained optimal Sal-LPNPs, composed of PLGA-PEG-PLGA triblock copolymers and lipids, had high entrapment efficiency (65%), submicron size (150 nm) and a negatively charged surface (-23 mV). DSC analysis demonstrated the successful encapsulation of Sal into LPNPs. The core-shell structure of Sal-LPNPs was verified by TEM. Sal released slowly from the LPNPs without apparent burst release. MTT assay revealed that 4T1 and PANC-1 cancer cell lines were sensitive to Sal treatment. Sal-LPNPs had significantly higher antitumor activities than free Sal in 4T1 and PANC-1 cells. The data indicate that LPNPs are a promising Sal vehicle for anti-cancer therapy and worthy of further investigation.

  7. Facilitating Teamwork in Adolescent and Young Adult Oncology

    PubMed Central

    Macpherson, Catherine Fiona; Smith, Ashley W.; Block, Rebecca G.; Keyton, Joann

    2016-01-01

    A case of a young adult patient in the days immediately after a cancer diagnosis illustrates the critical importance of three interrelated core coordinating mechanisms—closed-loop communication, shared mental models, and mutual trust—of teamwork in an adolescent and young adult multidisciplinary oncology team. The case illustrates both the opportunities to increase team member coordination and the problems that can occur when coordination breaks down. A model for teamwork is presented, which highlights the relationships among these coordinating mechanisms and demonstrates how balance among them works to optimize team function and patient care. Implications for clinical practice and research suggested by the case are presented. PMID:27624944

  8. A discrete mechanics approach to dislocation dynamics in BCC crystals

    NASA Astrophysics Data System (ADS)

    Ramasubramaniam, A.; Ariza, M. P.; Ortiz, M.

    2007-03-01

    A discrete mechanics approach to modeling the dynamics of dislocations in BCC single crystals is presented. Ideas are borrowed from discrete differential calculus and algebraic topology and suitably adapted to crystal lattices. In particular, the extension of a crystal lattice to a CW complex allows for convenient manipulation of forms and fields defined over the crystal. Dislocations are treated within the theory as energy-minimizing structures that lead to locally lattice-invariant but globally incompatible eigendeformations. The discrete nature of the theory eliminates the need for regularization of the core singularity and inherently allows for dislocation reactions and complicated topological transitions. The quantization of slip to integer multiples of the Burgers' vector leads to a large integer optimization problem. A novel approach to solving this NP-hard problem based on considerations of metastability is proposed. A numerical example that applies the method to study the emanation of dislocation loops from a point source of dilatation in a large BCC crystal is presented. The structure and energetics of BCC screw dislocation cores, as obtained via the present formulation, are also considered and shown to be in good agreement with available atomistic studies. The method thus provides a realistic avenue for mesoscale simulations of dislocation based crystal plasticity with fully atomistic resolution.

  9. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In the single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to impose a long data-transfer time between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.

  10. Problem solving as a core strategy in the prevention of schizophrenia and other mental disorders.

    PubMed

    Falloon, I R

    2000-11-01

    To outline the rationale for implementing training in structured problem solving as a primary prevention strategy for major mental disorders. The evidence that training people in a structured method of solving their personal problems is an effective strategy in the treatment of established cases of schizophrenic and major mood disorders, is selectively reviewed. Most of the relevant research focused on the prevention of major recurrent episodes of psychosis. There is some evidence to support the hypothesis that this strategy may assist many people to achieve a full and sustained recovery from the clinical and social impairments of these disorders, especially when patients are taught to use structured problem solving with members of their personal resource groups, and they continue to take optimal doses of psychoactive medication. There is support for the hypothesis that the key therapeutic factor associated with these benefits is the improved efficiency of the management of life stress. The simplicity of problem solving, the educational methods used, and the widespread application to a person's lifestyle would appear to make this a possible candidate for a primary prevention program for major mental disorders. Guidebooks and teaching aids have been developed and show excellent consumer acceptance.

  11. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
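
    As a reminder of how weak-scaling figures of the kind quoted above are typically derived, the sketch below computes weak-scaling efficiency from per-run wall times; the core counts and timings are placeholders, not MFiX results.

    ```python
    def weak_scaling_efficiency(core_counts, wall_times):
        """Weak scaling: problem size grows with core count, so the ideal wall time is constant.
        Efficiency at p cores is t(reference run) / t(p-core run)."""
        t_ref = wall_times[0]
        return {p: t_ref / t for p, t in zip(core_counts, wall_times)}

    # Placeholder timings (seconds) for runs whose size grows with the core count.
    print(weak_scaling_efficiency([1, 8, 64, 512], [100.0, 104.0, 112.0, 131.0]))
    ```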

  12. Graphics Processing Unit–Enhanced Genetic Algorithms for Solving the Temporal Dynamics of Gene Regulatory Networks

    PubMed Central

    García-Calvo, Raúl; Guisado, JL; Diaz-del-Rio, Fernando; Córdoba, Antonio; Jiménez-Morales, Francisco

    2018-01-01

    Understanding the regulation of gene expression is one of the key problems in current biology. A promising method for that purpose is the determination of the temporal dynamics between known initial and ending network states, by using simple acting rules. The huge amount of rule combinations and the nonlinear inherent nature of the problem make genetic algorithms an excellent candidate for finding optimal solutions. As this is a computationally intensive problem that needs long runtimes in conventional architectures for realistic network sizes, it is fundamental to accelerate this task. In this article, we study how to develop efficient parallel implementations of this method for the fine-grained parallel architecture of graphics processing units (GPUs) using the compute unified device architecture (CUDA) platform. An exhaustive and methodical study of various parallel genetic algorithm schemes—master-slave, island, cellular, and hybrid models, and various individual selection methods (roulette, elitist)—is carried out for this problem. Several procedures that optimize the use of the GPU’s resources are presented. We conclude that the implementation that produces the best results (from both the performance and the genetic algorithm fitness perspectives) is one that simulates a few thousand individuals grouped in a few islands using elitist selection. This model combines two key factors for discovering the best solutions: finding good individuals in a short number of generations, and introducing genetic diversity via relatively frequent and sizeable migrations. As a result, we have even found the optimal solution for the analyzed gene regulatory network (GRN). In addition, a comparative study of the performance obtained by the different parallel implementations on GPU versus a sequential application on CPU is carried out. In our tests, a multifold speedup was obtained for our optimized parallel implementation of the method on a medium-class GPU over an equivalent sequential single-core implementation running on a recent Intel i7 CPU. This work can provide useful guidance to researchers in biology, medicine, or bioinformatics on how to take advantage of parallelization on massively parallel devices such as GPUs to apply novel metaheuristic algorithms powered by nature for real-world applications (like the method to solve the temporal dynamics of GRNs). PMID:29662297
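
    The island-model configuration the authors favour (a few islands, elitist selection, frequent migration) can be sketched in a few lines. The bitstring encoding, fitness function, population sizes, and migration interval below are illustrative assumptions, not the GRN rule representation used in the paper.

    ```python
    import random

    def island_ga(fitness, n_islands=4, pop_size=256, n_genes=32,
                  generations=200, migration_every=10, migrants=8):
        """Minimal island-model GA with elitist selection and ring migration."""
        def new_ind():
            return [random.randint(0, 1) for _ in range(n_genes)]

        islands = [[new_ind() for _ in range(pop_size)] for _ in range(n_islands)]
        for gen in range(generations):
            for k, pop in enumerate(islands):
                pop.sort(key=fitness, reverse=True)          # elitist: best individuals first
                elite = pop[:pop_size // 2]
                children = []
                while len(elite) + len(children) < pop_size:
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, n_genes)
                    child = a[:cut] + b[cut:]                 # one-point crossover
                    if random.random() < 0.05:                # mutation
                        i = random.randrange(n_genes)
                        child[i] ^= 1
                    children.append(child)
                islands[k] = elite + children
            if gen % migration_every == 0:                    # ring migration of the best individuals
                for k in range(n_islands):
                    dest = islands[(k + 1) % n_islands]
                    dest[-migrants:] = islands[k][:migrants]
        return max((ind for pop in islands for ind in pop), key=fitness)

    # Toy fitness: maximise the number of ones (stand-in for the GRN rule-fitting objective).
    print(sum(island_ga(fitness=sum)))
    ```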

  13. Graphics Processing Unit-Enhanced Genetic Algorithms for Solving the Temporal Dynamics of Gene Regulatory Networks.

    PubMed

    García-Calvo, Raúl; Guisado, J L; Diaz-Del-Rio, Fernando; Córdoba, Antonio; Jiménez-Morales, Francisco

    2018-01-01

    Understanding the regulation of gene expression is one of the key problems in current biology. A promising method for that purpose is the determination of the temporal dynamics between known initial and ending network states, by using simple acting rules. The huge amount of rule combinations and the nonlinear inherent nature of the problem make genetic algorithms an excellent candidate for finding optimal solutions. As this is a computationally intensive problem that needs long runtimes in conventional architectures for realistic network sizes, it is fundamental to accelerate this task. In this article, we study how to develop efficient parallel implementations of this method for the fine-grained parallel architecture of graphics processing units (GPUs) using the compute unified device architecture (CUDA) platform. An exhaustive and methodical study of various parallel genetic algorithm schemes (master-slave, island, cellular, and hybrid models) and various individual selection methods (roulette, elitist) is carried out for this problem. Several procedures that optimize the use of the GPU's resources are presented. We conclude that the implementation that produces the best results (from both the performance and the genetic algorithm fitness perspectives) is one that simulates a few thousand individuals grouped in a few islands using elitist selection. This model combines two key factors for discovering the best solutions: finding good individuals in a short number of generations, and introducing genetic diversity via relatively frequent and sizeable migrations. As a result, we have even found the optimal solution for the analyzed gene regulatory network (GRN). In addition, a comparative study of the performance obtained by the different parallel implementations on GPU versus a sequential application on CPU is carried out. In our tests, a multifold speedup was obtained for our optimized parallel implementation of the method on a medium-class GPU over an equivalent sequential single-core implementation running on a recent Intel i7 CPU. This work can provide useful guidance to researchers in biology, medicine, or bioinformatics on how to take advantage of parallelization on massively parallel devices such as GPUs to apply novel metaheuristic algorithms powered by nature for real-world applications (like the method to solve the temporal dynamics of GRNs).

  14. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    Cumulus parameterization schemes are responsible for the sub-grid-scale effects of convective and/or shallow clouds and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. The schemes all provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfills these purposes in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has tried to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its support for efficient parallelization and vectorization, allows us to optimize the BMJ scheme. Compared with the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.
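
    Because the scheme has no horizontal coupling, each grid column can be processed independently, which is exactly the pattern that parallelizes and vectorizes well on many-core hardware. The sketch below shows that pattern with a placeholder column routine; the real BMJ adjustment is far more involved, and the grid sizes here are assumptions for illustration only.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def column_adjustment(column_profile):
        """Placeholder for the per-column convective adjustment: the real scheme
        relaxes temperature/moisture profiles toward reference profiles."""
        return column_profile - 0.1 * (column_profile - column_profile.mean())

    def adjust_all_columns(field):
        """`field` has shape (n_columns, n_levels); columns are independent, so they
        can be farmed out to separate cores (or vector lanes on MIC-style hardware)."""
        with ProcessPoolExecutor() as pool:
            return np.array(list(pool.map(column_adjustment, field)))

    if __name__ == "__main__":
        grid = np.random.rand(1000, 40)          # 1000 columns, 40 vertical levels
        adjusted = adjust_all_columns(grid)
        print(adjusted.shape)
    ```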

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaitsgory, Vladimir, E-mail: vladimir.gaitsgory@mq.edu.au; Rossomakhine, Sergey, E-mail: serguei.rossomakhine@flinders.edu.au

    The paper aims at the development of an apparatus for analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria but a possibility of the extension of results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solution of these semi-infinite LP problems and their duals (that can be found with the help of a modification of an available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.

  16. A New Combinatorial Optimization Approach for Integrated Feature Selection Using Different Datasets: A Prostate Cancer Transcriptomic Study

    PubMed Central

    Puthiyedth, Nisha; Riveros, Carlos; Berretta, Regina; Moscato, Pablo

    2015-01-01

    Background: The joint study of multiple datasets has become a common technique for increasing statistical power in detecting biomarkers obtained from smaller studies. The approach generally followed is based on the fact that as the total number of samples increases, we expect to have greater power to detect associations of interest. This methodology has been applied to genome-wide association and transcriptomic studies due to the availability of datasets in the public domain. While this approach is well established in biostatistics, the introduction of new combinatorial optimization models to address this issue has not been explored in depth. In this study, we introduce a new model for the integration of multiple datasets and we show its application in transcriptomics. Methods: We propose a new combinatorial optimization problem that addresses the core issue of biomarker detection in integrated datasets. Optimal solutions for this model deliver a feature selection from a panel of prospective biomarkers. The model we propose is a generalised version of the (α,β)-k-Feature Set problem. We illustrate the performance of this new methodology via a challenging meta-analysis task involving six prostate cancer microarray datasets. The results are then compared to the popular RankProd meta-analysis tool and to what can be obtained by analysing the individual datasets by statistical and combinatorial methods alone. Results: Application of the integrated method resulted in a more informative signature than the rank-based meta-analysis or individual dataset results, and overcomes problems arising from real world datasets. The set of genes identified is highly significant in the context of prostate cancer. The method used does not rely on homogenisation or transformation of values to a common scale, and at the same time is able to capture markers associated with subgroups of the disease. PMID:26106884

  17. Optimization and photomodification of extremely broadband optical response of plasmonic core-shell obscurants.

    PubMed

    de Silva, Vashista C; Nyga, Piotr; Drachev, Vladimir P

    2016-12-15

    Plasmonic resonances of metallic shells depend on their nanostructure and on the geometry of the core, both of which can be optimized for broadband extinction normalized by mass. Fractal nanostructures can provide such broadband extinction. They also allow laser photoburning of holes in the extinction spectra and, consequently, windows of transparency in a controlled manner. The studied core-shell microparticles, synthesized using colloidal chemistry, consist of gold fractal nanostructures grown on precipitated calcium carbonate (PCC) microparticles or silica (SiO₂) microspheres. The optimization includes different core sizes and shapes, and different shell nanostructures. It shows that the rich surface of the PCC flakes is the best core for the fractal shells, providing the highest mass-normalized extinction over an extremely broad spectral range. A mass-normalized extinction cross section of up to 3 m²/g has been demonstrated over the broad spectral range from the visible to the mid-infrared. Essentially, the broadband response is a characteristic feature of each core-shell microparticle, in contrast to a combination of several structures resonant at different wavelengths, for example, nanorods with different aspect ratios. Photomodification at an IR wavelength opens the window of transparency on the longer-wavelength side.

  18. The maximum vector-angular margin classifier and its fast training on large datasets using a core vector machine.

    PubMed

    Hu, Wenjun; Chung, Fu-Lai; Wang, Shitong

    2012-03-01

    Although pattern classification has been extensively studied in the past decades, how to effectively solve the corresponding training on large datasets is a problem that still requires particular attention. Many kernelized classification methods, such as SVM and SVDD, can be formulated as the corresponding quadratic programming (QP) problems, but computing the associated kernel matrices requires O(n²) (or even up to O(n³)) computational complexity, where n is the number of training patterns, which heavily limits the applicability of these methods for large datasets. In this paper, a new classification method called the maximum vector-angular margin classifier (MAMC) is first proposed based on the vector-angular margin to find an optimal vector c in the pattern feature space, and all the testing patterns can be classified in terms of the maximum vector-angular margin ρ between the vector c and all the training data points. Accordingly, it is proved that the kernelized MAMC can be equivalently formulated as the kernelized Minimum Enclosing Ball (MEB), which leads to a distinctive merit of MAMC, i.e., it has the flexibility of controlling the sum of support vectors like ν-SVC and may be extended to a maximum vector-angular margin core vector machine (MAMCVM) by connecting the core vector machine (CVM) method with MAMC such that the corresponding fast training on large datasets can be effectively achieved. Experimental results on artificial and real datasets are provided to validate the power of the proposed methods.
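
    The minimum enclosing ball connection is what makes the fast training possible: MEB admits a core-set approximation whose cost is essentially independent of the dataset size. The sketch below shows a Badoiu-Clarkson-style approximate MEB iteration in input space, purely to fix ideas; the CVM works in the kernel-induced feature space, which is not reproduced here.

    ```python
    import numpy as np

    def approx_meb(points, n_iter=100):
        """Approximate minimum enclosing ball by repeatedly pulling the centre
        toward the current farthest point (core-set style iteration)."""
        c = points.mean(axis=0)
        for t in range(1, n_iter + 1):
            dists = np.linalg.norm(points - c, axis=1)
            far = points[np.argmax(dists)]          # farthest point joins the core set
            c = c + (far - c) / (t + 1)             # shrinking step toward it
        radius = np.linalg.norm(points - c, axis=1).max()
        return c, radius

    pts = np.random.randn(500, 3)
    centre, r = approx_meb(pts)
    print(centre, r)
    ```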

  19. Do Imitation Problems Reflect a Core Characteristic in Autism? Evidence from a Literature Review

    ERIC Educational Resources Information Center

    Vanvuchelen, Marleen; Roeyers, Herbert; De Weerdt, Willy

    2011-01-01

    Although imitation problems have been associated with autism for many years, the issue of whether these problems reflect a core deficit in autism remains a subject of debate. In this review article, the question of whether imitation problems in autism fulfil the criteria of uniqueness, specificity, universality, persistency, precedence and broadness is explored and…

  20. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations

    PubMed Central

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor increased in an attempt to compensate for the small increments in clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing them in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in choosing a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, GPU performance is only superior above a certain problem size, owing to data-migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094

  1. Passive motion paradigm: an alternative to optimal control.

    PubMed

    Mohan, Vishwanathan; Morasso, Pietro

    2011-01-01

    In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition for two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering design techniques for control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain the desired behavior (and predictions). We propose an alternative, "softer" approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the "DoFs problem" without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not only restricted to shaping motor output during action execution but also to providing the self with information on the feasibility, consequence, understanding and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. Therefore, the paper is at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures.
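
    The idea of "animating the body schema with goal-induced force fields" can be made concrete with a tiny kinematic sketch: an attractive field in task space is mapped into joint space through the Jacobian transpose and relaxed forward in time, with no explicit inverse kinematics or cost function. This is a minimal sketch assuming a planar two-link arm with unit link lengths and illustrative gains, not the authors' implementation.

    ```python
    import numpy as np

    def pmp_reach(goal, q=np.array([0.3, 0.5]), links=(1.0, 1.0),
                  k_task=5.0, admittance=1.0, dt=0.01, steps=500):
        """Passive-motion-paradigm-style relaxation for a planar 2-link arm."""
        l1, l2 = links
        for _ in range(steps):
            # forward kinematics of the end effector
            x = np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                          l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])
            force = k_task * (goal - x)                       # attractive force field toward the goal
            jac = np.array([[-l1 * np.sin(q[0]) - l2 * np.sin(q[0] + q[1]), -l2 * np.sin(q[0] + q[1])],
                            [ l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),  l2 * np.cos(q[0] + q[1])]])
            torque = jac.T @ force                            # map the field into joint space
            q = q + admittance * torque * dt                  # "passive" relaxation, no inversion
        return q, x

    q_final, x_final = pmp_reach(goal=np.array([1.2, 0.8]))
    print(q_final, x_final)
    ```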

  2. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations.

    PubMed

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor increased in an attempt to compensate for the small increments in clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing them in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in choosing a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, GPU performance is only superior above a certain problem size, owing to data-migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures.

  3. Negative core affect and employee silence: How differences in activation, cognitive rumination, and problem-solving demands matter.

    PubMed

    Madrid, Hector P; Patterson, Malcolm G; Leiva, Pedro I

    2015-11-01

    Employees can help to improve organizational performance by sharing ideas, suggestions, or concerns about practices, but sometimes they keep silent because of the experience of negative affect. Drawing and expanding on this stream of research, this article builds a theoretical rationale based on core affect and cognitive appraisal theories to describe how differences in affect activation and boundary conditions associated with cognitive rumination and cognitive problem-solving demands can explain employee silence. Results of a diary study conducted with professionals from diverse organizations indicated that within-person low-activated negative core affect increased employee silence when, as an invariant factor, cognitive rumination was high. Furthermore, within-person high-activated negative core affect decreased employee silence when, as an invariant factor, cognitive problem-solving demand was high. Thus, organizations should manage conditions to reduce experiences of low-activated negative core affect because these feelings increase silence in individuals high in rumination. In turn, effective management of experiences of high-activated negative core affect can reduce silence for individuals working under high problem-solving demand situations.

  4. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, including real-life problems where conventional techniques cannot be applied. The Grey Wolf Optimizer is one such technique that has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on five common scalable problems from the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except on Rosenbrock, which is a unimodal function.
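
    For readers unfamiliar with the method, the canonical GWO update moves each candidate relative to the three best wolves (alpha, beta, delta) while the control parameter a decreases linearly from 2 to 0. The sketch below applies this update to the Sphere function; population size, iteration count, and bounds are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def gwo(obj, dim=50, n_wolves=30, iters=500, lb=-100.0, ub=100.0):
        """Minimal Grey Wolf Optimizer for minimisation of `obj`."""
        wolves = np.random.uniform(lb, ub, (n_wolves, dim))
        for t in range(iters):
            fitness = np.array([obj(w) for w in wolves])
            alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
            a = 2.0 - 2.0 * t / iters                       # decreases linearly from 2 to 0
            for i in range(n_wolves):
                new = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    r1, r2 = np.random.rand(dim), np.random.rand(dim)
                    A, C = 2 * a * r1 - a, 2 * r2
                    D = np.abs(C * leader - wolves[i])
                    new += (leader - A * D) / 3.0           # average of the three leader-guided moves
                wolves[i] = np.clip(new, lb, ub)
        best = min(wolves, key=obj)
        return best, obj(best)

    sphere = lambda x: float(np.sum(x ** 2))
    print(gwo(sphere, dim=50)[1])
    ```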

  5. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    NASA Astrophysics Data System (ADS)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their bloated size and interfering accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH), which optimally allocates multi-level cache resources to many cores and greatly improves the efficiency of the cache hierarchy, resulting in low energy consumption. The BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the basis for cache allocation, so that the cache hierarchy can be optimally configured to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by between 5.29% and 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly when hardware overhead is taken into account.

  6. Preparation of monolithic osmotic pump system by coating the indented core tablet.

    PubMed

    Liu, Longxiao; Che, Binjie

    2006-10-01

    A monolithic osmotic pump tablet was prepared by coating an indented core tablet compressed with a punch bearing a needle. Atenolol was used as the model drug, sodium chloride as the osmotic agent, and polyethylene oxide as the suspending agent. Ethyl cellulose was employed as the semipermeable membrane, with polyethylene glycol 400 as plasticizer for controlling membrane permeability. The formulation of the atenolol osmotic pump tablet was optimized by orthogonal design and evaluated by the similarity factor (f2). The optimal formulation was evaluated in various release media and at various agitation rates. The indentation size of the core tablet hardly affected drug release over the range 1.00-1.14 mm. The optimal osmotic tablet was found to deliver atenolol at an approximately constant rate for up to 24 h, independent of both release media and agitation rate. The method, which is simplified by coating the indented core tablet and thus eliminating laser drilling, may be promising for the preparation of osmotic pump tablets.
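
    The similarity factor f2 used to compare dissolution profiles during formulation optimization has a standard closed form, f2 = 50·log10(100/sqrt(1 + mean((R−T)²))), where R and T are the reference and test release percentages at matched time points. The sketch below evaluates it on placeholder release profiles.

    ```python
    import numpy as np

    def f2_similarity(reference, test):
        """Similarity factor f2 between two cumulative-release profiles (in percent).
        Profiles with f2 >= 50 are conventionally considered similar."""
        reference, test = np.asarray(reference, float), np.asarray(test, float)
        msd = np.mean((reference - test) ** 2)              # mean squared difference
        return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

    # Placeholder release profiles (% released at successive sampling times).
    ref = [12, 25, 38, 52, 66, 79, 90]
    test = [10, 23, 40, 50, 68, 77, 92]
    print(round(f2_similarity(ref, test), 1))
    ```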

  7. Inductance optimization of miniature Broadband transformers with racetrack shaped ferrite cores for Ethernet applications

    NASA Astrophysics Data System (ADS)

    Bowen, David; Krafft, Charles; Mayergoyz, Isaak D.

    2017-05-01

    There is strong commercial interest in the ability to fabricate the windings of traditional miniature wire-wound inductive circuit components, such as Ethernet transformers, lithographically. For higher-inductance devices, thick cores are required, making the process of embedding the ferrite material within the circuit board one of the few options for lithographic winding fabrication. In this paper, a non-traditional core shape, suitable for embedding in circuit board, is examined analytically and experimentally; the racetrack shape is two halves of a toroid connected by straight legs. With regard to the high inductance requirement for Ethernet applications (350 μH), the racetrack transformer inductance is analytically optimized to determine the optimal physical dimensions. Two sizes of racetrack-core transformers were fabricated and measured. The measured inductance was in reasonable agreement with the analytical prediction, though large variations in material permeability are expected from the mechanical processing of the ferrite.

  8. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed-forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive-critic-based systematic approach holds promise for obtaining optimal control with control and state constraints.

  9. Binary optimization for source localization in the inverse problem of ECG.

    PubMed

    Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf

    2014-09-01

    The goal of ECG-imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has however a serious drawback: The obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to a heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. This affects only the choice of the binary values, while the core of the algorithms remains the same, making the approximation easily adjustable to the application needs. Two methods, a hybrid metaheuristic approach and the difference-of-convex-functions (DC) algorithm, were tested. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution insusceptible to errors, while the analytical DC scheme can be efficiently applied to higher dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of binary values, ensuring robust performance.

  10. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms differ from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early (for example, prior to completely simulating the model calibration period) when intermediate results indicate that the candidate solution is so poor that it will have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
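
    The core of DDS, and what makes the asynchronous parallel variant simple, is a single candidate-generation rule: each decision variable is perturbed with a probability that decays with the iteration count, using a reflected Gaussian step. The sketch below follows the published serial DDS rule; the bounds, perturbation size, and iteration counts are placeholders, and none of the parallel or pre-emption machinery of the paper is shown.

    ```python
    import math
    import random

    def dds_candidate(best, lb, ub, i, max_iter, r=0.2):
        """One DDS candidate: perturb each dimension with probability p(i),
        using a reflected Gaussian step of width r*(ub-lb)."""
        p = 1.0 - math.log(i) / math.log(max_iter)           # fewer dimensions perturbed as i grows
        dims = [d for d in range(len(best)) if random.random() < p]
        if not dims:                                         # always perturb at least one dimension
            dims = [random.randrange(len(best))]
        cand = list(best)
        for d in dims:
            x = cand[d] + random.gauss(0.0, r * (ub[d] - lb[d]))
            if x < lb[d]:                                    # reflect at the lower bound
                x = lb[d] + (lb[d] - x)
                if x > ub[d]:
                    x = lb[d]
            elif x > ub[d]:                                  # reflect at the upper bound
                x = ub[d] - (x - ub[d])
                if x < lb[d]:
                    x = ub[d]
            cand[d] = x
        return cand

    print(dds_candidate([0.5, 0.5, 0.5], [0.0] * 3, [1.0] * 3, i=10, max_iter=100))
    ```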

  11. Hamiltonian Systems and Optimal Control in Computational Anatomy: 100 Years Since D'Arcy Thompson.

    PubMed

    Miller, Michael I; Trouvé, Alain; Younes, Laurent

    2015-01-01

    The Computational Anatomy project is the morphome-scale study of shape and form, which we model as an orbit under diffeomorphic group action. Metric comparison calculates the geodesic length of the diffeomorphic flow connecting one form to another. Geodesic connection provides a positioning system for coordinatizing the forms and positioning their associated functional information. This article reviews progress since the Euler-Lagrange characterization of the geodesics a decade ago. Geodesic positioning is posed as a series of problems in Hamiltonian control, which emphasize the key reduction from the Eulerian momentum with dimension of the flow of the group, to the parametric coordinates appropriate to the dimension of the submanifolds being positioned. The Hamiltonian viewpoint provides important extensions of the core setting to new, object-informed positioning systems. Several submanifold mapping problems are discussed as they apply to metamorphosis, multiple shape spaces, and longitudinal time series studies of growth and atrophy via shape splines.

  12. Design of Energy Storage Management System Based on FPGA in Micro-Grid

    NASA Astrophysics Data System (ADS)

    Liang, Yafeng; Wang, Yanping; Han, Dexiao

    2018-01-01

    The energy storage system is the core component for maintaining stable operation of a smart micro-grid. To address existing problems of energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause fluctuations in the micro-grid, a new intelligent battery management system based on a field programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with the intelligent micro-grid control strategy. Finally, to address the problem that inaccurate initialization of weights and thresholds during neural-network estimation of the battery state of charge leads to large prediction errors, a genetic algorithm is proposed to optimize the neural network, and an experimental simulation is carried out. The experimental results show that the algorithm has high precision and helps guarantee the stable operation of the micro-grid.

  13. MUTILS - a set of efficient modeling tools for multi-core CPUs implemented in MEX

    NASA Astrophysics Data System (ADS)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2013-04-01

    The need for computational performance is common in scientific applications, and in particular in numerical simulations, where high resolution models require efficient processing of large amounts of data. Especially in the context of geological problems, the need to increase the model resolution to resolve physical and geometrical complexities seems to have no limits. Alas, the performance of new generations of CPUs no longer improves by simply increasing clock speeds. Current industrial trends are to increase the number of computational cores. As a result, parallel implementations are required in order to fully utilize the potential of new processors and to study more complex models. We target simulations on small to medium scale shared memory computers, from laptops and desktop PCs with ~8 CPU cores and up to tens of GB of memory to high-end servers with ~50 CPU cores and hundreds of GB of memory. In this setting, MATLAB is often the environment of choice for scientists who want to implement their own models with little effort. It is a useful general purpose mathematical software package, but due to its versatility some of its functionality is not as efficient as it could be. In particular, the challenges of modern multi-core architectures are not fully addressed. We have developed MILAMIN 2 - an efficient FEM modeling environment written in native MATLAB. Amongst others, MILAMIN provides functions to define model geometry, generate and convert structured and unstructured meshes (also through interfaces to external mesh generators), compute element and system matrices, apply boundary conditions, solve the system of linear equations, address non-linear and transient problems, and perform post-processing. MILAMIN strives to combine ease of code development with computational efficiency. Where possible, the code is optimized and/or parallelized within the MATLAB framework. Native MATLAB is augmented with the MUTILS library - a set of MEX functions that implement the computationally intensive, performance critical parts of the code, which we have identified to be bottlenecks. Here, we discuss the functionality and performance of the MUTILS library. Currently, it includes:
    1. time and memory efficient assembly of sparse matrices for FEM simulations
    2. parallel sparse matrix - vector product with optimizations specific to symmetric matrices and multiple degrees of freedom per node
    3. parallel point-in-triangle and point-in-tetrahedron location for unstructured, adaptive 2D and 3D meshes (useful for 'marker in cell' type methods)
    4. parallel FEM interpolation for 2D and 3D meshes of elements of different types and orders, and for different numbers of degrees of freedom per node
    5. a stand-alone MEX implementation of the Conjugate Gradients iterative solver
    6. an interface to METIS graph partitioning and a fast implementation of RCM reordering
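
    Item 1 in the list above, time- and memory-efficient sparse FEM assembly, amounts to accumulating element triplets and converting them to a compressed format in one pass, with duplicate entries summed. A SciPy-based sketch (outside MATLAB/MEX, purely for illustration) is shown below.

    ```python
    import numpy as np
    from scipy.sparse import coo_matrix

    def assemble_fem_matrix(elements, element_matrices, n_dof):
        """Assemble a global sparse FEM matrix from per-element dense matrices.
        `elements` is (n_elem, nodes_per_elem) connectivity; duplicate (i, j)
        triplets are summed automatically by the COO->CSR conversion."""
        rows, cols, vals = [], [], []
        for conn, ke in zip(elements, element_matrices):
            for a, i in enumerate(conn):
                for b, j in enumerate(conn):
                    rows.append(i); cols.append(j); vals.append(ke[a, b])
        return coo_matrix((vals, (rows, cols)), shape=(n_dof, n_dof)).tocsr()

    # Two 1D linear elements on three nodes, each with the standard [[1, -1], [-1, 1]] stiffness.
    elems = np.array([[0, 1], [1, 2]])
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]])
    K = assemble_fem_matrix(elems, [ke, ke], n_dof=3)
    print(K.toarray())
    ```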

  14. COPS: Large-scale nonlinearly constrained optimization problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bondarenko, A.S.; Bortz, D.M.; More, J.J.

    2000-02-10

    The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.

  15. Design and optimization of a flexible high-peak-power laser-to-fiber coupled illumination system used in digital particle image velocimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Ronald A.; Ilev, Ilko K.

    We present a study on the design and parameter optimization of a flexible high-peak-power fiber-optic laser delivery system using commercially available solid-core silica fibers and an experimental glass hollow waveguide (HW). The fiber-optic delivery system provides a flexible, safe, and easily and precisely positioned laser irradiation for many applications including uniform illumination for digital particle image velocimetry (DPIV). The delivery fibers, when coupled through a line-generating lens, produce a uniform thin laser sheet illumination for accurate and repeatable DPIV two-dimensional velocity measurements. We report experimental results on homogenizing the laser beam profile using various mode-mixing techniques. Furthermore, because a fundamental problem for fiber-optic-based high-peak-power laser delivery systems is the possible damage effects of the fiber material, we determine experimentally the peak power density damage threshold of various delivery fibers designed for the visible spectral range at a typical DPIV laser wavelength of 532 nm. In the case of solid-core silica delivery fibers using conventional lens-based laser-to-fiber coupling, the damage threshold varies from 3.7 GW/cm² for a 100-μm-core-diameter high-temperature fiber to 3.9 GW/cm² for a 200-μm-core-diameter high-power delivery fiber, with a total output laser energy delivered of at least 3-10 mJ for those respective fibers. Therefore, these fibers are marginally suitable for most macro-DPIV applications. However, to improve the high-power delivery capability for close-up micro-DPIV applications, we propose and validate an experimental fiber link with much higher laser power delivery capability than the solid-core fiber links. We use an uncoated grazing-incidence-based tapered glass funnel coupled to a glass HW with hollow air-core diameter of 700 μm, a low numerical aperture of 0.05, and a thin inside cladding of cyclic olefin polymer coating for optimum transmission at 532 nm. Because of the mode homogenizing effect and lower power density, the taper-waveguide laser delivery technique ensured high damage threshold for the delivery HW, and as a result, no damage occurred at the maximum measured input laser energy of 33 mJ used in this study.

  16. P-Hint-Hunt: a deep parallelized whole genome DNA methylation detection tool.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Gao, Ming; Liao, Xiangke; Liu, Jie; Yang, Canqun; Wu, Chengkun; Yu, Wenqiang

    2017-03-14

    An increasing number of studies have used whole-genome DNA methylation detection, one of the most important parts of epigenetics research, to find significant relationships between DNA methylation and several typical diseases, such as cancers and diabetes. In many of those studies, mapping bisulfite-treated sequences to the whole genome has been the main method to study DNA cytosine methylation. However, most of today's tools suffer from inaccuracy and long run times. In our study, we designed a new DNA methylation prediction tool ("Hint-Hunt") to solve this problem. Using an optimal complex alignment computation and Smith-Waterman matrix dynamic programming, Hint-Hunt can analyze and predict the DNA methylation status. However, when Hint-Hunt predicts DNA methylation status on large-scale datasets, slow speed and low temporal-spatial efficiency remain problems. In order to solve the problems of Smith-Waterman dynamic programming and low temporal-spatial efficiency, we further designed a deeply parallelized whole-genome DNA methylation detection tool ("P-Hint-Hunt") on the Tianhe-2 (TH-2) supercomputer. To the best of our knowledge, P-Hint-Hunt is the first parallel DNA methylation detection tool with a high speed-up for processing large-scale datasets, and it can run both on CPUs and on Intel Xeon Phi coprocessors. Moreover, we deploy and evaluate Hint-Hunt and P-Hint-Hunt on the TH-2 supercomputer at different scales. The experimental results show that our tools eliminate the deviation caused by bisulfite treatment in the mapping procedure and that the multi-level parallel program yields a 48-fold speed-up with 64 threads. P-Hint-Hunt gains substantial acceleration on the heterogeneous CPU and Intel Xeon Phi platform, exploiting the advantages of both multi-core CPUs and many-core Phi coprocessors.
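
    The Smith-Waterman matrix dynamic programming mentioned above is the standard local-alignment recurrence; a compact, unoptimized, quadratic-memory reference version is sketched here with illustrative scoring parameters, only to show the kernel being parallelized.

    ```python
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Local alignment score by the Smith-Waterman recurrence (score matrix only)."""
        n, m = len(a), len(b)
        H = [[0] * (m + 1) for _ in range(n + 1)]
        best = 0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman("ACACACTA", "AGCACACA"))
    ```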

  17. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.

  18. A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
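
    A hedged sketch of the cascade idea (run one optimizer, pseudorandomly perturb the design variables, hand the result to the next optimizer, keep the best design seen) is given below. Generic SciPy methods stand in for the ten optimizers of the study, and the perturbation scheme and test function are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def cascade_optimize(obj, x0, methods=("Nelder-Mead", "Powell", "BFGS"),
                         perturb=0.05, seed=0):
        """Run several optimizers in sequence, perturbing the design variables
        pseudorandomly between stages, and keep the best design found."""
        rng = np.random.default_rng(seed)
        x, best_x, best_f = np.asarray(x0, float), None, np.inf
        for method in methods:
            res = minimize(obj, x, method=method)
            if res.fun < best_f:
                best_x, best_f = res.x, res.fun
            # pseudorandom perturbation before handing off to the next optimizer
            x = res.x * (1.0 + perturb * rng.standard_normal(res.x.shape))
        return best_x, best_f

    rosen = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    print(cascade_optimize(rosen, [-1.2, 1.0]))
    ```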

  19. Delivery of prazosin hydrochloride from osmotic pump system prepared by coating the core tablet with an indentation.

    PubMed

    Liu, Longxiao; Wang, Jinchao; Zhu, Suyan

    2007-04-01

    The preparation of an osmotic pump tablet was simplified by eliminating laser drilling, using prazosin hydrochloride as the model drug. The osmotic pump system was obtained by coating an indented core tablet compressed with a punch bearing a needle. A multiple regression equation was fitted to the experimental data for the core tablet formulations, and the formulation was then optimized. The influences of the indentation size of the core tablet, the environmental media, and the agitation rate on the drug release profile were investigated. The optimal osmotic pump tablet was found to deliver prazosin hydrochloride at an approximately constant rate for up to 24 hr, independent of both release media and agitation rate. The indentation size of the core tablet hardly affected drug release over the range 0.80-1.15 mm. The method, simplified by the elimination of laser drilling, may be promising for the preparation of osmotic pump tablets.

  20. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.

    PubMed

    Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal

    2010-11-15

    Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories, based on the data structures they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model, and in this case it has an optimal I/O complexity of Θ((n log(n/B))/(B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on the Eulerian approach. Our algorithms for constructing bi-directed de Bruijn graphs are efficient in parallel and out-of-core settings. These algorithms can be used in building large-scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally, our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET.
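
    The bi-directed de Bruijn graph treats a k-mer and its reverse complement as a single node. A small serial sketch of constructing such a graph from reads (canonical k-mers as nodes, (k-1)-overlaps between consecutive k-mers as edges) is given below, purely to fix ideas; it includes none of the parallel or out-of-core machinery of the paper, and k and the reads are placeholders.

    ```python
    from collections import defaultdict

    def revcomp(s):
        return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

    def canonical(kmer):
        """A k-mer and its reverse complement map to the same bi-directed node."""
        return min(kmer, revcomp(kmer))

    def build_bidirected_dbg(reads, k=5):
        """Edges connect canonical nodes of consecutive k-mers in each read."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k):
                u = canonical(read[i:i + k])
                v = canonical(read[i + 1:i + 1 + k])
                graph[u].add(v)
                graph[v].add(u)
        return graph

    g = build_bidirected_dbg(["ACGTACGTGACG", "CGTACGTGACGT"], k=5)
    print(len(g), sum(len(v) for v in g.values()) // 2)
    ```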

  1. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15, a Mach 12, and a Mach 18 helium nozzles. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.

  2. Numerical optimization of three-dimensional coils for NSTX-U

    NASA Astrophysics Data System (ADS)

    Lazerson, S. A.; Park, J.-K.; Logan, N.; Boozer, A.

    2015-10-01

    A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. Comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.

  3. Critical Resolution and Physical Dependencies of Supernovae: Stars in Heat and Under Pressure

    NASA Astrophysics Data System (ADS)

    Vartanyan, David; Burrows, Adam Seth

    2017-01-01

    For over five decades, the mechanism of explosion in core-collapse supernovae has remained one of the last untoppled bastions in astrophysics, presenting both a technical and a physical problem. Motivated by advances in computation and nuclear physics and the resilience of the core-collapse problem, collaborators Adam Burrows (Princeton), Joshua Dolence (LANL), and Aaron Skinner (LNL) have developed FORNAX - a highly parallelizable multidimensional supernova simulation code featuring an explicit hydrodynamic and radiation-transfer solver. We present the results (Vartanyan et al. 2016, Burrows et al. 2016, both in preparation) of a sequence of two-dimensional axisymmetric simulations of core-collapse supernovae using FORNAX, probing both progenitor mass dependence and the effect of physical inputs on explosiveness in our study of the revival of the stalled shock via the neutrino heating mechanism. We also performed a resolution study, testing spatial and energy group resolutions as well as compilation flags. We illustrate that, when the protoneutron star bounded by a stalled shock is close to the critical explosion condition (Burrows & Goshy 1993), small changes of order 10% in neutrino energies and luminosities can result in explosion, and that these effects couple nonlinearly. We show that many-body medium effects in neutrino-nucleon scattering as well as inelastic neutrino-nucleon and neutrino-electron scattering are strongly favorable to earlier and more vigorous explosions by depositing energy in the gain region. Additionally, we probe the effects of a ray-by-ray+ transport solver (which does not include transverse velocity terms) employed by many groups and confirm that it artificially accelerates explosion (see also Skinner et al. 2016). In the coming year, we are gearing up for the first set of 3D simulations yet performed in the context of core-collapse supernovae employing 20 energy groups, and one of the most complete nuclear physics modules in the field, with the ambitious goal of simulating supernova remnants like Cas A. The current environment for core-collapse supernova research provides invigorating optimism that a robust explosion mechanism is within reach on graduate student lifetimes.

  4. Creation of an Upper Stage Trajectory Capability Boundary to Enable Booster System Trade Space Exploration

    NASA Technical Reports Server (NTRS)

    Walsh, Patrick; Coulon, Adam; Edwards, Stephen; Mavris, Dimitri N.

    2012-01-01

    The problem of trajectory optimization is important in all space missions. The solution of this problem enables one to specify the optimum thrust steering program which should be followed to achieve a specified mission objective while simultaneously satisfying the constraints [1]. It is well known that whether or not the ascent trajectory is optimal can have a significant impact on propellant usage for a given payload, or on payload weight for the same gross vehicle weight [2]. Consequently, ascent guidance commands are usually optimized in some fashion. Multi-stage vehicles add complexity to this analysis process, as changes in vehicle properties in one stage propagate to the other stages through gear ratios and changes in the optimal trajectory. These effects can increase analysis time as more variables are added, and convergence of the optimizer to system closure requires more analysis iterations. In this paper, an approach to simplifying this multi-stage problem through the creation of an upper stage capability boundary is presented. This work was completed as part of a larger study focused on trade space exploration for the advanced booster system that will eventually form a part of NASA's new Space Launch System [3]. The approach developed leverages Design of Experiments and Surrogate Modeling [4] techniques to create a predictive model of the SLS upper stage performance. The design of the SLS core stages is considered fixed for the purposes of this study, which leaves trajectory parameters such as staging conditions as the only variables relevant to the upper stage. Through the creation of a surrogate model, which takes staging conditions as inputs and predicts the payload mass delivered by the SLS upper stage to a reference orbit as the response, it is possible to identify a "surface" of staging conditions which all satisfy the SLS requirement of placing 130 metric tons into low-Earth orbit (LEO) [3]. This identified surface represents the 130 metric ton capability boundary for the upper stage, such that if the combined first stage and boosters can achieve any one staging point on that surface, then the design is identified as feasible. With the surrogate model created, design and analysis of advanced booster concepts is streamlined, as optimization of the upper stage trajectory is no longer required in every design loop.
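
    A minimal sketch of the capability-boundary idea described above: fit a response-surface surrogate that maps staging conditions to delivered payload, then trace the contour where the prediction equals 130 t. The upper_stage_payload function, the choice of staging variables, and all coefficients below are hypothetical stand-ins for the trajectory analysis, not the SLS tools or data.

        import numpy as np
        from scipy.optimize import brentq

        def upper_stage_payload(v_stage_km_s, h_stage_km):
            # Hypothetical smooth response: payload (t) grows with staging
            # velocity and altitude; coefficients are illustrative only.
            return 60.0 + 18.0 * v_stage_km_s + 0.12 * h_stage_km

        # Design of experiments: a simple full-factorial grid over staging conditions.
        V, H = np.meshgrid(np.linspace(2.0, 5.0, 15), np.linspace(60.0, 120.0, 15))
        y = upper_stage_payload(V, H).ravel()

        # Quadratic response-surface surrogate fitted by least squares.
        X = np.column_stack([np.ones(y.size), V.ravel(), H.ravel(),
                             V.ravel() ** 2, H.ravel() ** 2, (V * H).ravel()])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        def surrogate(v, h):
            return coef @ np.array([1.0, v, h, v ** 2, h ** 2, v * h])

        # Capability boundary: staging velocity that yields 130 t at each altitude,
        # found by bisection on the surrogate.
        for h in (60.0, 90.0, 120.0):
            v_star = brentq(lambda v: surrogate(v, h) - 130.0, 2.0, 5.0)
            print(f"h = {h:5.1f} km -> staging velocity for 130 t: {v_star:.2f} km/s")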

  5. Low-Velocity Impact Response of Sandwich Beams with Functionally Graded Core

    NASA Technical Reports Server (NTRS)

    Apetre, N. A.; Sankar, B. V.; Ambur, D. R.

    2006-01-01

    The problem of low-speed impact of a one-dimensional sandwich panel by a rigid cylindrical projectile is considered. The core of the sandwich panel is functionally graded such that the density, and hence its stiffness, vary through the thickness. The problem is a combination of a static contact problem and the dynamic response of the sandwich panel, obtained via a simple nonlinear spring-mass model (quasi-static approximation). The variation of core Young's modulus is represented by a polynomial in the thickness coordinate, but the Poisson's ratio is kept constant. The two-dimensional elasticity equations for the plane sandwich structure are solved using a combination of Fourier series and the Galerkin method. The contact problem is solved using the assumed contact stress distribution method. For the impact problem we used a simple dynamic model based on the quasi-static behavior of the panel - the sandwich beam was modeled as a combination of two springs, a linear spring to account for the global deflection and a nonlinear spring to represent the local indentation effects. Results indicate that the contact stiffness of the beam with graded core increases, causing the contact stresses and other stress components in the vicinity of contact to increase. However, the values of maximum strains corresponding to the maximum impact load are reduced considerably due to grading of the core properties. For a better comparison, the thickness of the functionally graded cores was chosen such that the flexural stiffness was equal to that of a beam with a homogeneous core. The results indicate that functionally graded cores can be used effectively to mitigate or completely prevent impact damage in sandwich composites.
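
    A minimal sketch of the two-spring quasi-static impact idealization described above: a linear spring for the global beam deflection in series with a nonlinear spring for the local indentation, driven by a rigid projectile. All parameter values are illustrative assumptions, not the paper's data.

        import numpy as np
        from scipy.optimize import brentq

        m = 0.1          # projectile mass, kg (assumed)
        v0 = 3.0         # impact velocity, m/s (assumed)
        k_g = 2.0e5      # linear global stiffness, N/m (assumed)
        k_c = 5.0e8      # nonlinear local contact stiffness, N/m^1.5 (assumed)

        def contact_force(x):
            # Force transmitted through the two springs in series for projectile
            # displacement x: solve k_c * a**1.5 = k_g * (x - a) for indentation a.
            if x <= 0.0:
                return 0.0
            a = brentq(lambda a: k_c * a ** 1.5 - k_g * (x - a), 0.0, x)
            return k_g * (x - a)

        # Explicit time integration of m * x'' = -F(x) during contact.
        dt, x, v = 1e-6, 0.0, v0
        peak_force = 0.0
        while v > 0.0 or x > 0.0:
            F = contact_force(x)
            peak_force = max(peak_force, F)
            v -= F / m * dt
            x += v * dt
            if x <= 0.0 and v <= 0.0:     # projectile has rebounded and left the panel
                break
        print(f"peak contact force ~ {peak_force:.1f} N")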

  6. Biospecimen Core Resource - TCGA

    Cancer.gov

    The Cancer Genome Atlas (TCGA) Biospecimen Core Resource centralized laboratory reviews and processes blood and tissue samples and their associated data using optimized standard operating procedures for the entire TCGA Research Network.

  7. Blast protection of infrastructure using advanced composites

    NASA Astrophysics Data System (ADS)

    Brodsky, Evan

    This research was a systematic investigation detailing the energy absorption mechanisms of an E-glass web core composite sandwich panel subjected to an impulse loading applied orthogonal to the facesheet. Key roles of the fiberglass and polyisocyanurate foam material were identified, characterized, and analyzed. A quasi-static test fixture was used to compressively load a unit cell web core specimen machined from the sandwich panel. The web and foam both exhibited non-linear stress-strain responses during axial compressive loading. Several analyses showed that the composite web situated in the web core failed in axial compression. Optimization studies were performed on the sandwich panel unit cell in order to maximize the energy absorption capabilities of the web core. Ultimately, a sandwich panel was designed to optimize energy dissipation under through-the-thickness compressive loading.

  8. Entropic One-Class Classifiers.

    PubMed

    Livi, Lorenzo; Sadeghian, Alireza; Pedrycz, Witold

    2015-12-01

    The one-class classification problem is a well-known research endeavor in pattern recognition. The problem is also known under different names, such as outlier and novelty/anomaly detection. The core of the problem consists in modeling and recognizing patterns belonging only to a so-called target class. All other patterns are termed nontarget, and therefore, they should be recognized as such. In this paper, we propose a novel one-class classification system that is based on an interplay of different techniques. Primarily, we follow a dissimilarity representation-based approach; we embed the input data into the dissimilarity space (DS) by means of an appropriate parametric dissimilarity measure. This step allows us to process virtually any type of data. The dissimilarity vectors are then represented by weighted Euclidean graphs, which we use to determine the entropy of the data distribution in the DS and at the same time to derive effective decision regions that are modeled as clusters of vertices. Since the dissimilarity measure for the input data is parametric, we optimize its parameters by means of a global optimization scheme, which considers both mesoscopic and structural characteristics of the data represented through the graphs. The proposed one-class classifier is designed to provide both hard (Boolean) and soft decisions about the recognition of test patterns, allowing an accurate description of the classification process. We evaluate the performance of the system on different benchmarking data sets, containing either feature-based or structured patterns. Experimental results demonstrate the effectiveness of the proposed technique.
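
    A much-simplified sketch of the dissimilarity-representation step described above: samples are embedded by their distances to a set of target-class prototypes, and a test pattern is accepted if it falls inside the region the training data occupy in that space. The graph/entropy machinery and the parametric dissimilarity optimization of the paper are replaced here by a plain quantile threshold on made-up Gaussian data.

        import numpy as np

        rng = np.random.default_rng(0)
        target = rng.normal(0.0, 1.0, size=(200, 5))      # target-class training data (toy)
        prototypes = target[rng.choice(len(target), 10, replace=False)]

        def to_dissimilarity_space(X, prototypes):
            # Each sample becomes a vector of Euclidean distances to the prototypes.
            return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)

        D_train = to_dissimilarity_space(target, prototypes)
        center = D_train.mean(axis=0)
        radius = np.quantile(np.linalg.norm(D_train - center, axis=1), 0.95)

        def is_target(X):
            D = to_dissimilarity_space(X, prototypes)
            return np.linalg.norm(D - center, axis=1) <= radius

        print(is_target(rng.normal(0.0, 1.0, size=(5, 5))))   # mostly True (target-like)
        print(is_target(rng.normal(6.0, 1.0, size=(5, 5))))   # mostly False (nontarget)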

  9. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
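
    A toy illustration of the nested bi-level structure (not the trust-region method of the paper): for each upper-level decision x, the lower level returns its own minimizer y*(x), and the upper level optimizes its objective evaluated at (x, y*(x)). Both objectives are made-up quadratics chosen only so the example runs quickly.

        from scipy.optimize import minimize_scalar

        def lower_level(x):
            # Lower-level problem: min_y (y - x)^2 + y; its minimizer is y*(x) = x - 0.5.
            return minimize_scalar(lambda y: (y - x) ** 2 + y).x

        def upper_level_objective(x):
            y = lower_level(x)
            return (x - 1.0) ** 2 + (y - 1.0) ** 2

        res = minimize_scalar(upper_level_objective, bounds=(-5.0, 5.0), method="bounded")
        print(f"upper-level optimum x ~ {res.x:.3f}, follower response y ~ {lower_level(res.x):.3f}")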

  10. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.

  11. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable to a general trajectory optimization problem. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided, as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF TLI trajectory optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.
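
    A minimal sketch of direct transcription by collocation, with scipy's SLSQP standing in for SNOPT: states and controls are discretized on N nodes, trapezoidal dynamics defects become equality constraints, and control effort is minimized for a double-integrator rest-to-rest maneuver. This is illustrative only and is not the MASTIF/TLI formulation.

        import numpy as np
        from scipy.optimize import minimize

        N, T = 21, 1.0
        h = T / (N - 1)

        def unpack(z):
            return z[:N], z[N:2 * N], z[2 * N:]      # position x, velocity v, control u

        def objective(z):
            _, _, u = unpack(z)
            return h * np.sum(u ** 2)                 # control effort

        def defects(z):
            x, v, u = unpack(z)
            dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])    # x' = v (trapezoidal)
            dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])    # v' = u (trapezoidal)
            bc = [x[0], v[0], x[-1] - 1.0, v[-1]]               # rest-to-rest, x: 0 -> 1
            return np.concatenate([dx, dv, bc])

        z0 = np.zeros(3 * N)
        sol = minimize(objective, z0, constraints={"type": "eq", "fun": defects},
                       method="SLSQP")
        print("converged:", sol.success, " control effort:", round(sol.fun, 3))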

  12. Applying graph partitioning methods in measurement-based dynamic load balancing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha

    Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable-objects-based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores, and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance compared to the METIS partitioners for several cases, both in terms of application execution time and the number of objects migrated.

  13. Synthesis of fluorescent core-shell nanomaterials and strategies to generate white light

    NASA Astrophysics Data System (ADS)

    Singh, Amandeep; Kaur, Ramanjot; Pandey, O. P.; Wei, Xueyong; Sharma, Manoj

    2015-07-01

    In this work, cadmium-free core-shell ZnS:X/ZnS (X = Mn, Cu) nanoparticles have been synthesized and used for white light generation. First, the doping concentration of manganese (Mn) was varied from 1% to 4% to optimize the dopant-related emission, and its optimal value was found to be 1%. Then, a ZnS shell was grown over the ZnS:Mn(1%) core to passivate the surface defects. Similarly, the optimal concentration of copper (Cu) was found to be 0.8% within the range of 0.6% to 1.2%. In order to obtain emission over the whole visible spectrum, dual doping of Mn and Cu was performed in the core and the shell, respectively. A solid-solid mixing in different ratios of separately doped quantum dots (QDs) emitting in the blue-green and the orange regions was performed. Results show that the optimum mixture of QDs excited at 300 nm gives Commission Internationale de l'Éclairage color coordinates of (0.35, 0.36), a high color rendering index of 88, and a correlated color temperature of 4704 K with minimum self-absorption.

  14. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows that the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
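
    A toy policy-iteration loop on a small random MDP, illustrating the solver family used in the paper; the actual WSC state and action spaces are far larger, and the transition and reward data below are random placeholders.

        import numpy as np

        n_states, n_actions, gamma = 4, 2, 0.9
        rng = np.random.default_rng(1)
        P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
        R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # R[s, a]

        policy = np.zeros(n_states, dtype=int)
        while True:
            # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
            P_pi = P[np.arange(n_states), policy]
            R_pi = R[np.arange(n_states), policy]
            V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
            # Policy improvement: act greedily w.r.t. the one-step lookahead values.
            Q = R + gamma * P @ V
            new_policy = Q.argmax(axis=1)
            if np.array_equal(new_policy, policy):
                break
            policy = new_policy

        print("optimal policy:", policy, " state values:", np.round(V, 3))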

  15. A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search

    NASA Astrophysics Data System (ADS)

    Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems. This method is widely used due to its simplicity, and it is known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested in terms of the number of iterations and central processing unit (CPU) time, using MATLAB software on an Intel Core i7-3470 CPU. Numerical experimental results show that the new βk converges rapidly compared to other classical CG methods.
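
    A sketch of a nonlinear CG loop driven by a strong Wolfe line search (scipy's line_search enforces the Wolfe conditions). The beta used below is the classical Fletcher-Reeves coefficient, chosen only for illustration; the paper's new βk formula is not reproduced here, and the Rosenbrock test function is an assumed example.

        import numpy as np
        from scipy.optimize import line_search

        def f(x):       # Rosenbrock test function
            return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

        def grad(x):
            return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                             200.0 * (x[1] - x[0] ** 2)])

        x = np.array([-1.2, 1.0])
        g = grad(x)
        d = -g
        for k in range(200):
            alpha = line_search(f, grad, x, d, gfk=g, c2=0.1)[0]   # strong Wolfe step
            if alpha is None:               # line search failed; restart along steepest descent
                d, alpha = -g, 1e-3
            x = x + alpha * d
            g_new = grad(x)
            beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
            if np.linalg.norm(g) < 1e-6:
                break
        print(f"iterations: {k}, solution: {np.round(x, 6)}")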

  16. [Care continuity for patients with Prader-Willi syndrome during transition from childhood to adulthood].

    PubMed

    Saitoh, Shinji

    2010-01-01

    Prader-Willi syndrome (PWS) is a complex multisystem genetic disorder whose characteristic phenotypes include neonatal hypotonia, hyperphagia resulting in obesity, mental retardation, hypogonadism, and behavioral and psychiatric problems. The diagnosis can be obtained as early as the neonatal period thanks to the development of genetic testing. Clinical features of PWS change with age, although the core phenotypes of hyperphagia, obesity, and psychiatric issues persist throughout life. Therefore, an integrated multidisciplinary approach starting from the neonatal period is mandatory to ensure optimal management and improve lifelong quality of life. For a successful transition from childhood to adulthood, the multidisciplinary team needs to share clinical information and should keep the same policy about food, environment, and psychiatric issues.

  17. Method for depleting BWRs using optimal control rod patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.

  18. Optimizing performance by improving core stability and core strength.

    PubMed

    Hibbs, Angela E; Thompson, Kevin G; French, Duncan; Wrigley, Allan; Spears, Iain

    2008-01-01

    Core stability and core strength have been subject to research since the early 1980s. Research has highlighted benefits of training these processes for people with back pain and for carrying out everyday activities. However, less research has been performed on the benefits of core training for elite athletes and how this training should be carried out to optimize sporting performance. Many elite athletes undertake core stability and core strength training as part of their training programme, despite contradictory findings and conclusions as to their efficacy. This is mainly due to the lack of a gold standard method for measuring core stability and strength when performing everyday tasks and sporting movements. A further confounding factor is that because of the differing demands on the core musculature during everyday activities (low load, slow movements) and sporting activities (high load, resisted, dynamic movements), research performed in the rehabilitation sector cannot be applied to the sporting environment and, subsequently, data regarding core training programmes and their effectiveness on sporting performance are lacking. There are many articles in the literature that promote core training programmes and exercises for performance enhancement without providing a strong scientific rationale of their effectiveness, especially in the sporting sector. In the rehabilitation sector, improvements in lower back injuries have been reported by improving core stability. Few studies have observed any performance enhancement in sporting activities despite observing improvements in core stability and core strength following a core training programme. A clearer understanding of the roles that specific muscles have during core stability and core strength exercises would enable more functional training programmes to be implemented, which may result in a more effective transfer of these skills to actual sporting activities.

  19. Incorporating the Common Core's Problem Solving Standard for Mathematical Practice into an Early Elementary Inclusive Classroom

    ERIC Educational Resources Information Center

    Fletcher, Nicole

    2014-01-01

    Mathematics curriculum designers and policy decision makers are beginning to recognize the importance of problem solving, even at the earliest stages of mathematics learning. The Common Core includes sense making and perseverance in solving problems in its standards for mathematical practice for students at all grade levels. Incorporating problem…

  20. Promoting Access to Common Core Mathematics for Students with Severe Disabilities through Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Spooner, Fred; Saunders, Alicia; Root, Jenny; Brosh, Chelsi

    2017-01-01

    There is a need to teach the pivotal skill of mathematical problem solving to students with severe disabilities, moving beyond basic skills like computation to higher level thinking skills. Problem solving is emphasized as a Standard for Mathematical Practice in the Common Core State Standards across grade levels. This article describes a…

  1. Horticulture Materials for Agricultural Education Programs. Core Agricultural Education Curriculum, Central Cluster.

    ERIC Educational Resources Information Center

    Illinois Univ., Urbana. Office of Agricultural Communications and Education.

    This curriculum guide contains five units with relevant problem areas for horticulture. These problem areas have been selected as suggested areas of study to be included in a core curriculum for secondary students enrolled in an agricultural education program. Each problem area includes some or all of the following components: related problem…

  2. Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2002-01-01

    The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
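
    A bare-bones particle swarm optimizer, shown only to illustrate the algorithm family applied in the paper; the wing MDO test problem, its constraints, and the discrete-variable handling are not reproduced. The noisy sphere function in the usage lines is a made-up stand-in for the numerical noise mentioned above.

        import numpy as np

        def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # particle positions
            v = np.zeros_like(x)                                    # particle velocities
            pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.apply_along_axis(f, 1, x)
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, pbest_val.min()

        # Usage on a noisy sphere function (the noise mimics numerical noise).
        f = lambda z: np.sum(z ** 2) + 1e-3 * np.random.default_rng().random()
        best, val = pso(f, (np.full(5, -5.0), np.full(5, 5.0)))
        print(np.round(best, 3), round(val, 4))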

  3. Sequential estimation of intrinsic activity and synaptic input in single neurons by particle filtering with optimal importance density

    NASA Astrophysics Data System (ADS)

    Closas, Pau; Guillamon, Antoni

    2017-12-01

    This paper deals with the problem of inferring the signals and parameters that cause neural activity to occur. While the ultimate challenge is to unveil the brain's connectivity, here we focus on a microscopic view of the problem, where single neurons (potentially connected to a network of peers) are at the core of our study. The sole observations available are noisy, sampled voltage traces obtained from intracellular recordings. We design algorithms and inference methods using the tools provided by stochastic filtering that allow a probabilistic interpretation and treatment of the problem. Using particle filtering, we are able to reconstruct traces of voltages and estimate the time course of auxiliary variables. By extending the algorithm through PMCMC methodology, we are able to estimate hidden physiological parameters as well, such as intrinsic conductances or reversal potentials. Last, but not least, the method is applied to estimate synaptic conductances arriving at a target cell, thus reconstructing the synaptic excitatory/inhibitory input traces. Notably, the performance of these estimations achieves the theoretical lower bounds even in spiking regimes.

  4. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.
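
    For reference, a small helper showing the makespan criterion that offspring are evaluated against: completion times of a permutation flowshop computed by the standard recursion. The job data are made up for illustration and are not from the paper.

        import numpy as np

        def makespan(perm, proc):
            # proc[j, m] = processing time of job j on machine m.
            n_machines = proc.shape[1]
            C = np.zeros(n_machines)          # completion times on each machine
            for j in perm:
                for m in range(n_machines):
                    C[m] = max(C[m], C[m - 1] if m > 0 else 0.0) + proc[j, m]
            return C[-1]

        proc = np.array([[3, 2, 4], [1, 5, 2], [4, 1, 3], [2, 3, 1]])  # 4 jobs, 3 machines
        print(makespan([0, 1, 2, 3], proc), makespan([1, 3, 0, 2], proc))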

  5. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.

  6. Acceleration of the Particle Swarm Optimization for Peierls-Nabarro modeling of dislocations in conventional and high-entropy alloys

    NASA Astrophysics Data System (ADS)

    Pei, Zongrui; Eisenbach, Markus

    2017-06-01

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation lies in effectively avoiding the local minima in the energy landscape of a dislocation core. Among the available methods for optimizing dislocation core structures, we choose Particle Swarm Optimization, an algorithm that simulates the social behaviors of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), the local minima can be effectively avoided, but this requires more computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.

  7. Variational data assimilation for the initial-value dynamo problem.

    PubMed

    Li, Kuan; Jackson, Andrew; Livermore, Philip W

    2011-11-01

    The secular variation of the geomagnetic field as observed at the Earth's surface results from the complex magnetohydrodynamics taking place in the fluid core of the Earth. One way to analyze this system is to use the data in concert with an underlying dynamical model of the system through the technique of variational data assimilation, in much the same way as is employed in meteorology and oceanography. The aim is to discover an optimal initial condition that leads to a trajectory of the system in agreement with observations. Taking the Earth's core to be an electrically conducting fluid sphere in which convection takes place, we develop the continuous adjoint forms of the magnetohydrodynamic equations that govern the dynamical system together with the corresponding numerical algorithms appropriate for a fully spectral method. These adjoint equations enable a computationally fast iterative improvement of the initial condition that determines the system evolution. The initial condition depends on the three dimensional form of quantities such as the magnetic field in the entire sphere. For the magnetic field, conservation of the divergence-free condition for the adjoint magnetic field requires the introduction of an adjoint pressure term satisfying a zero boundary condition. We thus find that solving the forward and adjoint dynamo system requires different numerical algorithms. In this paper, an efficient algorithm for numerically solving this problem is developed and tested for two illustrative problems in a whole sphere: one is a kinematic problem with prescribed velocity field, and the second is associated with the Hall-effect dynamo, exhibiting considerable nonlinearity. The algorithm exhibits reliable numerical accuracy and stability. Using both the analytical and the numerical techniques of this paper, the adjoint dynamo system can be solved directly with the same order of computational complexity as that required to solve the forward problem. These numerical techniques form a foundation for ultimate application to observations of the geomagnetic field over the time scale of centuries.

  8. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.

  9. Optimal design for crosstalk analysis in 12-core 5-LP mode homogeneous multicore fiber for different lattice structure

    NASA Astrophysics Data System (ADS)

    Kumar, Dablu; Ranjan, Rakesh

    2018-03-01

    12-core 5-LP mode homogeneous multicore fibers have been proposed for the analysis of inter-core crosstalk and dispersion, with four different lattice structures (circular, 2-ring, square lattice, and triangular lattice) having a cladding diameter of 200 μm and a fixed cladding thickness of 35 μm. The core-to-core crosstalk impact has been studied numerically with respect to bending radius, core pitch, transmission distance, wavelength, and core diameter for all 5 LP modes. In anticipation of further reduction in crosstalk levels, trench-assisted cores have been incorporated in all respective designs. Ultra-low crosstalk (-138 dB/100 km) has been achieved through the triangular lattice arrangement, with trench depth Δ2 = -1.40% for the fundamental (LP01) mode. It has been noted that the impact of mode polarization on crosstalk behavior is minor, with the difference in crosstalk levels between two polarized spatial modes being ≤0.2 dB. Moreover, the optimized cladding diameter has been obtained for all 5 LP modes for a target crosstalk value of -50 dB/100 km, with all the core arrangements. The dispersion characteristic has also been analyzed with respect to wavelength and is nearly 2.5 ps/(nm km) at an operating wavelength of 1550 nm. The relative core multiplicity factor (RCMF) for the proposed design is obtained as 64.

  10. Merging spatially variant physical process models under an optimized systems dynamics framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cain, William O.; Lowry, Thomas Stephen; Pierce, Suzanne A.

    The complexity of water resource issues, their interconnectedness to other systems, and the involvement of competing stakeholders often overwhelm decision-makers and inhibit the creation of clear management strategies. While a range of modeling tools and procedures exist to address these problems, they tend to be case specific and generally emphasize either a quantitative and overly analytic approach or a qualitative dialogue-based approach lacking the ability to fully explore the consequences of different policy decisions. The integration of these two approaches is needed to drive toward final decisions and engender effective outcomes. Given these limitations, the Computer Assisted Dispute Resolution system (CADRe) was developed to aid in stakeholder-inclusive resource planning. This modeling and negotiation system uniquely addresses resource concerns by developing a spatially varying system dynamics model as well as innovative global optimization search techniques to maximize outcomes from participatory dialogues. Ultimately, the core system architecture of CADRe also serves as the cornerstone upon which key scientific innovations and challenges can be addressed.

  11. Discussion on teaching reform of environmental planning and management

    NASA Astrophysics Data System (ADS)

    Zhang, Qiugen; Chen, Suhua; Xie, Yu; Wei, Li'an; Ding, Yuan

    2018-05-01

    Environmental planning and management is a course for the environmental engineering major established by the teaching steering committee for environmental science and engineering of the Ministry of Education, and it is a core course for Chinese engineering education professional certification. It plays an important role in cultivating the environmental planning and environmental management abilities of environmental engineering majors. The selection and optimization of the course content are discussed, including updating and optimizing the teaching content and constructing a teaching resource system. The comprehensive application of teaching methods, including the combination of different teaching approaches, is discussed. The combined assessment method is also discussed, including formative (coursework) grades and the final course examination result. Through this comprehensive teaching reform, students' knowledge has been broadened, their agency and autonomy in learning have been enhanced, their interest in learning has been stimulated, and their ability to identify, analyze, and solve problems has been improved. Students' innovative ability and initiative have also been well cultivated.

  12. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, an approximate problem posed on finite dimensional spaces, and a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.

  13. Expanding the scope of health information systems. Challenges and developments.

    PubMed

    Kuhn, K A; Wurst, S H R; Bott, O J; Giuse, D A

    2006-01-01

    To identify current challenges and developments in health information systems, reports on HIS, eHealth, and process support were analyzed, and core problems and challenges were identified. Health information systems are extending their scope towards regional networks and health IT infrastructures. Integration, interoperability, and interaction design are still today's core problems. Additional problems arise through the integration of genetic information into the health care process. There are noticeable trends towards solutions for these problems.

  14. LDRD Final Report: Global Optimization for Engineering Science Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.

    1999-12-01

    For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD "Global Optimization for Engineering Science Problems" was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned, and describes open research issues.

  15. Feasibility of optimizing trimetazidine dihydrochloride release from controlled porosity osmotic pump tablets of directly compressed cores

    PubMed Central

    Habib, Basant A.; Rehim, Randa T. Abd El; Nour, Samia A.

    2013-01-01

    The aim of this study was to develop and optimize Trimetazidine dihydrochloride (TM) controlled porosity osmotic pump (CPOP) tablets of directly compressed cores. A 2³ full factorial design was used to study the influence of three factors namely: PEG400 (10% and 25% based on coating polymer weight), coating level (10% and 20% of tablet core weight) and hole diameter (0 “no hole” and 1 mm). Other variables such as tablet cores, coating mixture of ethylcellulose (4%) and dibutylphthalate (2%) in 95% ethanol and pan coating conditions were kept constant. The responses studied (Yi) were cumulative percentage released after 2 h (Q%2h), 6 h (Q%6h), 12 h (Q%12h) and regression coefficient of release data fitted to zero order equation (RSQzero), for Y1, Y2, Y3, and Y4, respectively. Polynomial equations were used to study the influence of different factors on each response individually. Response surface methodology and multiple response optimization were used to search for an optimized formula. Response variables for the optimized formula were restricted to 10% ⩽ Y1 ⩽ 20%, 40% ⩽ Y2 ⩽ 60%, 80% ⩽ Y3 ⩽ 100%, and Y4 > 0.9. The statistical analysis of the results revealed that PEG400 had positive effects on Q%2h, Q%6h and Q%12h, hole diameter had positive effects on all responses and coating level had positive effect on Q%6h, Q%12h and negative effect on RSQzero. Full three factor interaction (3FI) equations were used for representation of all responses except Q%2h which was represented by reduced (3FI) equation. Upon exploring the experimental space, no formula in the tested range could satisfy the required constraints. Thus, direct compression of TM cores was not suitable for formation of CPOP tablets. Preliminary trials of CPOP tablets with wet granulated cores were promising with an intact membrane for 12 h and high RSQzero. Further improvement of these formulations to optimize TM release will be done in further studies. PMID:25685502

  16. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  17. Research on NC laser combined cutting optimization model of sheet metal parts

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Zhang, Y. L.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    The optimization problem for NC laser combined cutting of sheet metal parts is taken as the research object in this paper. The problem comprises two parts: combined packing optimization and combined cutting-path optimization. For the combined packing optimization, the method of “genetic algorithm + gravity center NFP + geometric transformation” was used to optimize the packing of the sheet metal parts. For the combined cutting-path optimization, a mathematical model of cutting-path optimization was established based on the part-cutting constraint rules of internal-contour priority and cross cutting. The model plays an important role in the optimization calculation of NC laser combined cutting.

  18. On the improvement of blood sample collection at clinical laboratories

    PubMed Central

    2014-01-01

    Background: Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods: A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). After implementing the algorithm in C, it is run and, in a few seconds, it obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters are changed substantially, routes need to be designed only once. Results: The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions: The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
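
    A toy genetic-algorithm route search over randomly placed collection points, illustrating the heuristic family used in the paper; the two-hour time-window and vehicle-capacity constraints, and the real collection-point data, are deliberately omitted.

        import numpy as np

        rng = np.random.default_rng(3)
        points = rng.uniform(0.0, 100.0, size=(15, 2))     # collection-point coordinates (made up)

        def route_length(order):
            p = points[list(order)]
            return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

        def crossover(a, b):
            # Order crossover: keep a slice of parent a, fill the rest in parent b's order.
            i, j = sorted(rng.choice(len(a), 2, replace=False))
            child = [-1] * len(a)
            child[i:j] = a[i:j]
            rest = [g for g in b if g not in child]
            for k in range(len(a)):
                if child[k] == -1:
                    child[k] = rest.pop(0)
            return child

        pop = [list(rng.permutation(len(points))) for _ in range(60)]
        for _ in range(300):
            pop.sort(key=route_length)
            parents = pop[:20]                  # elitist selection
            children = [crossover(parents[rng.integers(20)], parents[rng.integers(20)])
                        for _ in range(40)]
            for c in children:                  # swap mutation
                if rng.random() < 0.3:
                    i, j = rng.choice(len(c), 2, replace=False)
                    c[i], c[j] = c[j], c[i]
            pop = parents + children
        print("best route length:", round(route_length(min(pop, key=route_length)), 1))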

  19. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
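
    A tiny illustration of folding a linear constraint into a QUBO by a quadratic penalty, so that a constrained binary quadratic problem becomes an unconstrained binary quadratic form. This is a generic penalty construction with made-up data, not the paper's constructive PDE-constrained mapping; brute-force enumeration stands in for the AQO hardware.

        import numpy as np
        from itertools import product

        n = 4
        Q_obj = np.array([[ 2., -1.,  0.,  0.],
                          [-1.,  2., -1.,  0.],
                          [ 0., -1.,  2., -1.],
                          [ 0.,  0., -1.,  2.]])                 # quadratic objective (assumed)
        a, b = np.array([1., 1., 1., 1.]), 2.0                    # linear constraint: sum(x) == 2
        rho = 10.0                                                # penalty weight

        # QUBO: x^T Q_obj x + rho * (a^T x - b)^2, expanded using x_i^2 = x_i for binaries.
        Q = Q_obj + rho * np.outer(a, a)
        Q[np.diag_indices(n)] += -2.0 * rho * b * a
        const = rho * b ** 2

        best = min(product([0, 1], repeat=n),
                   key=lambda x: np.array(x) @ Q @ np.array(x) + const)
        print("best binary control:", best, " constraint satisfied:", sum(best) == b)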

  20. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    PubMed

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2017-01-01

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level  leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.

  1. Optimal Price Decision Problem for Simultaneous Multi-article Auction and Its Optimal Price Searching Method by Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Masuda, Kazuaki; Aiyoshi, Eitaro

    We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. The auction problem, originally formulated as a combinatorial problem, determines both whether or not each seller sells his/her article and which article(s) each buyer buys, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.

  2. Techniques for shuttle trajectory optimization

    NASA Technical Reports Server (NTRS)

    Edge, E. R.; Shieh, C. J.; Powers, W. F.

    1973-01-01

    The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.

  3. On l1: Optimal decentralized performance

    NASA Technical Reports Server (NTRS)

    Sourlas, Dennis; Manousiouthakis, Vasilios

    1993-01-01

    In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l1 optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems that have value arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l1 decentralized performance problem is presented. A global optimization approach to the solution of the infinite dimensional approximating problems is also discussed.

  4. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and of the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  5. Time-domain finite elements in optimal control with application to launch-vehicle guidance. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.

    1991-01-01

    A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.

  6. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and both increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.

  7. Gas turbine bucket wall thickness control

    DOEpatents

    Stathopoulos, Dimitrios; Xu, Liming; Lewis, Doyle C.

    2002-01-01

    A core for use in casting a turbine bucket including serpentine cooling passages is divided into two pieces including a leading edge core section and a trailing edge core section. Wall thicknesses at the leading edge and the trailing edge of the turbine bucket can be controlled independent of each other by separately positioning the leading edge core section and the trailing edge core section in the casting die. The controlled leading and trailing edge thicknesses can thus be optimized for efficient cooling, resulting in more efficient turbine operation.

  8. Optimization of the injection molding process for development of high performance calcium oxide -based ceramic cores

    NASA Astrophysics Data System (ADS)

    Zhou, P. P.; Wu, G. Q.; Tao, Y.; Cheng, X.; Zhao, J. Q.; Nan, H.

    2018-02-01

    The binder composition used for ceramic injection molding plays a crucial role in the final properties of the sintered ceramic and in avoiding defects in green parts. In this study, the effects of binder composition on the rheological behavior, microstructures, and mechanical properties of CaO-based ceramic cores were investigated. The optimized dispersant content and solid loading were found to be 1.5 wt% and 84 wt%, respectively. The microstructures, such as porosity, pore size distribution, and grain boundary density, were closely related to the plasticizer content. Decreasing the plasticizer content can enhance the strength of the ceramic cores, though with reduced shrinkage. Meanwhile, the creep resistance of the ceramic cores was enhanced by decreasing the plasticizer content. The flexural strength of the core was found to decrease with increasing porosity, and the improvement in creep resistance is closely related to the decrease in porosity and grain boundary density.

  9. Influence of particle size and shell thickness of core-shell packing materials on optimum experimental conditions in preparative chromatography.

    PubMed

    Horváth, Krisztián; Felinger, Attila

    2015-08-14

    The applicability of core-shell phases in preparative separations was studied by a modeling approach. The preparative separations were optimized for two compounds having bi-Langmuir isotherms. The differential mass balance equation of chromatography was solved by the Rouchon algorithm. The results show that as the size of the core increases, larger particles can be used in separations, resulting in higher applicable flow rates and shorter cycle times. Due to the decreasing volume of the porous layer, however, the loadability of the column dropped significantly. As a result, the productivity and economy of the separation decrease. It is shown that if it is possible to optimize the size of the stationary phase particles for the given separation task, the use of core-shell phases is not beneficial. The use of core-shell phases proved to be advantageous when the goal is to build a preparative column for general purposes (e.g., for purification of different products) in small-scale separations. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Steady induction effects in geomagnetism. Part 1A: Steady motional induction of geomagnetic chaos

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1992-01-01

    Geomagnetic effects of magnetic induction by hypothetically steady fluid motion and steady magnetic flux diffusion near the top of Earth's core are investigated using electromagnetic theory, simple magnetic earth models, and numerical experiments with geomagnetic field models. The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation indicated by broad-scale models of the observed geomagnetic field is examined and solved. In Part 1, the steady surficial core flow estimation problem is solved in the context of the source-free mantle/frozen-flux core model. In the first paper (IA), the theory underlying such estimates is reviewed and some consequences of various kinematic and dynamic flow hypotheses are derived. For a frozen-flux core, fluid downwelling is required to change the mean square normal magnetic flux density averaged over the core-mantle boundary. For surficially geostrophic flow, downwelling implies poleward flow. The solution of the forward steady motional induction problem at the surface of a frozen-flux core is derived and found to be a fine, easily visualized example of deterministic chaos. Geomagnetic effects of statistically steady core surface flow may well dominate secular variation over several decades. Indeed, effects of persistent, if not steady, surficially geostrophic core flow are described which may help explain certain features of the present broad-scale geomagnetic field and perhaps paleomagnetic secular variation.

  11. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility.

  12. A benchmarking tool to evaluate computer tomography perfusion infarct core predictions against a DWI standard.

    PubMed

    Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G

    2016-10-01

    Differences in research methodology have hampered the optimization of Computer Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI positive tissue voxels. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.
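
    The threshold-sweep benchmarking idea can be illustrated with a small voxelwise comparison; the sketch below uses synthetic rCBF and DWI arrays (hypothetical values and a hypothetical voxel volume, not the study's imaging library or software) and reports the volumetric difference, sensitivity, and specificity at a few candidate rCBF thresholds.

      import numpy as np

      def benchmark_threshold(rcbf, dwi_mask, thresholds, voxel_ml=0.008):
          # For each relative-CBF threshold, compare the CTP-predicted core
          # (rcbf < threshold) with a DWI lesion mask (illustrative sketch).
          results = []
          for t in thresholds:
              pred = rcbf < t
              tp = np.sum(pred & dwi_mask)
              fp = np.sum(pred & ~dwi_mask)
              fn = np.sum(~pred & dwi_mask)
              tn = np.sum(~pred & ~dwi_mask)
              vol_diff_ml = (pred.sum() - dwi_mask.sum()) * voxel_ml
              sens = tp / max(tp + fn, 1)
              spec = tn / max(tn + fp, 1)
              results.append((t, vol_diff_ml, sens, spec))
          return results

      # Synthetic example: a "lesion" with low rCBF embedded in normal tissue.
      rng = np.random.default_rng(0)
      rcbf = rng.normal(1.0, 0.15, size=(64, 64, 16))
      dwi_mask = np.zeros_like(rcbf, dtype=bool)
      dwi_mask[20:40, 20:40, 4:10] = True
      rcbf[dwi_mask] = rng.normal(0.3, 0.1, size=dwi_mask.sum())

      for t, dv, se, sp in benchmark_threshold(rcbf, dwi_mask, [0.30, 0.38, 0.45]):
          print(f"rCBF<{t:.2f}: volume diff {dv:+.1f} ml, sens {se:.2f}, spec {sp:.2f}")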

  13. Multiobjective optimization of temporal processes.

    PubMed

    Song, Zhe; Kusiak, Andrew

    2010-06-01

    This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.
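
    A minimal sketch of the preference-aggregation route is shown below; the stand-in process model and weights are invented for illustration (they are not the plant's data-mined equations), and a simple (1+1) evolution strategy plays the role of the evolutionary optimizer.

      import random

      # Stand-in process model (hypothetical): given a control setting u in [0, 1],
      # return (boiler efficiency, limestone consumption).
      def process_model(u):
          efficiency = 0.80 + 0.15 * u - 0.10 * u * u
          limestone = 0.5 + 2.0 * u * u
          return efficiency, limestone

      def aggregated_objective(u, w_eff=0.7, w_lime=0.3):
          eff, lime = process_model(u)
          # Maximize efficiency, minimize limestone: negate the term to be maximized.
          return -w_eff * eff + w_lime * lime

      # Simple (1+1) evolution strategy on the scalarized problem.
      random.seed(0)
      u, step = 0.5, 0.2
      best = aggregated_objective(u)
      for _ in range(200):
          cand = min(1.0, max(0.0, u + random.gauss(0, step)))
          val = aggregated_objective(cand)
          if val < best:
              u, best = cand, val
      print(u, process_model(u))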

  14. Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.

    2010-08-01

    Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi-minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh fitted to subsurface boundaries. Such algorithms represent the current state of the art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence between the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.

  15. Parallel Mutual Information Based Construction of Genome-Scale Networks on the Intel® Xeon Phi™ Coprocessor.

    PubMed

    Misra, Sanchit; Pamnany, Kiran; Aluru, Srinivas

    2015-01-01

    Construction of whole-genome networks from large-scale gene expression data is an important problem in systems biology. While several techniques have been developed, most cannot handle network reconstruction at the whole-genome scale, and the few that can, require large clusters. In this paper, we present a solution on the Intel Xeon Phi coprocessor, taking advantage of its multi-level parallelism including many x86-based cores, multiple threads per core, and vector processing units. We also present a solution on the Intel® Xeon® processor. Our solution is based on TINGe, a fast parallel network reconstruction technique that uses mutual information and permutation testing for assessing statistical significance. We demonstrate the first ever inference of a plant whole genome regulatory network on a single chip by constructing a 15,575 gene network of the plant Arabidopsis thaliana from 3,137 microarray experiments in only 22 minutes. In addition, our optimization for parallelizing mutual information computation on the Intel Xeon Phi coprocessor holds out lessons that are applicable to other domains.
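
    A scaled-down sketch of the core computation is given below: pairwise mutual information from a joint histogram, with the gene-pair loop distributed across worker processes via Python's multiprocessing as a stand-in for the many-core threading described in the record (permutation testing and the TINGe-specific details are omitted; all sizes are toy values).

      import numpy as np
      from multiprocessing import Pool

      def mutual_information(x, y, bins=8):
          # MI (in nats) between two expression profiles via a joint histogram.
          joint, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      def mi_for_pair(args):
          i, j, xi, xj = args
          return i, j, mutual_information(xi, xj)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n_genes, n_experiments = 50, 300      # toy sizes, not the 15,575-gene network
          data = rng.normal(size=(n_genes, n_experiments))
          pairs = [(i, j, data[i], data[j])
                   for i in range(n_genes) for j in range(i + 1, n_genes)]
          with Pool() as pool:
              mi_values = pool.map(mi_for_pair, pairs)
          print(max(mi_values, key=lambda t: t[2]))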

  16. Chemistry in CESM-SE: Evaluation, Performance and Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamarque, Jean-Francois; Conley, Andrew; Vitt, Francis

    2016-01-06

    The proposed work focused on the development of the chemistry representation within the Spectral Element (SE) dynamical core as implemented in the Community Earth System Model (CESM). More specifically, a main focus was on the ability of SE to accurately represent tracer transport. The proposed approach was to incrementally increase the complexity of the problem, starting from specified two-dimensional flow and tracers to simulations using specified dynamics and full chemistry. As demonstrated below, we have successfully studied all aspects of the proposed work, although only part of the work has been published in the refereed literature so far. Furthermore, because the SE dynamical core has been found to have several deficiencies that are still being investigated for solution, not all proposed tasks were finalized. In addition to the tests of SE performance, in an effort to decrease the computational burden of interactive chemistry, especially in the case of a large number of chemical species and chemical reactions, development of a faster chemical solver and its implementation on GPUs has been carried out in CESM under the leadership of John Drake (U. Tennessee).

  17. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  18. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of the optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EA with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EA. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for constrained optimization with EA. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.

  19. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  20. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  1. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  2. Core Today! Rationale and Implications. Revised Edition.

    ERIC Educational Resources Information Center

    Vars, Gordon, Ed.; Larson, Craig, Ed.

    This pamphlet is designed to help educators apply the core concept to current problems and situations in educational settings. The preface establishes the position of the National Association for Core Curriculum. A definition of the core curriculum concept is stated in the introduction. Ten assumptions and beliefs on which the core concept is…

  3. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    library-file.library-unit(.subunit).SYMAP Statement Map: library-file.library-unit(.subunit).SMAP Type Map: library-file.library-unit(.subunit).TMAP The library...generator SYMAP Symbol Map code generator SMAP Updated Statement Map code generator TMAP Type Map code generator A.3.5 The PUNIT Command The PUNIT...Core.Stmtmap) NAME Tmap (Core.Typemap) END Example A-3 Compiler Command Stream for the Code Generator Texas Instruments A-5 Ada Optimizing Compiler

  4. Optimality conditions for the numerical solution of optimization problems with PDE constraints :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

    A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.

  5. FRANOPP: Framework for analysis and optimization problems user's guide

    NASA Technical Reports Server (NTRS)

    Riley, K. M.

    1981-01-01

    Framework for analysis and optimization problems (FRANOPP) is a software aid for the study and solution of design (optimization) problems which provides the driving program and plotting capability for a user-generated programming system. In addition to FRANOPP, the programming system also contains the optimization code CONMIN and two user-supplied codes, one for analysis and one for output. With FRANOPP the user is provided with five options for studying a design problem. Three of the options utilize the plot capability and present an in-depth study of the design problem. The study can be focused on a history of the optimization process or on the interaction of variables within the design problem.

  6. Foam Core Shielding for Spacecraft

    NASA Technical Reports Server (NTRS)

    Adams, Marc

    2007-01-01

    A foam core shield (FCS) system is now being developed to supplant multilayer insulation (MLI) systems heretofore installed on spacecraft for thermal management and protection against meteoroid impacts. A typical FCS system consists of a core sandwiched between a face sheet and a back sheet. The core can consist of any of a variety of low-to-medium-density polymeric or inorganic foams chosen to satisfy application-specific requirements regarding heat transfer and temperature. The face sheet serves to shock and thereby shatter incident meteoroids, and is coated on its outer surface to optimize its absorptance and emittance for regulation of temperature. The back sheet can be dimpled to minimize undesired thermal contact with the underlying spacecraft component and can be metallized on the surface facing the component to optimize its absorptance and emittance. The FCS systems can perform better than do MLI systems, at lower mass and lower cost and with greater volumetric efficiency.

  7. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    NASA Astrophysics Data System (ADS)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which is highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance, and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm using OpenMP parallel computing technology, which was further applied to the optimization process for the HyperSpectral Imager of the `HJ-1' Chinese satellite. The results show that the method based on multi-core parallel computing technology can make full use of multi-core CPU hardware resources and significantly enhance the efficiency of spectrum reconstruction processing. If the technology is applied to a workstation with more cores for parallel computing, it will be possible to complete real-time processing of Fourier transform imaging spectrometer data with a single computer.

  8. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted pareto fronts and a degradation in efficiency for problems with convoluted pareto fronts. The most difficult problems --multi-mode search spaces with a large number of genes and convoluted pareto fronts-- require a large number of function evaluations for GA convergence, but always converge.
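
    The bookkeeping underlying any Pareto-based GA is a dominance test and a non-dominated filter; the short sketch below (minimization assumed, generic code rather than the paper's binning selection or gene-space transformation) shows both.

      def dominates(a, b):
          # True if objective vector a Pareto-dominates b (all objectives minimized).
          return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

      def pareto_front(points):
          # Return the non-dominated subset of a list of objective vectors.
          front = []
          for p in points:
              if not any(dominates(q, p) for q in points if q is not p):
                  front.append(p)
          return front

      # Example: a two-objective population; the front traces its lower-left boundary.
      population = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0), (2.5, 2.5), (5.0, 0.9)]
      print(pareto_front(population))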

  9. Discovery and Mechanistic Study of Benzamide Derivatives That Modulate Hepatitis B Virus Capsid Assembly.

    PubMed

    Wu, Shuo; Zhao, Qiong; Zhang, Pinghu; Kulp, John; Hu, Lydia; Hwang, Nicky; Zhang, Jiming; Block, Timothy M; Xu, Xiaodong; Du, Yanming; Chang, Jinhong; Guo, Ju-Tao

    2017-08-15

    Chronic hepatitis B virus (HBV) infection is a global public health problem. Although the currently approved medications can reliably reduce the viral load and prevent the progression of liver diseases, they fail to cure the viral infection. In an effort toward discovery of novel antiviral agents against HBV, a group of benzamide (BA) derivatives that significantly reduced the amount of cytoplasmic HBV DNA were discovered. The initial lead optimization efforts identified two BA derivatives with improved antiviral activity for further mechanistic studies. Interestingly, similar to our previously reported sulfamoylbenzamides (SBAs), the BAs promote the formation of empty capsids through specific interaction with HBV core protein but not other viral and host cellular components. Genetic evidence suggested that both SBAs and BAs inhibited HBV nucleocapsid assembly by binding to the heteroaryldihydropyrimidine (HAP) pocket between core protein dimer-dimer interfaces. However, unlike SBAs, BA compounds uniquely induced the formation of empty capsids that migrated more slowly in native agarose gel electrophoresis from A36V mutant than from the wild-type core protein. Moreover, we showed that the assembly of chimeric capsids from wild-type and drug-resistant core proteins was susceptible to multiple capsid assembly modulators. Hence, HBV core protein is a dominant antiviral target that may suppress the selection of drug-resistant viruses during core protein-targeting antiviral therapy. Our studies thus indicate that BAs are a chemically and mechanistically unique type of HBV capsid assembly modulators and warranted for further development as antiviral agents against HBV. IMPORTANCE HBV core protein plays essential roles in many steps of the viral replication cycle. In addition to packaging viral pregenomic RNA (pgRNA) and DNA polymerase complex into nucleocapsids for reverse transcriptional DNA replication to take place, the core protein dimers, existing in several different quaternary structures in infected hepatocytes, participate in and regulate HBV virion assembly, capsid uncoating, and covalently closed circular DNA (cccDNA) formation. It is anticipated that small molecular core protein assembly modulators may disrupt one or multiple steps of HBV replication, depending on their interaction with the distinct quaternary structures of core protein. The discovery of novel core protein-targeting antivirals, such as benzamide derivatives reported here, and investigation of their antiviral mechanism may lead to the identification of antiviral therapeutics for the cure of chronic hepatitis B. Copyright © 2017 American Society for Microbiology.

  10. Evolutionary optimization of biopolymers and sequence structure maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reidys, C.M.; Kopp, S.; Schuster, P.

    1996-06-01

    Searching for biopolymers having a predefined function is a core problem of biotechnology, biochemistry and pharmacy. On the level of RNA sequences and their corresponding secondary structures we show that this problem can be analyzed mathematically. The strategy will be to study the properties of the RNA sequence to secondary structure mapping that is essential for the understanding of the search process. We show that to each secondary structure s there exists a neutral network consisting of all sequences folding into s. This network can be modeled as a random graph and has the following generic properties: it is dense and has a giant component within the graph of compatible sequences. The neutral network percolates sequence space and any two neutral nets come close in terms of Hamming distance. We investigate the distribution of the orders of neutral nets and show that above a certain threshold the topology of neutral nets makes it possible to find practically all frequent secondary structures.

  11. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    PubMed

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Depending on the nature of the application, the sensing scenario, and the input/output of the problem, the type of optimization problem changes. To address the different natures of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other, or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered while formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article relating to multi-objective optimization, it should open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks.

  12. From play to problem solving to Common Core: The development of fluid reasoning.

    PubMed

    Prince, Pauline

    2017-01-01

    How and when does fluid reasoning develop and what does it look like at different ages, from a neurodevelopmental and functional perspective? The goal of this article is to discuss the development of fluid reasoning from a practical perspective of our children's lives: from play to problem solving to Common Core Curriculum. A review of relevant and current literature supports a connection between movement, including movement through free play, and the development of novel problem solving. As our children grow and develop, motor routines can become cognitive routines and can be evidenced not only in games, such as chess, but also in the acquisition and demonstration of academic skills. Finally, this article describes the connection between novel problem solving and the demands of the Common Core Curriculum.

  13. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  14. A Kind of Nonlinear Programming Problem Based on Mixed Fuzzy Relation Equations Constraints

    NASA Astrophysics Data System (ADS)

    Li, Jinquan; Feng, Shuang; Mi, Honghai

    In this work, a kind of nonlinear programming problem with a non-differentiable objective function and constraints expressed by a system of mixed fuzzy relation equations is investigated. First, some properties of this kind of optimization problem are obtained. Then, a polynomial-time algorithm for this kind of optimization problem is proposed based on these properties. Furthermore, we show that this algorithm is optimal for the optimization problem considered in this paper. Finally, numerical examples are provided to illustrate our algorithm.

  15. [Agricultural eco-economic system coupling in Zhifanggou watershed in hilly-gully region of Loess Plateau].

    PubMed

    Wang, Ji-Jun

    2009-11-01

    Agricultural eco-economic system coupling is an organic unit formed by the inherent interaction between the agricultural ecosystem and the economic system, regulated and controlled by moderate human interference. Its status can be expressed by the circular chain-net structure of agricultural resources and agricultural industry. The agricultural eco-economic system in Zhifanggou watershed has gone through the process of system coupling, system conflict, system coupling, and partial conflict at high leverage, driven by farmers' requirements and the state's macro-policy, economic means, and administrative means. To cope with the problems of agricultural eco-economic system coupling in Zhifanggou watershed, an optimal coupling model should be established, with tree-grass resources and related industries as the core.

  16. Determining Training Device Requirements in Army Aviation Systems

    NASA Technical Reports Server (NTRS)

    Poumade, M. L.

    1984-01-01

    A decision making methodology which applies the systems approach to the training problem is discussed. Training is viewed as a total system instead of a collection of individual devices and unrelated techniques. The core of the methodology is the use of optimization techniques such as the transportation algorithm and multiobjective goal programming with training task and training device specific data. The role of computers, especially automated data bases and computer simulation models, in the development of training programs is also discussed. The approach can provide significant training enhancement and cost savings over the more traditional, intuitive form of training development and device requirements process. While given from an aviation perspective, the methodology is equally applicable to other training development efforts.

  17. Radiation-tolerant microprocessors in Japanese scientific space vehicles: how to maximize the benefits of commercial SOI technologies

    NASA Astrophysics Data System (ADS)

    Kobayashi, Daisuke; Hirose, Kazuyuki; Saito, Hirobumi

    2013-05-01

    The development of semiconductor devices not only for harsh radiation environments such as space but also for ground-based applications now faces a major hurdle: radiation problems. It is necessary to protect chips from malfunctions caused by sub-nanosecond transient noise induced by radiation. The silicon-on-insulator structure is often suggested as a protection technique, but its use in fact requires devices and circuits carefully optimized to maximize its benefits. Mainly describing theoretical and experimental characterization of the transient effects, this paper presents a comprehensive study of the radiation responses of commercial silicon-on-insulator technologies, which results in a space-use low-power system-on-chip with a 100-MIPS RISC-based core.

  18. Construction and Application of a Refined Hospital Management Chain.

    PubMed

    Lihua, Yi

    2016-01-01

    Large-scale development was quite common in the later period of hospital industrialization in China. Today, Chinese hospital management faces such problems as service inefficiency, high human resource costs, and a low rate of capital use. This study analyzes the refined management chain of Wuxi No.2 People's Hospital. This consists of six gears, namely "organizational structure, clinical practice, outpatient service, medical technology, nursing care and logistics." The gears are based on "flat management system targets, chief of medical staff, centralized outpatient service, intensified medical examinations, vertical nursing management and socialized logistics." The core concepts of refined hospital management are optimizing the flow process, reducing waste, improving efficiency, saving costs, and taking good care of patients as the top priority. Keywords: Hospital, Refined, Management chain

  19. Optimization of rotor shaft shrink fit method for motor using "Robust design"

    NASA Astrophysics Data System (ADS)

    Toma, Eiji

    2018-01-01

    This research is a collaborative investigation with a general-purpose motor manufacturer. To review the construction method used in the production process, we applied the parameter design method of quality engineering and approached the optimization of the construction method. Conventionally, a press-fitting method has been adopted in the process of fitting the rotor core and shaft, which are the main components of the motor, but quality defects such as core-shaft deflection occurred at the time of press fitting. In this research, as a result of the optimization design of a "shrink fitting method by high-frequency induction heating" devised as a new construction method, the method was shown to be feasible, and it was possible to extract the optimum processing conditions.

  20. Optimization design of turbo-expander gas bearing for a 500W helium refrigerator

    NASA Astrophysics Data System (ADS)

    Li, S. S.; Fu, B.; Y Zhang, Q.

    2017-12-01

    The turbo-expander is the core machinery of the helium refrigerator. The bearing, as the supporting element, is the core technology that shapes the design of the turbo-expander. Careful design and performance study of the gas bearing are essential to ensure the stability of the turbo-expander. In this paper, numerical simulation is used to analyze the performance of the gas bearing for a 500W helium refrigerator turbine, and the optimization design of the gas bearing has been completed. The results of the gas bearing optimization provide guidance for the processing technology. Finally, turbine experiments verify that the gas bearing has good performance and ensures the stable operation of the turbine.

  1. Acceleration of the Particle Swarm Optimization for Peierls–Nabarro modeling of dislocations in conventional and high-entropy alloys

    DOE PAGES

    Pei, Zongrui; Max-Planck-Inst. fur Eisenforschung, Duseldorf; Eisenbach, Markus

    2017-02-06

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the methods available to optimize dislocation core structures, we choose the algorithm of Particle Swarm Optimization, an algorithm that simulates the social behaviors of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), the local minima can be effectively avoided, but this requires more computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.
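
    For reference, a generic serial Particle Swarm Optimization loop is sketched below (illustrative only; it optimizes a standard multimodal test function rather than the Peierls-Nabarro energy functional, and swarm size and iteration count are the two knobs discussed above). The per-particle objective evaluations inside the loop are the part that parallelizes naturally.

      import math, random

      def pso(objective, dim, n_particles=30, n_iters=200,
              w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
          rng = random.Random(seed)
          x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
          v = [[0.0] * dim for _ in range(n_particles)]
          pbest = [xi[:] for xi in x]
          pbest_f = [objective(xi) for xi in x]
          g = min(range(n_particles), key=lambda i: pbest_f[i])
          gbest, gbest_f = pbest[g][:], pbest_f[g]
          for _ in range(n_iters):
              for i in range(n_particles):
                  for d in range(dim):
                      r1, r2 = rng.random(), rng.random()
                      v[i][d] = (w * v[i][d]
                                 + c1 * r1 * (pbest[i][d] - x[i][d])
                                 + c2 * r2 * (gbest[d] - x[i][d]))
                      x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
                  f = objective(x[i])   # the expensive step; trivially parallel over particles
                  if f < pbest_f[i]:
                      pbest[i], pbest_f[i] = x[i][:], f
                      if f < gbest_f:
                          gbest, gbest_f = x[i][:], f
          return gbest, gbest_f

      # Rastrigin function as a multimodal stand-in for a dislocation-core energy landscape.
      def rastrigin(x):
          return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) for xi in x)

      print(pso(rastrigin, dim=5))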

  2. Reconstruction of the unknown optimization cost functions from experimental recordings during static multi-finger prehension

    PubMed Central

    Niu, Xun; Terekhov, Alexander V.; Latash, Mark L.; Zatsiorsky, Vladimir M.

    2013-01-01

    The goal of the research is to reconstruct the unknown cost (objective) function(s) presumably used by the neural controller for sharing the total force among individual fingers in multi-finger prehension. The cost function was determined from experimental data by applying the recently developed Analytical Inverse Optimization (ANIO) method (Terekhov et al 2010). The core of the ANIO method is the Theorem of Uniqueness that specifies conditions for unique (with some restrictions) estimation of the objective functions. In the experiment, subjects (n=8) grasped an instrumented handle and maintained it at rest in the air with various external torques, loads, and target grasping forces applied to the object. The experimental data recorded from 80 trials showed a tendency to lie on a 2-dimensional hyperplane in the 4-dimensional finger-force space. Because the constraints in each trial were different, such a propensity is a manifestation of a neural mechanism (not the task mechanics). In agreement with the Lagrange principle for the inverse optimization, the plane of experimental observations was close to the plane resulting from the direct optimization. The latter plane was determined using the ANIO method. The unknown cost function was reconstructed successfully for each performer, as well as for the group data. The cost functions were found to be quadratic with non-zero linear terms. The cost functions obtained with the ANIO method yielded more accurate results than other optimization methods. The ANIO method has an evident potential for addressing the problem of optimization in motor control. PMID:22104742

  3. SOPRA: Scaffolding algorithm for paired reads via statistical optimization.

    PubMed

    Dayarian, Adel; Michael, Todd P; Sengupta, Anirvan M

    2010-06-24

    High throughput sequencing (HTS) platforms produce gigabases of short read (<100 bp) data per run. While these short reads are adequate for resequencing applications, de novo assembly of moderate size genomes from such reads remains a significant challenge. These limitations could be partially overcome by utilizing mate pair technology, which provides pairs of short reads separated by a known distance along the genome. We have developed SOPRA, a tool designed to exploit the mate pair/paired-end information for assembly of short reads. The main focus of the algorithm is selecting a sufficiently large subset of simultaneously satisfiable mate pair constraints to achieve a balance between the size and the quality of the output scaffolds. Scaffold assembly is presented as an optimization problem for variables associated with vertices and with edges of the contig connectivity graph. Vertices of this graph are individual contigs with edges drawn between contigs connected by mate pairs. Similar graph problems have been invoked in the context of shotgun sequencing and scaffold building for previous generation of sequencing projects. However, given the error-prone nature of HTS data and the fundamental limitations from the shortness of the reads, the ad hoc greedy algorithms used in the earlier studies are likely to lead to poor quality results in the current context. SOPRA circumvents this problem by treating all the constraints on equal footing for solving the optimization problem, the solution itself indicating the problematic constraints (chimeric/repetitive contigs, etc.) to be removed. The process of solving and removing of constraints is iterated till one reaches a core set of consistent constraints. For SOLiD sequencer data, SOPRA uses a dynamic programming approach to robustly translate the color-space assembly to base-space. For assessing the quality of an assembly, we report the no-match/mismatch error rate as well as the rates of various rearrangement errors. Applying SOPRA to real data from bacterial genomes, we were able to assemble contigs into scaffolds of significant length (N50 up to 200 Kb) with very few errors introduced in the process. In general, the methodology presented here will allow better scaffold assemblies of any type of mate pair sequencing data.

  4. Fast Optimization for Aircraft Descent and Approach Trajectory

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John

    2017-01-01

    We address the problem of on-line scheduling of the aircraft descent and approach trajectory. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data and (ii) an efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the optimal control toolbox General Pseudospectral Optimal Control Software. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.

  5. Research on cutting path optimization of sheet metal parts based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm was proposed in this paper. The cutting path optimization problem of sheet metal parts was taken as the research object, and the essence and optimization goal of the problem were presented. The traditional serial cutting constraint rule was improved, and a cutting constraint rule that allows cross cutting was proposed. The contour lines of the parts were discretized and a mathematical model of cutting path optimization was established; the problem was thus converted into a selection problem over the contour lines of the parts. The ant colony algorithm was used to solve the problem, and its principle and steps were analyzed.
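
    If the cutting path is simplified to ordering one reference point per contour (ignoring the paper's cross-cutting constraint rule), the ant colony search can be sketched as a small travelling-salesman-style loop; the pheromone update rule and parameter values below are generic choices, not the paper's.

      import math, random

      def aco_order(points, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
          # Order contour reference points to shorten total traverse length
          # (generic ant colony sketch; cutting constraints are not modelled).
          rng = random.Random(seed)
          n = len(points)
          dist = [[math.dist(points[i], points[j]) or 1e-9 for j in range(n)] for i in range(n)]
          tau = [[1.0] * n for _ in range(n)]          # pheromone levels
          best_tour, best_len = None, float("inf")
          for _ in range(n_iters):
              tours = []
              for _ in range(n_ants):
                  tour = [rng.randrange(n)]
                  unvisited = set(range(n)) - {tour[0]}
                  while unvisited:
                      i = tour[-1]
                      cand = list(unvisited)
                      weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                      nxt = rng.choices(cand, weights=weights)[0]
                      tour.append(nxt)
                      unvisited.remove(nxt)
                  length = sum(dist[tour[k]][tour[k + 1]] for k in range(n - 1))
                  tours.append((length, tour))
                  if length < best_len:
                      best_len, best_tour = length, tour
              tau = [[(1.0 - rho) * t for t in row] for row in tau]     # evaporation
              for length, tour in tours:                                 # deposit
                  for k in range(n - 1):
                      tau[tour[k]][tour[k + 1]] += 1.0 / length
          return best_tour, best_len

      rng = random.Random(1)
      contours = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(12)]
      print(aco_order(contours))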

  6. Optimizing zonal advection of the Advanced Research WRF (ARW) dynamics for Intel MIC

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecasting (WRF) model is the most widely used community weather forecast and research model in the world. There are two distinct varieties of WRF. The Advanced Research WRF (ARW) is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we use the Intel Many Integrated Core (MIC) architecture to substantially increase the performance of a zonal advection subroutine, one of the most time-consuming routines in the ARW dynamics core. Advection advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture. Furthermore, lessons learned from the code optimization process are discussed. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 2.4x.

  7. A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    This paper presents an architecture template for next-generation high performance computing systems specifically targeted to irregular applications. We start our work by considering that future generation interconnection and memory bandwidth full-system numbers are expected to grow by a factor of 10. In order to keep up with such a communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we also show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve the maximum scalability. We propose a technique based on memory reference aggregation, together with the related hardware implementation, as one such optimization technique. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results prove the benefits provided by both the multi-core approach and the bandwidth-optimizing reference aggregation technique.

  8. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  9. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

    Complexities in river discharge, variable rainfall regime, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions, single reservoir, and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem. The global solution equals 1.213 for this same problem. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in fewer number of functional evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
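
    A compact sketch of the GSA update loop is given below (minimization, generic parameter values; not the reservoir-operation formulation): fitness values are mapped to masses, every agent is accelerated toward the others in proportion to their masses, and the gravitational constant decays over the iterations.

      import math, random

      def gsa(objective, dim, n_agents=20, n_iters=200, g0=100.0, alpha=20.0,
              lo=-5.0, hi=5.0, seed=0):
          rng = random.Random(seed)
          x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
          v = [[0.0] * dim for _ in range(n_agents)]
          best_x, best_f = None, float("inf")
          for t in range(n_iters):
              fit = [objective(xi) for xi in x]
              if min(fit) < best_f:
                  best_f = min(fit)
                  best_x = x[fit.index(best_f)][:]
              worst, best = max(fit), min(fit)
              # Better (lower) fitness -> larger normalized mass.
              m = [(worst - fi) / (worst - best + 1e-12) + 1e-12 for fi in fit]
              M = [mi / sum(m) for mi in m]
              G = g0 * math.exp(-alpha * t / n_iters)   # gravitational "constant" decays
              for i in range(n_agents):
                  acc = [0.0] * dim
                  for j in range(n_agents):
                      if i == j:
                          continue
                      R = math.dist(x[i], x[j]) + 1e-12
                      for d in range(dim):
                          acc[d] += rng.random() * G * M[j] * (x[j][d] - x[i][d]) / R
                  for d in range(dim):
                      v[i][d] = rng.random() * v[i][d] + acc[d]
                      x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
          return best_x, best_f

      sphere = lambda x: sum(xi * xi for xi in x)
      print(gsa(sphere, dim=4))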

  10. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed at each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continue to evolve to represent real production systems accurately. Since flow shop scheduling is an NP-hard problem, metaheuristics are the most suitable solution methods. One such metaheuristic is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, PSO must be modified to fit it. The modification is done using a probability transition matrix mechanism, and the multiple objectives are handled with a Pareto-optimal approach (MPSO). The results of MPSO are better than those of plain PSO because the MPSO solution set has a higher probability of containing the optimal solution and lies closer to it.
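
    The Pareto-dominance filtering that underlies a multi-objective (MPSO) variant can be sketched in a few lines of Python; the schedules and objective values (makespan, total tardiness, total idle time) below are placeholders.

        # Minimal sketch of Pareto-dominance filtering over (makespan, tardiness, idle time).
        def dominates(a, b):
            """a dominates b if it is no worse in every objective and better in at least one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(solutions):
            """solutions: list of (schedule, objective_tuple); returns the non-dominated subset."""
            front = []
            for s, obj in solutions:
                if not any(dominates(other, obj) for _, other in solutions if other != obj):
                    front.append((s, obj))
            return front

        if __name__ == "__main__":
            candidates = [("s1", (120, 30, 15)), ("s2", (110, 45, 10)),
                          ("s3", (130, 30, 20)), ("s4", (110, 40, 10))]
            print([s for s, _ in pareto_front(candidates)])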

  11. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

    This paper presents a genetic algorithm (GA)-based solution to co-optimize test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit heuristic based bin packing algorithm has been used to determine the placement of rectangles minimizing the overall test times, whereas the GA has been utilized to generate the sequence of rectangles to be considered for placement. Experimental results on ITC'02 benchmark SOCs show that the proposed method provides better solutions than recent works reported in the literature.
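
    A simplified Python sketch of the best-fit placement step is given below: given a GA-supplied ordering of core "rectangles" (width in TAM channels, height in test time), each rectangle is placed on the contiguous group of channels that becomes free earliest. The core names and sizes are hypothetical, and the actual heuristic in the paper is more elaborate.

        # Illustrative best-fit placement of (core, TAM width, test time) rectangles.
        def place_rectangles(order, total_tam):
            finish = [0.0] * total_tam            # current finishing time of every TAM channel
            schedule = []
            for core, width, test_time in order:
                # best contiguous window of `width` channels = the one that frees up earliest
                best_start, best_time = None, None
                for s in range(total_tam - width + 1):
                    start_time = max(finish[s:s + width])
                    if best_time is None or start_time < best_time:
                        best_start, best_time = s, start_time
                for ch in range(best_start, best_start + width):
                    finish[ch] = best_time + test_time
                schedule.append((core, best_start, best_time, best_time + test_time))
            return schedule, max(finish)          # placements and overall test time

        if __name__ == "__main__":
            # A GA would evolve this ordering; here it is simply given.
            cores = [("c1", 2, 100), ("c2", 3, 80), ("c3", 1, 150), ("c4", 2, 60)]
            plan, makespan = place_rectangles(cores, total_tam=4)
            print(plan, "overall test time:", makespan)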

  12. Optimization of Benzoisothiazole dioxide inhibitory activity of the NS5B polymerase of HCV genotype 4 using ligand-steered homological modeling, reaction-driven scaffold-hopping and Enovo workflow.

    PubMed

    Mahmoud, Amr Hamed; Mohamed Abouzid, Khaled Abouzid; El Ella, Dalal Abd El Rahman Abou; Hamid Ismail, Mohamed Abdel

    2011-01-01

    Infection caused by hepatitis C virus (HCV) is a significant world health problem for which novel therapies are in urgent demand. The virus is highly prevalent in the Middle East and Africa, particularly Egypt, with more than 90% of infections due to genotype 4. Nonstructural (NS5B) viral proteins have emerged as an attractive target for HCV antiviral discovery. A potent class of inhibitors with a benzisothiazole dioxide scaffold has been identified against this target; however, they were mainly active on genotype 1 while exhibiting much lower activity on other genotypes due to the high degree of mutation of the binding site. Based on this fact, we employed a novel strategy to optimize this class on genotype 4. This strategy depends on using a refined ligand-steered homology model of this genotype to study the mutation binding energies of the binding-site amino acid residues and the essential features for interaction, and to provide a structure-based pharmacophore model that can aid optimization. This model was applied to a focused library generated using a reaction-driven scaffold-hopping strategy. The hits retrieved were subjected to the Enovo Pipeline Pilot optimization workflow that employs R-group enumeration, core-constrained protein docking using modified CDOCKER and finally ranking of poses using an accurate molecular mechanics generalized Born with surface area method.

  13. Optimization of deformation monitoring networks using finite element strain analysis

    NASA Astrophysics Data System (ADS)

    Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.

    2018-04-01

    An optimal design of a geodetic network can fulfill the requested precision and reliability of the network, and decrease the expenses of its execution by removing unnecessary observations. The role of an optimal design is highlighted in deformation monitoring networks due to the repeated re-measurement of these networks. The core design problem is how to define precision and reliability criteria. This paper proposes a solution, where the precision criterion is defined based on the precision of deformation parameters, i.e. the precision of strain and differential rotations. A strain analysis can be performed to obtain some information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of deformation parameters in each element, the precision criterion of displacement detection at each network point is then determined. The developed criterion is implemented to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddles the Tornquist zone, which is one of the most active faults in southern Sweden. The numerical results show that 17 out of all 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
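
    As a simplified illustration of the element-wise strain analysis, the Python sketch below recovers the constant strain of a single two-dimensional triangular element from the displacements of its vertices; the paper itself works with three-dimensional elements obtained from a Delaunay triangulation.

        # Strain of a 2-D constant-strain triangle from vertex displacements (illustrative).
        import numpy as np

        def triangle_strain(coords, disp):
            """coords: 3x2 vertex coordinates; disp: 3x2 vertex displacements (u, v).
            Returns (eps_xx, eps_yy, gamma_xy)."""
            A = np.column_stack([np.ones(3), coords])        # linear shape: [1, x, y]
            grad_u = np.linalg.solve(A, disp[:, 0])          # u = a + b*x + c*y
            grad_v = np.linalg.solve(A, disp[:, 1])
            eps_xx, eps_yy = grad_u[1], grad_v[2]
            gamma_xy = grad_u[2] + grad_v[1]
            return eps_xx, eps_yy, gamma_xy

        if __name__ == "__main__":
            xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
            uv = np.array([[0.0, 0.0], [1e-3, 0.0], [0.0, -5e-4]])
            print(triangle_strain(xy, uv))   # ~ (1e-3, -5e-4, 0.0)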

  14. Constraint Optimization Literature Review

    DTIC Science & Technology

    2015-11-01

    Report documentation excerpt. Subject terms: high-performance computing, mobile ad hoc network, optimization, constraint satisfaction. The review covers constraint satisfaction problems, constraint optimization problems, and constraint optimization algorithms, including brute-force search, constraint propagation, depth-first search, and local search.

  15. Exploring performance and energy tradeoffs for irregular applications: A case study on the Tilera many-core architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.

    High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, memory hierarchy and on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application’s behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT’s OpenTuner auto-tuning framework to explore and recommend energy optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
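
    The structure of such an auto-tuning loop can be sketched as below (Python). The configuration space mirrors the three dimensions named above, but the measurement function is a toy stand-in so the sketch runs on its own; in the study itself the search is driven by OpenTuner and the measurements come from compiling and running the real applications.

        # Illustrative tuning loop over memory layout, compiler flags and OpenMP schedule.
        import itertools

        LAYOUTS = ["aos", "soa"]
        FLAGS = ["-O2", "-O3", "-O3 -funroll-loops"]
        SCHEDULES = ["static", "dynamic,64", "guided"]

        def measure(layout, flags, schedule):
            # Placeholder for: compile with `flags`, set OMP_SCHEDULE, run, read time/energy.
            cost = 10.0
            cost -= 1.5 if layout == "soa" else 0.0
            cost -= 0.5 * flags.count("-O3") + 0.3 * ("funroll" in flags)
            cost -= 0.7 if schedule.startswith("dynamic") else 0.0
            return cost

        if __name__ == "__main__":
            best = min(
                ((measure(l, f, s), l, f, s)
                 for l, f, s in itertools.product(LAYOUTS, FLAGS, SCHEDULES)),
                key=lambda c: c[0],
            )
            print("best configuration:", best)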

  16. Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks

    Treesearch

    Bistra Dilkina; Rachel Houtman; Carla P. Gomes; Claire A. Montgomery; Kevin S. McKelvey; Katherine Kendall; Tabitha A. Graves; Richard Bernstein; Michael K. Schwartz

    2016-01-01

    Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a...

  17. Structural Analysis and Optimization of a Composite Fan Blade for Future Aircraft Engine

    NASA Astrophysics Data System (ADS)

    Coroneos, Rula M.; Gorla, Rama Subba Reddy

    2012-09-01

    This paper addresses the structural analysis and optimization of a composite sandwich ply lay-up of a NASA baseline solid metallic fan blade comparable to a future Boeing 737 MAX aircraft engine. Sandwich construction with a polymer matrix composite face sheet and honeycomb aluminum core replaces the original baseline solid metallic fan model made of titanium. The focus of this work is to design the sandwich composite blade with the optimum number of plies for the face sheet that will withstand the combined pressure and centrifugal loads while the constraints are satisfied and the baseline aerodynamic and geometric parameters are maintained. To satisfy the requirements, a sandwich construction for the blade is proposed with composite face sheets and a weak core made of honeycomb aluminum material. For aerodynamic considerations, the thickness of the core is optimized whereas the overall blade thickness is held fixed in order not to alter the original airfoil geometry. Weight reduction is taken as the objective function by varying the core thickness of the blade within specified upper and lower bounds. Constraints are imposed on radial displacement limitations and ply failure strength. From the optimum design, the minimum number of plies, which will not fail, is back-calculated. The ply lay-up of the blade is adjusted from the calculated number of plies and final structural analysis is performed. Analyses were carried out by utilizing the OpenMDAO Framework, developed at NASA Glenn Research Center, combining optimization with structural assessment.

  18. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762

  19. Design of an integral thermal protection system for future space vehicles

    NASA Astrophysics Data System (ADS)

    Bapanapalli, Satish Kumar

    Thermal protection systems (TPS) are the features incorporated into a spacecraft's design to protect it from severe aerodynamic heating during high-speed travel through planetary atmospheres. The ablative TPS on the space capsule Apollo and ceramic tiles and blankets on the Space Shuttle Orbiter were designed as add-ons to the main load-bearing structure of the vehicles. They are usually incompatible with the structure due to a mismatch in the coefficient of thermal expansion, and as a result the robustness of the external surface of the spacecraft is compromised. This could potentially lead to catastrophic consequences because the TPS forms the external surface of the vehicle and is subjected to numerous other loads like aerodynamic pressure loads, small object high-speed impacts and handling damage during maintenance. In order to make the spacecraft external surface robust, an Integral Thermal Protection System (ITPS) concept has been proposed in this research in which the load-bearing structure and the TPS are combined into one single structure. The design of an ITPS is a formidable task because the requirements of a load-bearing structure and a TPS are often contradictory to one another. The design process has been formulated as an optimization problem with the mass per unit area of the ITPS as the objective function and the various functional requirements of the ITPS formulated as constraints. This is a multidisciplinary design optimization problem involving heat transfer and structural analysis fields. The constraints were expressed as response surface approximations obtained from a large number of finite element analyses, which were carried out with combinations of design variables obtained from an optimized Latin-Hypercube sampling scheme. A MATLAB® code has been developed to carry out these FE analyses automatically in conjunction with ABAQUS®. Corrugated-core structures were designed for ITPS applications with loads and boundary conditions similar to those of a Space Shuttle-like vehicle. Temperature, buckling, deflection and stress constraints were considered for the design process. An optimized mass ranging between 3.5 and 5 lb/ft² was achieved by the design. This is considerably heavier when compared to conventional TPS designs. However, the ITPS can withstand substantially larger mechanical loads when compared to the conventional TPS. Truss-core geometries used for ITPS design in this research were found to be unsuitable as they could not withstand large thermal gradients frequently encountered in ITPS applications. The corrugated-core design was used for further studying the influence of the various input parameters on the final design weight of the ITPS. It was observed that boundary conditions not only significantly influence the ITPS design but also have a major impact on the effect of various input parameters. It was found that even a small amount of heat loss from the bottom face sheet leads to significant reduction in ITPS weight. Aluminum and beryllium are the most suitable materials for the bottom face sheet, with beryllium having considerable advantages in terms of heat capacity, stiffness and density. Although ceramic matrix composites have many superior properties when compared to metal alloys (titanium alloys and Inconel), their low tensile strength presents difficulties in ITPS applications.
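
    The surrogate-construction step (Latin-hypercube sampling of design variables followed by a quadratic response-surface fit) can be sketched in Python as below; the bounds and the stand-in response function are illustrative, whereas the study obtains responses from ABAQUS finite element runs.

        # Latin-hypercube sampling plus a quadratic response-surface fit (illustrative).
        import numpy as np

        def latin_hypercube(n_samples, bounds, rng):
            d = len(bounds)
            strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
            u = (strata + rng.random((n_samples, d))) / n_samples
            lo, hi = np.array(bounds).T
            return lo + u * (hi - lo)

        def quadratic_design_matrix(X):
            d = X.shape[1]
            cols = [np.ones(len(X))]
            cols += [X[:, i] for i in range(d)]
            cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
            return np.column_stack(cols)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            bounds = [(2.0, 8.0), (0.5, 2.0)]        # e.g. core thickness, face-sheet thickness
            X = latin_hypercube(30, bounds, rng)
            # stand-in for the finite element response (e.g. peak temperature or stress)
            y = 1.2 * X[:, 0] + 3.0 * X[:, 1] + 0.1 * X[:, 0] * X[:, 1]
            coeffs, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
            print("response-surface coefficients:", np.round(coeffs, 3))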

  20. The comparison of numerical models of a sandwich panel in the context of the core deformations at the supports

    NASA Astrophysics Data System (ADS)

    Pozorska, Jolanta; Pozorski, Zbigniew

    2018-01-01

    The paper presents the problem of static structural behavior of sandwich panels at the supports. The panels have a soft core and correspond to typical structures applied in civil engineering. To analyze the problem, five different 3-D numerical models were created. The results were compared in the context of core compression and stress redistribution. The numerical solutions verify methods of evaluating the capacity of the sandwich panel that are known from the literature.

  1. Multiobjective optimization approach: thermal food processing.

    PubMed

    Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R

    2009-01-01

    The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field.
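
    A minimal Python sketch of this approach, optimizing a weighted-sum aggregating function with a simple adaptive random search, is given below; the two objective functions are toy stand-ins rather than the thermal processing model of the paper.

        # Weighted-sum aggregation minimized by a simple adaptive random search (illustrative).
        import numpy as np

        def aggregate(x, weights, objectives):
            return sum(w * f(x) for w, f in zip(weights, objectives))

        def adaptive_random_search(cost, x0, iters=2000, sigma=1.0, seed=0):
            rng = np.random.default_rng(seed)
            x, best = np.asarray(x0, float), cost(x0)
            for _ in range(iters):
                cand = x + rng.normal(0, sigma, size=x.shape)
                c = cost(cand)
                if c < best:                    # accept improvements and widen the search
                    x, best, sigma = cand, c, sigma * 1.1
                else:                           # otherwise shrink the search radius
                    sigma = max(sigma * 0.95, 1e-3)
            return x, best

        if __name__ == "__main__":
            f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2      # e.g. quality-retention proxy
            f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2      # e.g. processing-time proxy
            cost = lambda x: aggregate(x, (0.5, 0.5), (f1, f2))
            print(adaptive_random_search(cost, [5.0, 5.0]))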

  2. Constructing probabilistic scenarios for wide-area solar power generation

    DOE PAGES

    Woodruff, David L.; Deride, Julio; Staid, Andrea; ...

    2017-12-22

    Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions. Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.

  3. Parallelization of TWOPORFLOW, a Cartesian Grid based Two-phase Porous Media Code for Transient Thermo-hydraulic Simulations

    NASA Astrophysics Data System (ADS)

    Trost, Nico; Jiménez, Javier; Imke, Uwe; Sanchez, Victor

    2014-06-01

    TWOPORFLOW is a thermo-hydraulic code based on a porous media approach to simulate single- and two-phase flow including boiling. It is under development at the Institute for Neutron Physics and Reactor Technology (INR) at KIT. The code features a 3D transient solution of the mass, momentum and energy conservation equations for two inter-penetrating fluids with a semi-implicit continuous Eulerian type solver. The application domain of TWOPORFLOW includes the flow in standard porous media and in structured porous media such as micro-channels and cores of nuclear power plants. In the latter case, the fluid domain is coupled to a fuel rod model, describing the heat flow inside the solid structure. In this work, detailed profiling tools have been utilized to determine the optimization potential of TWOPORFLOW. As a result, bottle-necks were identified and reduced in the most feasible way, leading for instance to an optimization of the water-steam property computation. Furthermore, an OpenMP implementation addressing the routines in charge of inter-phase momentum-, energy- and mass-coupling delivered good performance together with a high scalability on shared memory architectures. In contrast to that, the approach for distributed memory systems was to solve sub-problems resulting by the decomposition of the initial Cartesian geometry. Thread communication for the sub-problem boundary updates was accomplished by the Message Passing Interface (MPI) standard.

  4. Constructing probabilistic scenarios for wide-area solar power generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodruff, David L.; Deride, Julio; Staid, Andrea

    Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions. Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.

  5. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

    Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.

  6. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
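
    The value-iteration core of such an approach can be sketched on a toy composition MDP as follows (Python); states, services, transition probabilities, and QoS-based rewards are all illustrative.

        # Value iteration on a toy Web service composition MDP (illustrative numbers).
        # transitions[s][a] = list of (probability, next_state, reward)
        transitions = {
            0: {"svcA": [(0.9, 1, 5.0), (0.1, 0, -1.0)],
                "svcB": [(1.0, 1, 3.0)]},
            1: {"svcC": [(0.8, 2, 10.0), (0.2, 1, -1.0)],
                "svcD": [(1.0, 2, 6.0)]},
            2: {},                                  # terminal: composition complete
        }

        def value_iteration(gamma=0.95, tol=1e-6):
            V = {s: 0.0 for s in transitions}
            while True:
                delta = 0.0
                for s, actions in transitions.items():
                    if not actions:
                        continue
                    best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                               for outcomes in actions.values())
                    delta = max(delta, abs(best - V[s]))
                    V[s] = best
                if delta < tol:
                    return V

        if __name__ == "__main__":
            print(value_iteration())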

  7. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
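
    The primal-dual pair discussed above can be written schematically as follows (the notation is illustrative: J denotes the empirical return covariance matrix, m the expected returns, e the all-ones vector, N the number of assets, R the required return per asset and epsilon the admitted risk per asset):

        \begin{align*}
        \text{(Primal)}\quad
          &\min_{\vec{w}\in\mathbb{R}^{N}} \; \tfrac{1}{2}\vec{w}^{\mathsf T}J\vec{w}
          &&\text{s.t.}\quad \vec{w}^{\mathsf T}\vec{e}=N,\;\; \vec{w}^{\mathsf T}\vec{m}=NR,\\[2pt]
        \text{(Dual)}\quad
          &\max_{\vec{w}\in\mathbb{R}^{N}} \; \vec{w}^{\mathsf T}\vec{m}
          &&\text{s.t.}\quad \vec{w}^{\mathsf T}\vec{e}=N,\;\; \tfrac{1}{2}\vec{w}^{\mathsf T}J\vec{w}=N\varepsilon.
        \end{align*}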

  8. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  9. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open new lines of research for further application of the algorithm to real-world problems.

  10. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open new lines of research for further application of the algorithm to real-world problems. PMID:25147860

  11. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
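
    A schematic Python sketch of the two-stage idea (a classical evolutionary search narrows the landscape, after which a second solver refines the result; here a simple stochastic local search stands in for the quantum annealing/QMC stage) is shown below with a toy objective in place of the satellite-positioning cost.

        # Two-stage hybrid search: coarse evolutionary narrowing, then local refinement.
        import numpy as np

        def toy_cost(x):
            # toy stand-in for the satellite-positioning objective
            return np.sum((x - 0.7) ** 2) + 0.1 * np.sum(np.sin(12 * x) ** 2)

        def coarse_ga(cost, dim=4, pop=40, gens=60, seed=0):
            rng = np.random.default_rng(seed)
            P = rng.uniform(-2, 2, (pop, dim))
            for _ in range(gens):
                fit = np.apply_along_axis(cost, 1, P)
                parents = P[np.argsort(fit)[: pop // 2]]                   # truncation selection
                children = parents + rng.normal(0, 0.2, parents.shape)     # Gaussian mutation
                P = np.vstack([parents, children])
            return P[np.argmin(np.apply_along_axis(cost, 1, P))]

        def refine(cost, x, iters=500, step=0.05, seed=1):
            rng = np.random.default_rng(seed)
            best = cost(x)
            for _ in range(iters):
                cand = x + rng.normal(0, step, x.shape)
                if cost(cand) < best:
                    x, best = cand, cost(cand)
            return x, best

        if __name__ == "__main__":
            x0 = coarse_ga(toy_cost)
            print(refine(toy_cost, x0))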

  12. Candidate molten salt investigation for an accelerator driven subcritical core

    NASA Astrophysics Data System (ADS)

    Sooby, E.; Baty, A.; Beneš, O.; McIntyre, P.; Pogue, N.; Salanne, M.; Sattarov, A.

    2013-09-01

    We report a design for accelerator-driven subcritical fission in a molten salt core (ADSMS) that utilizes a fuel salt composed of NaCl and transuranic (TRU) chlorides. The ADSMS core is designed for fast neutronics (28% of neutrons >1 MeV) to optimize TRU destruction. The choice of a NaCl-based salt offers benefits for corrosion, operating temperature, and actinide solubility as compared with LiF-based fuel salts. A molecular dynamics (MD) code has been used to estimate properties of the molten salt system which are important for ADSMS design but have never been measured experimentally. Results from the MD studies are reported. Experimental measurements of fuel salt properties and studies of corrosion and radiation damage on candidate metals for the core vessel are anticipated. A special thanks is due to Prof. Paul Madden for introducing the ADSMS group to the concept of using the molten salt as the spallation target, rather than a conventional heavy metal spallation target. This feature helps to optimize this core as a Pu/TRU burner.

  13. Elastic stability of cylindrical shells with soft elastic cores: Biomimicking natural tubular structures

    NASA Astrophysics Data System (ADS)

    Karam, Gebran Nizar

    1994-01-01

    Thin walled cylindrical shell structures are widespread in nature: examples include plant stems, porcupine quills, and hedgehog spines. All have an outer shell of almost fully dense material supported by a low density, cellular core. In nature, all are loaded in combination of axial compression and bending: failure is typically by buckling. Natural structures are often optimized. Here we have analyzed the elastic buckling of a thin cylindrical shell supported by an elastic core to show that this structural configuration achieves significant weight saving over a hollow cylinder. The results of the analysis are compared with data from an extensive experimental program on uniaxial compression and four point bending tests on silicone rubber shells with and without compliant foam cores. The analysis describes the results of the mechanical tests well. Characterization of the microstructures of several natural tubular structures with foamlike cores (plant stems, quills, and spines) revealed them to be close to the optimal configurations predicted by the analytical model. Biomimicking of natural cylindrical shell structures and evolutionary design processes may offer the potential to increase the mechanical efficiency of engineering cylindrical shells.

  14. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Keeping in view the nature of the application, the sensing scenario and input/output of the problem, the type of optimization problem changes. To address the different natures of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other, or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks, which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of different constraints which are considered while formulating the optimization problems in wireless sensor networks. Keeping in view the multi-faceted coverage of this article relating to multi-objective optimization, it will open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks. PMID:26205271

  15. Cost component analysis.

    PubMed

    Lörincz, András; Póczos, Barnabás

    2003-06-01

    In optimization, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem to density estimation. The structure of the induced density is then explored by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems and (ii) separating (partitioning) the original optimization problem into subproblems may serve interpretation. Most importantly, (iii) CCA may give rise to high gains in optimization time. Numerical simulations illustrate the working of the algorithm.

  16. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are then compared in the context of the two design problems.

  17. An introductory pharmacy practice experience based on a medication therapy management service model.

    PubMed

    Agness, Chanel F; Huynh, Donna; Brandt, Nicole

    2011-06-10

    To implement and evaluate an introductory pharmacy practice experience (IPPE) based on the medication therapy management (MTM) service model. Patient Care 2 is an IPPE that introduces third-year pharmacy students to the MTM service model. Students interacted with older adults to identify medication-related problems and develop recommendations using core MTM elements. Course outcome evaluations were based on number of documented medication-related problems, recommendations, and student reviews. Fifty-seven older adults participated in the course. Students identified 52 medication-related problems and 66 medical problems, and documented 233 recommendations relating to health maintenance and wellness, pharmacotherapy, referrals, and education. Students reported having adequate experience performing core MTM elements. Patient Care 2 may serve as an experiential learning model for pharmacy schools to teach the core elements of MTM and provide patient care services to the community.

  18. OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation

    NASA Astrophysics Data System (ADS)

    Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun

    2017-11-01

    We present Open Multi-Processing (OpenMP) version of Fortran 90 programs for solving the Gross-Pitaevskii (GP) equation for a Bose-Einstein condensate in one, two, and three spatial dimensions, optimized for use with GNU and Intel compilers. We use the split-step Crank-Nicolson algorithm for imaginary- and real-time propagation, which enables efficient calculation of stationary and non-stationary solutions, respectively. The present OpenMP programs are designed for computers with multi-core processors and optimized for compiling with both commercially-licensed Intel Fortran and popular free open-source GNU Fortran compiler. The programs are easy to use and are elaborated with helpful comments for the users. All input parameters are listed at the beginning of each program. Different output files provide physical quantities such as energy, chemical potential, root-mean-square sizes, densities, etc. We also present speedup test results for new versions of the programs. Program files doi:http://dx.doi.org/10.17632/y8zk3jgn84.2 Licensing provisions: Apache License 2.0 Programming language: OpenMP GNU and Intel Fortran 90. Computer: Any multi-core personal computer or workstation with the appropriate OpenMP-capable Fortran compiler installed. Number of processors used: All available CPU cores on the executing computer. Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1888; ibid.204 (2016) 209. Does the new version supersede the previous version?: Not completely. It does supersede previous Fortran programs from both references above, but not OpenMP C programs from Comput. Phys. Commun. 204 (2016) 209. Nature of problem: The present Open Multi-Processing (OpenMP) Fortran programs, optimized for use with commercially-licensed Intel Fortran and free open-source GNU Fortran compilers, solve the time-dependent nonlinear partial differential (GP) equation for a trapped Bose-Einstein condensate in one (1d), two (2d), and three (3d) spatial dimensions for six different trap symmetries: axially and radially symmetric traps in 3d, circularly symmetric traps in 2d, fully isotropic (spherically symmetric) and fully anisotropic traps in 2d and 3d, as well as 1d traps, where no spatial symmetry is considered. Solution method: We employ the split-step Crank-Nicolson algorithm to discretize the time-dependent GP equation in space and time. The discretized equation is then solved by imaginary- or real-time propagation, employing adequately small space and time steps, to yield the solution of stationary and non-stationary problems, respectively. Reasons for the new version: Previously published Fortran programs [1,2] have now become popular tools [3] for solving the GP equation. These programs have been translated to the C programming language [4] and later extended to the more complex scenario of dipolar atoms [5]. Now virtually all computers have multi-core processors and some have motherboards with more than one physical computer processing unit (CPU), which may increase the number of available CPU cores on a single computer to several tens. The C programs have been adopted to be very fast on such multi-core modern computers using general-purpose graphic processing units (GPGPU) with Nvidia CUDA and computer clusters using Message Passing Interface (MPI) [6]. Nevertheless, previously developed Fortran programs are also commonly used for scientific computation and most of them use a single CPU core at a time in modern multi-core laptops, desktops, and workstations. 
Unless the Fortran programs are made aware and capable of making efficient use of the available CPU cores, the solution of even a realistic dynamical 1d problem, not to mention the more complicated 2d and 3d problems, could be time consuming using the Fortran programs. Previously, we published auto-parallel Fortran programs [2] suitable for Intel (but not GNU) compiler for solving the GP equation. Hence, a need for the full OpenMP version of the Fortran programs to reduce the execution time cannot be overemphasized. To address this issue, we provide here such OpenMP Fortran programs, optimized for both Intel and GNU Fortran compilers and capable of using all available CPU cores, which can significantly reduce the execution time. Summary of revisions: Previous Fortran programs [1] for solving the time-dependent GP equation in 1d, 2d, and 3d with different trap symmetries have been parallelized using the OpenMP interface to reduce the execution time on multi-core processors. There are six different trap symmetries considered, resulting in six programs for imaginary-time propagation and six for real-time propagation, totaling to 12 programs included in BEC-GP-OMP-FOR software package. All input data (number of atoms, scattering length, harmonic oscillator trap length, trap anisotropy, etc.) are conveniently placed at the beginning of each program, as before [2]. Present programs introduce a new input parameter, which is designated by Number_of_Threads and defines the number of CPU cores of the processor to be used in the calculation. If one sets the value 0 for this parameter, all available CPU cores will be used. For the most efficient calculation it is advisable to leave one CPU core unused for the background system's jobs. For example, on a machine with 20 CPU cores such that we used for testing, it is advisable to use up to 19 CPU cores. However, the total number of used CPU cores can be divided into more than one job. For instance, one can run three simulations simultaneously using 10, 4, and 5 CPU cores, respectively, thus totaling to 19 used CPU cores on a 20-core computer. The Fortran source programs are located in the directory src, and can be compiled by the make command using the makefile in the root directory BEC-GP-OMP-FOR of the software package. The examples of produced output files can be found in the directory output, although some large density files are omitted, to save space. The programs calculate the values of actually used dimensionless nonlinearities from the physical input parameters, where the input parameters correspond to the identical nonlinearity values as in the previously published programs [1], so that the output files of the old and new programs can be directly compared. The output files are conveniently named such that their contents can be easily identified, following the naming convention introduced in Ref. [2]. For example, a file named -out.txt, where is a name of the individual program, represents the general output file containing input data, time and space steps, nonlinearity, energy and chemical potential, and was named fort.7 in the old Fortran version of programs [1]. A file named -den.txt is the output file with the condensate density, which had the names fort.3 and fort.4 in the old Fortran version [1] for imaginary- and real-time propagation programs, respectively. 
Other possible density outputs, such as the initial density, are commented out in the programs to have a simpler set of output files, but users can uncomment and re-enable them, if needed. In addition, there are output files for reduced (integrated) 1d and 2d densities for different programs. In the real-time programs there is also an output file reporting the dynamics of evolution of root-mean-square sizes after a perturbation is introduced. The supplied real-time programs solve the stationary GP equation, and then calculate the dynamics. As the imaginary-time programs are more accurate than the real-time programs for the solution of a stationary problem, one can first solve the stationary problem using the imaginary-time programs, adapt the real-time programs to read the pre-calculated wave function and then study the dynamics. In that case the parameter NSTP in the real-time programs should be set to zero and the space mesh and nonlinearity parameters should be identical in both programs. The reader is advised to consult our previous publication where a complete description of the output files is given [2]. A readme.txt file, included in the root directory, explains the procedure to compile and run the programs. We tested our programs on a workstation with two 10-core Intel Xeon E5-2650 v3 CPUs. The parameters used for testing are given in sample input files, provided in the corresponding directory together with the programs. In Table 1 we present wall-clock execution times for runs on 1, 6, and 19 CPU cores for programs compiled using Intel and GNU Fortran compilers. The corresponding columns "Intel speedup" and "GNU speedup" give the ratio of wall-clock execution times of runs on 1 and 19 CPU cores, and denote the actual measured speedup for 19 CPU cores. In all cases and for all numbers of CPU cores, although the GNU Fortran compiler gives excellent results, the Intel Fortran compiler turns out to be slightly faster. Note that during these tests we always ran only a single simulation on a workstation at a time, to avoid any possible interference issues. Therefore, the obtained wall-clock times are more reliable than the ones that could be measured with two or more jobs running simultaneously. We also studied the speedup of the programs as a function of the number of CPU cores used. The performance of the Intel and GNU Fortran compilers is illustrated in Fig. 1, where we plot the speedup and actual wall-clock times as functions of the number of CPU cores for 2d and 3d programs. We see that the speedup increases monotonically with the number of CPU cores in all cases and has large values (between 10 and 14 for 3d programs) for the maximal number of cores. This fully justifies the development of OpenMP programs, which enable much faster and more efficient solving of the GP equation. However, a slow saturation in the speedup with the further increase in the number of CPU cores is observed in all cases, as expected. The speedup tends to increase for programs in higher dimensions, as they become more complex and have to process more data. This is why the speedups of the supplied 2d and 3d programs are larger than those of 1d programs. Also, for a single program the speedup increases with the size of the spatial grid, i.e., with the number of spatial discretization points, since this increases the amount of calculations performed by the program. To demonstrate this, we tested the supplied real2d-th program and varied the number of spatial discretization points NX=NY from 20 to 1000. 
The measured speedup obtained when running this program on 19 CPU cores as a function of the number of discretization points is shown in Fig. 2. The speedup first increases rapidly with the number of discretization points and eventually saturates. Additional comments: Example inputs provided with the programs take less than 30 minutes to run on a workstation with two Intel Xeon E5-2650 v3 processors (2 QPI links, 10 CPU cores, 25 MB cache, 2.3 GHz).
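
    A compact Python sketch of the split-step Crank-Nicolson scheme itself (imaginary-time propagation of the 1d GP equation in a harmonic trap) is given below; the grid size, time step and nonlinearity are illustrative, and the distributed Fortran/OpenMP package is of course far more complete.

        # Split-step Crank-Nicolson, imaginary-time 1d GP equation (illustrative parameters).
        import numpy as np

        N, L = 256, 16.0
        dx, dt, g = L / N, 1e-3, 10.0
        x = (np.arange(N) - N / 2) * dx
        V = 0.5 * x**2                                   # harmonic trap

        # Kinetic operator K = -(1/2) d^2/dx^2 on the grid, and its Crank-Nicolson step matrix
        K = (np.diag(np.full(N, 1.0 / dx**2))
             - 0.5 / dx**2 * (np.eye(N, k=1) + np.eye(N, k=-1)))
        A = np.eye(N) + 0.5 * dt * K                     # implicit half
        B = np.eye(N) - 0.5 * dt * K                     # explicit half
        step_mat = np.linalg.solve(A, B)                 # precomputed A^{-1} B

        psi = np.exp(-x**2 / 2)
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

        for _ in range(5000):
            psi = np.exp(-dt * (V + g * np.abs(psi)**2)) * psi   # non-derivative part, exact step
            psi = step_mat @ psi                                  # Crank-Nicolson kinetic step
            psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)           # renormalize (imaginary time)

        mu = np.sum(np.conj(psi) * (K @ psi + (V + g * np.abs(psi)**2) * psi)).real * dx
        print("chemical potential ~", round(float(mu), 4))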

  19. The optimal location of piezoelectric actuators and sensors for vibration control of plates

    NASA Astrophysics Data System (ADS)

    Kumar, K. Ramesh; Narayanan, S.

    2007-12-01

    This paper considers the optimal placement of collocated piezoelectric actuator-sensor pairs on a thin plate using a model-based linear quadratic regulator (LQR) controller. LQR performance is taken as objective for finding the optimal location of sensor-actuator pairs. The problem is formulated using the finite element method (FEM) as multi-input-multi-output (MIMO) model control. The discrete optimal sensor and actuator location problem is formulated in the framework of a zero-one optimization problem. A genetic algorithm (GA) is used to solve the zero-one optimization problem. Different classical control strategies like direct proportional feedback, constant-gain negative velocity feedback and the LQR optimal control scheme are applied to study the control effectiveness.
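
    The use of LQR performance as a placement objective can be illustrated with the short Python/SciPy sketch below: for each candidate actuator location the algebraic Riccati equation is solved and trace(P) is compared across candidates. The two-mode model is a toy stand-in for the plate finite element model of the paper.

        # Compare candidate actuator placements by their LQR cost measure trace(P).
        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_cost(A, B, Q, R):
            P = solve_continuous_are(A, B, Q, R)
            return np.trace(P)            # smaller trace(P) -> better closed-loop performance

        if __name__ == "__main__":
            # two lightly damped modes; a candidate placement changes how the actuator couples in
            A = np.block([[np.zeros((2, 2)), np.eye(2)],
                          [-np.diag([4.0, 9.0]), -0.02 * np.eye(2)]])
            Q, R = np.eye(4), np.array([[1.0]])
            candidates = {"loc1": np.array([[0.0], [0.0], [1.0], [0.2]]),
                          "loc2": np.array([[0.0], [0.0], [0.3], [1.0]])}
            costs = {name: lqr_cost(A, B, Q, R) for name, B in candidates.items()}
            print(min(costs, key=costs.get), costs)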

  20. Tiled architecture of a CNN-mostly IP system

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    2009-05-01

    Multi-core architectures have been popularized with the advent of the IBM CELL. On a finer grain the problems in scheduling multi-cores have already existed in the tiled architectures, such as the EPIC and Da Vinci. It is not easy to evaluate the performance of a schedule on such architecture as historical data are not available. One solution is to compile algorithms for which an optimal schedule is known by analysis. A typical example is an algorithm that is already defined in terms of many collaborating simple nodes, such as a Cellular Neural Network (CNN). A simple node with a local register stack together with a 'rotating wheel' internal communication mechanism has been proposed. Though the basic CNN allows for a tiled implementation of a tiled algorithm on a tiled structure, a practical CNN system will have to disturb this regularity by the additional need for arithmetical and logical operations. Arithmetic operations are needed for instance to accommodate for low-level image processing, while logical operations are needed to fork and merge different data streams without use of the external memory. It is found that the 'rotating wheel' internal communication mechanism still handles such mechanisms without the need for global control. Overall the CNN system provides for a practical network size as implemented on a FPGA, can be easily used as embedded IP and provides a clear benchmark for a multi-core compiler.

  1. Automating the generation of finite element dynamical cores with Firedrake

    NASA Astrophysics Data System (ADS)

    Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas

    2017-04-01

    The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the programme is run, Firedrake generates high performance C code for the resulting numerics which are executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation: A vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows. High aspect ratio layered meshes suitable for ocean and atmosphere domains. Curved elements for high accuracy representations of the sphere. Support for non-finite element operators, such as parametrisations. Access to PETSc, a world-leading library of programmable linear and nonlinear solvers. High performance adjoint models generated automatically by symbolically reasoning about the forward model. This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
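
    As a flavour of the high-level specification style that Firedrake enables (this example is not taken from the poster), a Poisson problem can be stated directly in weak form as below; running it requires a Firedrake installation, and minor API details vary between Firedrake versions.

        # A minimal Firedrake-style weak-form specification of a Poisson problem (illustrative).
        from firedrake import *   # the idiom used throughout the Firedrake documentation

        mesh = UnitSquareMesh(32, 32)
        V = FunctionSpace(mesh, "CG", 1)          # continuous piecewise-linear elements
        u, v = TrialFunction(V), TestFunction(V)

        x, y = SpatialCoordinate(mesh)
        f = Function(V)
        f.interpolate(sin(pi * x) * sin(pi * y))  # source term

        a = dot(grad(u), grad(v)) * dx            # bilinear form of the Laplacian
        L = f * v * dx
        bc = DirichletBC(V, 0.0, "on_boundary")

        u_h = Function(V, name="solution")
        solve(a == L, u_h, bcs=[bc])              # Firedrake generates and runs the solver code
        print(u_h.dat.data.max())                 # peak of the discrete solution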

  2. Exploring the quantum speed limit with computer games

    NASA Astrophysics Data System (ADS)

    Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.

    2016-04-01

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.

  3. Exploring the quantum speed limit with computer games.

    PubMed

    Sørensen, Jens Jakob W H; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F

    2016-04-14

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. 'Gamification'--the application of game elements in a non-game context--is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.

  4. Kalman Filter Tracking on Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2015-12-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques, including Cellular Automata or returning to the Hough Transform. The most common track finding techniques in use today are, however, those based on the Kalman Filter [2]. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are exactly those being used today for the design of the tracking system for HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with the Kalman Filter can achieve large speedups on both Intel Xeon and Xeon Phi. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.
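
    The core linear algebra of a Kalman filter step is compact enough to sketch. The example below is a generic predict/update pass in NumPy with an illustrative constant-velocity model; it is not the vectorized, detector-specific implementation the authors describe.

```python
# Generic Kalman filter predict/update step (illustrative 1-D constant-velocity model).
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    # Predict the state and covariance forward one step.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new measurement z.
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# State = [position, velocity]; only the position is measured.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:                    # hypothetical noisy hit positions
    x, P = kf_step(x, P, np.array([z]), F, Q, H, R)
print(x)                                          # estimated position and velocity
```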

  5. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
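
    A minimal sketch of the nonlinear conjugate gradient idea used for the parameter-optimization step is given below, on a generic smooth objective. The Fletcher-Reeves update and the crude fixed-step line search are illustrative simplifications of what a practical multimethod algorithm would use.

```python
# Nonlinear conjugate gradient (Fletcher-Reeves) on a generic smooth objective.
import numpy as np

def fletcher_reeves(grad, p0, step=1e-2, iters=200):
    p = np.asarray(p0, dtype=float)
    g = grad(p)
    d = -g                                         # initial search direction
    for _ in range(iters):
        p = p + step * d                           # crude fixed step instead of a line search
        g_new = grad(p)
        beta = (g_new @ g_new) / (g @ g + 1e-30)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return p

# Example: minimize a simple quadratic in two control parameters.
grad = lambda p: np.array([2.0 * (p[0] - 1.0), 4.0 * (p[1] + 2.0)])
print(fletcher_reeves(grad, [0.0, 0.0]))           # approaches [1, -2]
```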

  6. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that the problems can only be resolved by increasingly smarter problem specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offers an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  7. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
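
    A basic single-objective DE/rand/1/bin generation step is sketched below; the Pareto-based extension described in the paper adds non-dominated selection on top of a loop of this kind. The bounds, F and CR settings, and the sphere test function are illustrative.

```python
# One generation of basic DE/rand/1/bin (single-objective, illustrative settings).
import numpy as np

def de_generation(pop, fitness, f_obj, rng, F=0.5, CR=0.9):
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])          # differential mutation
        cross = rng.random(d) < CR                       # binomial crossover mask
        cross[rng.integers(d)] = True                    # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = f_obj(trial)
        if f_trial <= fitness[i]:                        # greedy one-to-one selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit

# Example: minimize the sphere function in 3 dimensions.
rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x**2))
pop = rng.uniform(-5.0, 5.0, size=(20, 3))
fit = np.array([sphere(p) for p in pop])
for _ in range(100):
    pop, fit = de_generation(pop, fit, sphere, rng)
print(fit.min())
```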

  8. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered. The optimal control search problem is analysed. Xenon poisoning causes limitations on the variety of statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls. The level of xenon poisoning is limited. There is a problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered. Two estimates, as functions of the xenon limitation, were plotted. The boundaries of the interval of averaging are defined more precisely.

  9. Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems

    DTIC Science & Technology

    2007-05-01

    these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the... translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to... [The remainder of this record is residue from a tool-flow figure: algorithms in MATLAB/C code compiled through logic synthesis and place-and-route (EDK) to a bitstream, targeting hard and soft processor cores.]

  10. Robust optimization modelling with applications to industry and environmental problems

    NASA Astrophysics Data System (ADS)

    Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman

    2017-10-01

    Robust Optimization (RO) modeling is one of the existing methodologies for handling data uncertainty in optimization problems. The main challenge in the RO methodology is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By its definition, the robust counterpart depends strongly on how we choose the uncertainty set; as a consequence, this challenge can be met only if the set is chosen in a suitable way. Development of RO has been rapid: since 2004, a new approach called Adjustable Robust Optimization (ARO) has been introduced to handle uncertain problems in which some decision variables must be decided as "wait and see" variables, in contrast to classic RO, which models decision variables as "here and now". In ARO, the uncertain problem can be considered a multistage decision problem, and the decision variables involved become wait-and-see variables. In this paper we present applications of both RO and ARO. We present all results briefly, to strengthen the importance of RO and ARO in many real-life problems.

  11. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
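
    The augmented-Lagrangian idea behind a matrix-free optimizer can be sketched for an equality-constrained problem using only gradient and Jacobian-transpose-vector callbacks, so that no constraint Jacobian is ever formed explicitly. The plain gradient-descent inner solver, penalty value and toy problem below are illustrative assumptions, not the optimizer developed in the thesis.

```python
# Augmented Lagrangian sketch for min f(x) s.t. c(x) = 0, using only
# callback-style products (no explicit Hessian or Jacobian matrices).
import numpy as np

def aug_lag(f_grad, c, c_jtvp, x0, rho=10.0, outer=20, inner=200, step=1e-2):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(c(x))                      # multiplier estimates
    for _ in range(outer):
        for _ in range(inner):                     # crude inner gradient descent
            g = f_grad(x) + c_jtvp(x, lam + rho * c(x))
            x = x - step * g
        lam = lam + rho * c(x)                     # multiplier update
    return x, lam

# Example: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0.
f_grad = lambda x: 2.0 * x
c = lambda x: np.array([x[0] + x[1] - 1.0])
c_jtvp = lambda x, v: np.array([v[0], v[0]])       # J(x)^T v for this constraint
x, lam = aug_lag(f_grad, c, c_jtvp, np.zeros(2))
print(x)                                           # approaches [0.5, 0.5]
```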

  12. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  13. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  14. Optimize out-of-core thermionic energy conversion for nuclear electric propulsion

    NASA Technical Reports Server (NTRS)

    Morris, J. F.

    1977-01-01

    Current designs for out of core thermionic energy conversion (TEC) to power nuclear electric propulsion (NEP) were evaluated. Approaches to improve out of core TEC are emphasized and probabilities for success are indicated. TEC gains are available with higher emitter temperatures and greater power densities. Good potentialities for accommodating external high temperature, high power density TEC with heat pipe cooled reactors exist.

  15. An optimized full-configuration-interaction nuclear orbital approach to a "hard-core" interaction problem: Application to (³He)N-Cl2(B) clusters (N ≤ 4)

    NASA Astrophysics Data System (ADS)

    de Lara-Castells, M. P.; Villarreal, P.; Delgado-Barrio, G.; Mitrushchenkov, A. O.

    2009-11-01

    An efficient full-configuration-interaction nuclear orbital treatment has been recently developed as a benchmark quantum-chemistry-like method to calculate ground and excited "solvent" energies and wave functions in small doped (³He)N clusters (N ≤ 4) [M. P. de Lara-Castells, G. Delgado-Barrio, P. Villarreal, and A. O. Mitrushchenkov, J. Chem. Phys. 125, 221101 (2006)]. Additional methodological and computational details of the implementation, which uses an iterative Jacobi-Davidson diagonalization algorithm to properly address the inherent "hard-core" He-He interaction problem, are described here. The convergence of total energies, average pair He-He interaction energies, and relevant one- and two-body properties upon increasing the angular part of the one-particle basis set (expanded in spherical harmonics) has been analyzed, considering Cl2 as the dopant and a semiempirical model (T-shaped) He-Cl2(B) potential. Converged results are used to analyze global energetic and structural aspects as well as the configuration makeup of the wave functions associated with the ground and low-lying "solvent" excited states. Our study reveals that, besides the fermionic nature of ³He atoms, key roles in determining total binding energies and wave-function structures are played by the strong repulsive core of the He-He potential as well as its very weak attractive region, the most stable arrangement somehow departing from the one of N He atoms equally spaced on an equatorial "ring" around the dopant. The present results for N = 4 fermions indicate the structural "pairing" of two ³He atoms at opposite sides on a broad "belt" around the dopant, executing a sort of asymmetric umbrella motion. This pairing is a compromise between maximizing the ³He-³He and He-dopant attractions and suppressing at the same time the "hard-core" repulsion. Although the He-He attractive interaction is rather weak, its contribution to the total energy is found to scale as a power of three, and it thus increasingly affects the pair density distributions as the cluster grows in size.

  16. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  17. A general optimality criteria algorithm for a class of engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Belegundu, Ashok D.

    2015-05-01

    An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems as occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with large number of variables.
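
    The flavour of such a fixed-point resizing update can be seen on a small separable stand-in problem, minimizing a linear cost subject to a single monotone reciprocal constraint; the data and the square-root resizing rule below are illustrative, not the general algorithm of the paper.

```python
# Optimality-criteria style fixed-point resizing for the separable problem
#   min  sum(c_i * x_i)   s.t.   sum(a_i / x_i) <= g_max,   x_i > 0
# (an illustrative stand-in for weight vs. a displacement-type constraint).
import numpy as np

def oc_resize(c, a, g_max, x0, iters=10):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        B = (a / x**2) / c                 # ratio of constraint to cost sensitivities
        y = x * np.sqrt(B)                 # resized variables before scaling
        x = y * (np.sum(a / y) / g_max)    # scale so the constraint is active
    return x

c = np.array([1.0, 2.0, 4.0])              # cost (e.g. weight) coefficients
a = np.array([3.0, 1.0, 0.5])              # constraint coefficients
x = oc_resize(c, a, g_max=2.0, x0=np.ones(3))
print(x, np.sum(a / x))                    # constraint sits at g_max
```

    Each update applies the same elementwise formula to every variable, which illustrates why the cost per resizing pass does not grow with the number of design variables in the way a general NLP step would.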

  18. Property-based design: optimization and characterization of polyvinyl alcohol (PVA) hydrogel and PVA-matrix composite for artificial cornea.

    PubMed

    Jiang, Hong; Zuo, Yi; Zhang, Li; Li, Jidong; Zhang, Aiming; Li, Yubao; Yang, Xiaochao

    2014-03-01

    Each approach to artificial cornea design is toward the same goal: to develop a material that best mimics the important properties of the natural cornea. Accordingly, the selection and optimization of a corneal substitute should be based on its physicochemical properties. In this study, three types of polyvinyl alcohol (PVA) hydrogels with different polymerization degrees (PVA1799, PVA2499 and PVA2699) were prepared by freeze-thawing techniques. After characterization in terms of transparency, water content, water contact angle, mechanical properties, root-mean-square roughness and protein adsorption behavior, the optimized PVA2499 hydrogel, with properties similar to those of natural cornea, was selected as a matrix material for artificial cornea. Based on this, a biomimetic artificial cornea was fabricated with a core-and-skirt structure: a transparent PVA hydrogel core surrounded by a ringed PVA-matrix composite skirt composed of graphite, Fe-doped nano-hydroxyapatite (n-Fe-HA) and PVA hydrogel. Different ratios of graphite/n-Fe-HA can tune the skirt color from dark brown to light brown, which well simulates the iris color of Oriental eyes. Moreover, morphologic and mechanical examination showed that an integrated core-and-skirt artificial cornea was formed from an interpenetrating polymer network, with no phase separation on the interface between the core and the skirt.

  19. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi is a high performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, getting good performance required utilizing multiple cores, the wide vector operations and efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.

  20. Event Reconstruction for Many-core Architectures using Java

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Norman A.; /SLAC

    Although Moore's Law remains technically valid, the performance enhancements in computing which traditionally resulted from increased CPU speeds ended years ago. Chip manufacturers have chosen to increase the number of core CPUs per chip instead of increasing clock speed. Unfortunately, these extra CPUs do not automatically result in improvements in simulation or reconstruction times. To take advantage of this extra computing power requires changing how software is written. Event reconstruction is globally serial, in the sense that raw data has to be unpacked first, channels have to be clustered to produce hits before those hits are identified as belonging to a track or shower, tracks have to be found and fit before they are vertexed, etc. However, many of the individual procedures along the reconstruction chain are intrinsically independent and are perfect candidates for optimization using multi-core architectures. Threading is perhaps the simplest approach to parallelizing a program, and Java includes a powerful threading facility built into the language. We have developed a fast and flexible reconstruction package (org.lcsim) written in Java that has been used for numerous physics and detector optimization studies. In this paper we present the results of our studies on optimizing the performance of this toolkit using multiple threads on many-core architectures.

  1. A dual communicator and dual grid-resolution algorithm for petascale simulations of turbulent mixing at high Schmidt number

    NASA Astrophysics Data System (ADS)

    Clay, M. P.; Buaria, D.; Gotoh, T.; Yeung, P. K.

    2017-10-01

    A new dual-communicator algorithm with very favorable performance characteristics has been developed for direct numerical simulation (DNS) of turbulent mixing of a passive scalar governed by an advection-diffusion equation. We focus on the regime of high Schmidt number (Sc), where because of low molecular diffusivity the grid-resolution requirements for the scalar field are stricter than those for the velocity field by a factor √Sc. Computational throughput is improved by simulating the velocity field on a coarse grid of Nv³ points with a Fourier pseudo-spectral (FPS) method, while the passive scalar is simulated on a fine grid of Nθ³ points with a combined compact finite difference (CCD) scheme which computes first and second derivatives at eighth-order accuracy. A static three-dimensional domain decomposition and a parallel solution algorithm for the CCD scheme are used to avoid the heavy communication cost of memory transposes. A kernel is used to evaluate several approaches to optimize the performance of the CCD routines, which account for 60% of the overall simulation cost. On the petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign, scalability is improved substantially with a hybrid MPI-OpenMP approach in which a dedicated thread per NUMA domain overlaps communication calls with computational tasks performed by a separate team of threads spawned using OpenMP nested parallelism. At a target production problem size of 8192³ (0.5 trillion) grid points on 262,144 cores, CCD timings are reduced by 34% compared to a pure-MPI implementation. Timings for 16384³ (4 trillion) grid points on 524,288 cores encouragingly maintain scalability greater than 90%, although the wall clock time is too high for production runs at this size. Performance monitoring with CrayPat for problem sizes up to 4096³ shows that the CCD routines can achieve nearly 6% of the peak flop rate. The new DNS code is built upon two existing FPS and CCD codes. With the grid ratio Nθ/Nv = 8, the disparity in the computational requirements for the velocity and scalar problems is addressed by splitting the global communicator MPI_COMM_WORLD into disjoint communicators for the velocity and scalar fields, respectively. Inter-communicator transfer of the velocity field from the velocity communicator to the scalar communicator is handled with discrete send and non-blocking receive calls, which are overlapped with other operations on the scalar communicator. For production simulations at Nθ = 8192 and Nv = 1024 on 262,144 cores for the scalar field, the DNS code achieves 94% strong scaling relative to 65,536 cores and 92% weak scaling relative to Nθ = 1024 and Nv = 128 on 512 cores.
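
    The dual-communicator idea, splitting MPI_COMM_WORLD into disjoint groups for the velocity and scalar fields and transferring the velocity with non-blocking receives, can be sketched with mpi4py as below. The 1:3 rank split, message size and tag are illustrative assumptions, not the production layout.

```python
# Sketch of splitting MPI_COMM_WORLD into disjoint velocity/scalar communicators
# (mpi4py; run with e.g. `mpiexec -n 8 python split_demo.py`). The rank split
# and the toy send/receive are illustrative assumptions.
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

# First quarter of the ranks handle the (coarse) velocity field, the rest the scalar.
color = 0 if rank < size // 4 else 1
sub = world.Split(color=color, key=rank)           # disjoint sub-communicator

field = np.full(4, float(rank))                    # toy slab of velocity data
if color == 0:
    # Velocity group: send its slab to a partner rank in the scalar group.
    partner = size // 4 + sub.Get_rank()
    world.Send(field, dest=partner, tag=11)
else:
    if sub.Get_rank() < size // 4:
        # Scalar group: post a non-blocking receive, overlap other work, then wait.
        buf = np.empty(4)
        req = world.Irecv(buf, source=sub.Get_rank(), tag=11)
        # ... other scalar-communicator work could proceed here, overlapping the transfer ...
        req.Wait()
```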

  2. ATR LEU Fuel and Burnable Absorber Neutronics Performance Optimization by Fuel Meat Thickness Variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. S. Chang

    2007-09-01

    The Advanced Test Reactor (ATR) is a high power density and high neutron flux research reactor operating in the United States. Powered with highly enriched uranium (HEU), the ATR has a maximum thermal power rating of 250 MWth. Because of the large test volumes located in high flux areas, the ATR is an ideal candidate for assessing the feasibility of converting an HEU driven reactor to a low-enriched core. The present work investigates the necessary modifications and evaluates the subsequent operating effects of this conversion. A detailed plate-by-plate MCNP ATR 1/8th core model was developed and validated for a fuel cycle burnup comparison analysis. Using the current HEU U-235 enrichment of 93.0% as a baseline, an analysis can be performed to determine the low-enriched uranium (LEU) density and U-235 enrichment required in the fuel meat to yield an equivalent K-eff between the HEU core and the LEU core versus effective full power days (EFPD). The MCNP ATR 1/8th core model will be used to optimize the U-235 loading in the LEU core, such that the differences in K-eff and heat flux profile between the HEU and LEU core can be minimized. The depletion methodology MCWO was used to calculate K-eff versus EFPDs in this paper. The MCWO-calculated results for the LEU cases with foil (U-10Mo) types demonstrated adequate excess reactivity such that the K-eff versus EFPDs plot is similar to the reference ATR HEU case. Each HEU fuel element contains 19 fuel plates with a fuel meat thickness of 0.508 mm. In this work, the proposed LEU (U-10Mo) core conversion case with a nominal fuel meat thickness of 0.508 mm and the same U-235 enrichment (15.5 wt%) can be used to optimize the radial heat flux profile by varying the fuel plate thickness from 0.254 to 0.457 mm at the inner 4 fuel plates (1-4) and outer 4 fuel plates (16-19). In addition, 0.7 g of the burnable absorber boron-10 was added in the inner and outer plates to reduce the initial excess reactivity and the inner/outer heat flux more effectively. The optimized LEU relative radial fission heat flux profile is bounded by the reference ATR HEU case. However, to demonstrate that the LEU core fuel cycle performance can meet the Updated Final Safety Analysis Report (UFSAR) safety requirements, additional studies will be necessary to evaluate and compare safety parameters such as void reactivity and Doppler coefficients, control components worth (outer shim control cylinders, safety rods and regulating rod), and shutdown margins between the HEU and LEU cores.

  3. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed by Chambolle and Pock (CP) in 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
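
    The CP iteration itself is only a few lines. The sketch below applies it to a small least-squares problem min_x 0.5*||Ax - b||^2 written as F(Kx) + G(x) with K = A and G = 0, using the closed-form proximal operator of F*. The step sizes and toy data are illustrative and are not the CT formulations derived in the article.

```python
# Chambolle-Pock primal-dual iteration for min_x 0.5*||A x - b||^2,
# i.e. F(Kx) + G(x) with K = A, F(y) = 0.5*||y - b||^2, G = 0.
import numpy as np

def chambolle_pock(A, b, iters=2000, theta=1.0):
    L = np.linalg.norm(A, 2)                 # operator norm of K
    sigma = tau = 0.95 / L                   # needs sigma * tau * L**2 <= 1
    x = np.zeros(A.shape[1]); x_bar = x.copy()
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        # Dual step: prox of sigma*F*, with F(y) = 0.5*||y - b||^2.
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # Primal step: prox of tau*G is the identity since G = 0.
        x_new = x - tau * (A.T @ y)
        # Over-relaxation of the primal variable.
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)); b = rng.standard_normal(30)
x_cp = chambolle_pock(A, b)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_cp - x_ls))           # small gap versus the direct solve
```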

  4. Wind Farm Turbine Type and Placement Optimization

    NASA Astrophysics Data System (ADS)

    Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan

    2016-09-01

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  5. Wind farm turbine type and placement optimization

    DOE PAGES

    Graf, Peter; Dykes, Katherine; Scott, George; ...

    2016-10-03

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. Furthermore, this document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  6. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the movement behavior of bird flocks and fish schools. In this paper we introduce and use this method for the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
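
    A minimal global-best PSO loop of the kind referred to here is sketched below; the inertia and acceleration coefficients and the quadratic "misfit" surface are illustrative, not the fault-model parameterization of the paper.

```python
# Minimal global-best particle swarm optimization (illustrative coefficients).
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # positions
    v = np.zeros_like(x)                                    # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Example: recover two parameters of a simple quadratic "misfit" surface.
f = lambda p: (p[0] - 3.0)**2 + (p[1] + 1.0)**2
best, val = pso(f, (np.array([-10.0, -10.0]), np.array([10.0, 10.0])))
print(best, val)
```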

  7. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.

  8. Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization

    NASA Astrophysics Data System (ADS)

    Alekseev, G. V.

    2018-04-01

    For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.

  9. Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kania, Adhe; Sidarto, Kuntjoro Adji

    2016-02-01

    Many engineering and practical problems can be modeled as mixed integer nonlinear programming problems. This paper proposes to solve such problems with the modified spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.

  10. Rapid optimization of enzyme mixtures for deconstruction of diverse pretreatment/biomass feedstock combinations.

    PubMed

    Banerjee, Goutami; Car, Suzana; Scott-Craig, John S; Borrusch, Melissa S; Walton, Jonathan D

    2010-10-12

    Enzymes for plant cell wall deconstruction are a major cost in the production of ethanol from lignocellulosic biomass. The goal of this research was to develop optimized synthetic mixtures of enzymes for multiple pretreatment/substrate combinations using our high-throughput biomass digestion platform, GENPLAT, which combines robotic liquid handling, statistical experimental design and automated Glc and Xyl assays. Proportions of six core fungal enzymes (CBH1, CBH2, EG1, β-glucosidase, a GH10 endo-β1,4-xylanase, and β-xylosidase) were optimized at a fixed enzyme loading of 15 mg/g glucan for release of Glc and Xyl from all combinations of five biomass feedstocks (corn stover, switchgrass, Miscanthus, dried distillers' grains plus solubles [DDGS] and poplar) subjected to three alkaline pretreatments (AFEX, dilute base [0.25% NaOH] and alkaline peroxide [AP]). A 16-component mixture comprising the core set plus 10 accessory enzymes was optimized for three pretreatment/substrate combinations. Results were compared to the performance of two commercial enzymes (Accellerase 1000 and Spezyme CP) at the same protein loadings. When analyzed with GENPLAT, corn stover gave the highest yields of Glc with commercial enzymes and with the core set with all pretreatments, whereas corn stover, switchgrass and Miscanthus gave comparable Xyl yields. With commercial enzymes and with the core set, yields of Glc and Xyl were highest for grass stovers pretreated by AP compared to AFEX or dilute base. Corn stover, switchgrass and DDGS pretreated with AFEX and digested with the core set required a higher proportion of endo-β1,4-xylanase (EX3) and a lower proportion of endo-β1,4-glucanase (EG1) compared to the same materials pretreated with dilute base or AP. An optimized enzyme mixture containing 16 components (by addition of α-glucuronidase, a GH11 endoxylanase [EX2], Cel5A, Cel61A, Cip1, Cip2, β-mannanase, amyloglucosidase, α-arabinosidase, and Cel12A to the core set) was determined for AFEX-pretreated corn stover, DDGS, and AP-pretreated corn stover. The optimized mixture for AP-corn stover contained more exo-β1,4-glucanase (i.e., the sum of CBH1 + CBH2) and less endo-β1,4-glucanase (EG1 + Cel5A) than the optimal mixture for AFEX-corn stover. Amyloglucosidase and β-mannanase were the two most important enzymes for release of Glc from DDGS but were not required (i.e., 0% optimum) for corn stover subjected to AP or AFEX. As a function of enzyme loading over the range 0 to 30 mg/g glucan, Glc release from AP-corn stover reached a plateau of 60-70% Glc yield at a lower enzyme loading (5-10 mg/g glucan) than AFEX-corn stover. Accellerase 1000 was superior to Spezyme CP, the core set or the 16-component mixture for Glc yield at 12 h, but the 16-component set was as effective as the commercial enzyme mixtures at 48 h. The results in this paper demonstrate that GENPLAT can be used to rapidly produce enzyme cocktails for specific pretreatment/biomass combinations. Pretreatment conditions and feedstock source both influence the Glc and Xyl yields as well as optimal enzyme proportions. It is predicted that it will be possible to improve synthetic enzyme mixtures further by the addition of additional accessory enzymes.

  11. Biomedical imaging and therapy with physically and physiologically tailored magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Khandhar, Amit Praful

    Magnetic particle imaging (MPI) and magnetic fluid hyperthermia (MFH) are emerging imaging and therapy approaches that have the potential to improve diagnostic safety and disease management of heart disease and cancer - the number one and number two leading causes of death in the United States. MPI promises real-time, tomographic and quantitative imaging of superparamagnetic iron oxide nanoparticle (SPION) tracers distributed in vivo, and is targeted to offer a safer angiography alternative for its first clinical application. MFH uses ac-fields to dissipate heat from SPIONs that can be delivered locally to promote hyperthermia therapy (~42°C) in cancer cells. Both technologies use safe radiofrequency magnetic fields to exploit the fundamental magnetic relaxation properties of superparamagnetic iron oxide nanoparticles (SPIONs), which must be tailored for optimal imaging in the case of MPI, and maximum hyperthermia potency in the case of MFH. Furthermore, the magnetic core and shell of SPIONs are both central to the optimization process; the shell, in particular, bridges the translational gap between the optimized core and its safe and effective use in the physiological environment. Unfortunately, existing SPIONs that were originally designed as MRI contrast agents lack the basic physical properties that enable the clinical translation of MPI and MFH. In this work, the core and shell of monodisperse SPIONs were optimized in concert to accomplish two equally important objectives: (1) biocompatibility, and (2) MPI and MFH efficacy of SPIONs in physiological environments. Critically, it was found that the physical and physiological responses of SPIONs are coupled, and impacting one can have consequences on the other. It was shown that the poly(ethylene glycol) (PEG)-based shell, when properly optimized, reduced protein adsorption to the SPION surface and phagocytic uptake in macrophages - both prerequisites for designing long-circulating SPIONs. In MPI, tailoring the surface coating reduced protein adsorption and improved colloidal stability, which were critical in retaining the magnetization relaxation properties of the SPIONs. The improvements in surface coatings enabled the use of larger SPION cores (> 20 nm core diameter), which were used to demonstrate benchmark-imaging performance in some of the world's first MPI scanners at Philips Medical Imaging and the University of California, Berkeley. In MFH, it was shown for the first time that optimization of heat loss from SPIONs (W/g) is possible by tailoring the core size and size distribution for the given ac-field conditions. Biodistribution and blood circulation studies in mice showed that SPIONs accumulated primarily in the liver and spleen with minimal renal involvement, and demonstrated gradual clearance. Circulation time was evaluated using the MPI signal detected over time in blood, which offered insight into the relevant circulation time for angiography applications. In comparison with carboxy-dextran-coated Resovist SPIONs, the PEG-coated SPIONs developed in this work circulated substantially longer; furthermore, reducing the hydrodynamic diameter showed a 4.5x improvement in blood half-life. The work presented in this thesis demonstrates that the combined effort in optimizing the core and shell properties of SPIONs enhances biocompatibility and efficacy, with the in vivo studies providing critical feedback on the success (or failure) of the optimization process.
Future work will entail designing functionalized SPIONs for targeting specific disease sites, which will further enable the molecular level diagnosis and therapy of diseases.

  12. Generating unstructured nuclear reactor core meshes in parallel

    DOE PAGES

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Korean MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.

  13. Nash equilibrium and multi criterion aerodynamic optimization

    NASA Astrophysics Data System (ADS)

    Tang, Zhili; Zhang, Lianhe

    2016-06-01

    Game theory, and its particular Nash Equilibrium (NE) concept, has been gaining importance in solving Multi Criterion Optimization (MCO) engineering problems over the past decade. The solution of a MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating a NE of a MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical functions, to illustrate the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.

  14. Exact solution of large asymmetric traveling salesman problems.

    PubMed

    Miller, D L; Pekny, J F

    1991-02-15

    The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.

  15. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
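
    In its simplest form, the conjunctive-use objective mentioned above, meeting a water demand at minimum operating cost from surface water and groundwater, reduces to a small linear program; the costs, capacities and demand below are purely illustrative.

```python
# Toy conjunctive-use LP: choose surface-water (s) and groundwater (g) deliveries
# to minimize cost, meet demand, and respect source capacities (illustrative data).
from scipy.optimize import linprog

cost = [1.0, 2.5]                       # cost per unit: surface water, groundwater pumping
A_ub = [[-1.0, -1.0],                   # -(s + g) <= -demand  (i.e. s + g >= demand)
        [1.0, 0.0],                     # s <= surface-water capacity
        [0.0, 1.0]]                     # g <= pumping capacity
b_ub = [-100.0, 70.0, 60.0]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)
```

    With these illustrative numbers the solver delivers the full 70 units of cheaper surface water and makes up the remaining 30 units by pumping groundwater.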

  16. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella

    2018-03-01

    The multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from the multi-verse theory in physics. However, MVO, like most optimization algorithms, suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field; namely, the three-bar truss, speed reducer design, pressure vessel problem, spring design, welded beam, rolling element bearing and multiple disc clutch brake. In the current study, a modified feasible-based mechanism is employed to handle constraints. In this mechanism, four rules were used to handle the specific constraint problem through maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. Also, the results reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.
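
    The role of a chaotic map in such an algorithm is to replace uniform random draws with a deterministic but ergodic sequence. The sketch below generates a sine-map sequence and uses it to perturb a candidate solution; the map parameters and the perturbation rule are illustrative only and do not reproduce the CMVO operators.

```python
# Sine chaotic map generating values in (0, 1), used in place of uniform random
# numbers to drive search moves (illustrative use, not the exact CMVO operators).
import numpy as np

def sine_map(x0=0.7, a=4.0, n=1000):
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = (a / 4.0) * np.sin(np.pi * x)      # sine map iteration
        seq[i] = x
    return seq

# Example: chaotic perturbation of a candidate solution within bounds.
chaos = sine_map(n=200)
candidate = np.array([1.0, -2.0, 0.5])
lo, hi = -5.0, 5.0
for t in range(200):
    step = (chaos[t] - 0.5) * 0.1 * (hi - lo)  # chaotic step instead of a uniform draw
    trial = np.clip(candidate + step, lo, hi)
    # ... evaluate the trial fitness and accept/reject it here ...
```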

  17. Optimal control of a harmonic oscillator: Economic interpretations

    NASA Astrophysics Data System (ADS)

    Janová, Jitka; Hampel, David

    2013-10-01

    Optimal control is a popular technique for modelling and solving the dynamic decision problems in economics. A standard interpretation of the criteria function and Lagrange multipliers in the profit maximization problem is well known. On a particular example, we aim to a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for harmonic oscillator serving as a model for Phillips business cycle. We discuss the economic interpretations of arising mathematical objects with respect to well known reasoning for these in other problems.

  18. Intel Many Integrated Core (MIC) architecture optimization strategies for a memory-bound Weather Research and Forecasting (WRF) Goddard microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The WRF is a widely used weather prediction system, and its development is a collaborative effort around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as GPUs do. The MIC coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires using some novel optimization techniques. Those optimization techniques are discussed in this paper. The results show that the optimizations improved performance of the original code on a Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved performance on a dual socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.

  19. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  20. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receivers) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design where the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
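
    The abstract describes a Gauss-Newton scheme in which the normal-equation solve is carried out by conjugate-gradient iterations acting on Jacobian-vector products. The sketch below is a generic illustration of that structure, not the authors' code; the function names and the use of SciPy's CG solver are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(residual, jvp, jtvp, n_params):
    """One Gauss-Newton step solved by conjugate gradients.

    residual -- data misfit vector r = d_obs - d_pred
    jvp      -- function computing J @ v   (forward sensitivity)
    jtvp     -- function computing J.T @ w (adjoint / back-projection)
    """
    # Normal operator v -> J.T J v, never forming J explicitly
    normal_op = LinearOperator(
        (n_params, n_params),
        matvec=lambda v: jtvp(jvp(v)),
        dtype=np.float64,
    )
    rhs = jtvp(residual)               # gradient direction J.T r
    step, info = cg(normal_op, rhs, maxiter=50)
    return step
```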

  1. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

    The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. A first order state variable inequality constraint was imposed on the full order AOTV point mass equations of motion, using a simple aerodynamic heating rate model.

  2. Conceptual Comparison of Population Based Metaheuristics for Engineering Problems

    PubMed Central

    Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes. PMID:25874265
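
    The DE/rand/1/bin strategy mentioned above can be stated in a few lines: a mutant vector is built from three distinct random population members, then mixed with the target vector by binomial crossover. A minimal sketch under illustrative parameter values (these are not GDE3's settings):

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.8, CR=0.9, rng=np.random.default_rng()):
    """Create a trial vector for target index i using DE/rand/1/bin."""
    n_pop, dim = pop.shape
    # Choose three distinct members, all different from the target i
    candidates = [j for j in range(n_pop) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)

    mutant = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation

    # Binomial crossover: at least one coordinate comes from the mutant
    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True
    trial = np.where(cross, mutant, pop[i])
    return trial
```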

  3. Conceptual comparison of population based metaheuristics for engineering problems.

    PubMed

    Adekanmbi, Oluwole; Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.

  4. Efficiency of quantum vs. classical annealing in nonconvex learning problems

    PubMed Central

    Zecchina, Riccardo

    2018-01-01

    Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. PMID:29382764

  5. Direct Multiple Shooting Optimization with Variable Problem Parameters

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Ocampo, Cesar A.

    2009-01-01

    Taking advantage of a novel approach to the design of the orbital transfer optimization problem and advanced non-linear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. This method treats the fixed known gravitational constants as optimization variables in order to reduce the need for an advanced initial guess. Complex periodic orbits are targeted with very simple guesses and the ability to find optimal transfers in spite of these bad guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the 2-body frame as well as the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increasing robustness for all types of orbit transfer problems.

  6. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design. These algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of using these algorithm modifications because they reduce the search space.

  7. Tiltrotor Research Aircraft composite blade repairs - Lessons learned

    NASA Technical Reports Server (NTRS)

    Espinosa, Paul S.; Groepler, David R.

    1992-01-01

    The XV-15, N703NA Tiltrotor Research Aircraft located at the NASA Ames Research Center, Moffett Field, California, currently uses a set of composite rotor blades of complex shape known as the advanced technology blades (ATBs). The main structural element of the blades is a D-spar constructed of unidirectional, angled fiberglass/graphite, with the aft fairing portion of the blades constructed of a fiberglass cross-ply skin bonded to a Nomex honeycomb core. The blade tip is a removable laminate shell that fits over the outboard section of the spar structure, which contains a cavity to retain balance weights. Two types of tip shells are used for research. One is highly twisted (more than a conventional helicopter blade) and has a hollow core constructed of a thin Nomex-honeycomb-and-fiberglass-skin sandwich; the other is untwisted with a solid Nomex honeycomb core and a fiberglass cross-ply skin. During initial flight testing of the blades, a number of problems in the composite structure were encountered. These problems included debonding between the fiberglass skin and the honeycomb core, failure of the honeycomb core, failures in fiberglass splices, cracks in fiberglass blocks, misalignment of mated composite parts, and failures of retention of metal fasteners. Substantial time was spent in identifying and repairing these problems. Discussed here are the types of problems encountered, the inspection procedures used to identify each problem, the repairs performed on the damaged or flawed areas, the level of criticality of the problems, and the monitoring of repaired areas. It is hoped that this discussion will help designers, analysts, and experimenters in the future as the use of composites becomes more prevalent.

  8. Tiltrotor research aircraft composite blade repairs: Lessons learned

    NASA Technical Reports Server (NTRS)

    Espinosa, Paul S.; Groepler, David R.

    1991-01-01

    The XV-15, N703NA Tiltrotor Research Aircraft located at the NASA Ames Research Center, Moffett Field, California, currently uses a set of composite rotor blades of complex shape known as the advanced technology blades (ATBs). The main structural element of the blades is a D-spar constructed of unidirectional, angled fiberglass/graphite, with the aft fairing portion of the blades constructed of a fiberglass cross-ply skin bonded to a Nomex honeycomb core. The blade tip is a removable laminate shell that fits over the outboard section of the spar structure, which contains a cavity to retain balance weights. Two types of tip shells are used for research. One is highly twisted (more than a conventional helicopter blade) and has a hollow core constructed of a thin Nomex-honeycomb-and-fiberglass-skin sandwich; the other is untwisted with a solid Nomex honeycomb core and a fiberglass cross-ply skin. During initial flight testing of the blades, a number of problems in the composite structure were encountered. These problems included debonding between the fiberglass skin and the honeycomb core, failure of the honeycomb core, failures in fiberglass splices, cracks in fiberglass blocks, misalignment of mated composite parts, and failures of retention of metal fasteners. Substantial time was spent in identifying and repairing these problems. Discussed here are the types of problems encountered, the inspection procedures used to identify each problem, the repairs performed on the damaged or flawed areas, the level of criticality of the problems, and the monitoring of repaired areas. It is hoped that this discussion will help designers, analysts, and experimenters in the future as the use of composites becomes more prevalent.

  9. Structure and Activity of a New Low Molecular Weight Heparin Produced by Enzymatic Ultrafiltration

    PubMed Central

    FU, LI; ZHANG, FUMING; LI, GUOYUN; ONISHI, AKIHIRO; BHASKAR, UJJWAL; SUN, PEILONG; LINHARDT, ROBERT J.

    2014-01-01

    The standard process for preparing the low molecular weight heparin (LMWH) tinzaparin, through the partial enzymatic depolymerization of heparin, results in a reduced yield due to the formation of a high content of undesired disaccharides and tetrasaccharides. An enzymatic ultrafiltration reactor for LMWH preparation was developed to overcome this problem. The behavior, of the heparin oligosaccharides and polysaccharides using various membranes and conditions, was investigated to optimize this reactor. A novel product, LMWH-II, was produced from the controlled depolymerization of heparin using heparin lyase II in this optimized ultrafiltration reactor. Enzymatic ultrafiltration provides easy control and high yields (>80%) of LMWH-II. The molecular weight properties of LMWH-II were similar to other commercial LMWHs. The structure of LMWH-II closely matched heparin’s core structural features. Most of the common process artifacts, present in many commercial LWMHs, were eliminated as demonstrated by 1D and 2D nuclear magnetic resonance spectroscopy. The antithrombin III and platelet factor-4 binding affinity of LMWH-II were comparable to commercial LMWHs, as was its in vitro anticoagulant activity. PMID:24634007

  10. Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768

  11. Core III Materials for Metropolitan Agriculture/Horticulture Programs. Units A-I.

    ERIC Educational Resources Information Center

    Biondo, Ron; And Others

    This first volume of a two-volume curriculum guide contains 11 problem areas selected for study to be included in a core curriculum for 11th-grade or third-year students enrolled in a metropolitan agricultural program. The 11 problem areas are divided into eight units: Orientation to Agricultural Occupations (Gaining Employment), Supervised…

  12. Core III Materials for Rural Agriculture Programs. Units H-I.

    ERIC Educational Resources Information Center

    Courson, Roger L.; And Others

    This curriculum guide includes teaching packets for nine problem areas of study to be included in a core curriculum for 11th-grade or third-year students enrolled in rural agricultural programs in Illinois. Each problem area includes some or all of the following components: suggestions to the teacher, a teacher guide, a competency inventory, an…

  13. Integration of Biological Applications into the Core Undergraduate Curriculum: A Practical Strategy

    ERIC Educational Resources Information Center

    Komives, Claire; Prince, Michael; Fernandez, Erik; Balcarcel, Robert

    2011-01-01

    A web database of solved problems has been created to enable faculty to incorporate biological applications into core courses. Over 20% of US ChE departments utilized problems from the website, and 19 faculty attended a workshop to facilitate teaching the modules. Assessment of student learning showed some gains related to biological outcomes, as…

  14. Naming Problems Do Not Reflect a Second Independent Core Deficit in Dyslexia: Double Deficits Explored

    ERIC Educational Resources Information Center

    Vaessen, Anniek; Gerretsen, Patty; Blomert, Leo

    2009-01-01

    The double deficit hypothesis states that naming speed problems represent a second core deficit in dyslexia independent from a phonological deficit. The current study investigated the main assumptions of this hypothesis in a large sample of well-diagnosed dyslexics. The three main findings were that (a) naming speed was consistently related only…

  15. Development of a Stiffness-Based Chemistry Load Balancing Scheme, and Optimization of Input/Output and Communication, to Enable Massively Parallel High-Fidelity Internal Combustion Engine Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodavasal, Janardhan; Harms, Kevin; Srivastava, Priyesh

    A closed-cycle gasoline compression ignition engine simulation near top dead center (TDC) was used to profile the performance of a parallel commercial engine computational fluid dynamics code, as it was scaled on up to 4096 cores of an IBM Blue Gene/Q supercomputer. The test case has 9 million cells near TDC, with a fixed mesh size of 0.15 mm, and was run on configurations ranging from 128 to 4096 cores. Profiling was done for a small duration of 0.11 crank angle degrees near TDC during ignition. Optimization of input/output performance resulted in a significant speedup in reading restart files, and in an over 100-times speedup in writing restart files and files for post-processing. Improvements to communication resulted in a 1400-times speedup in the mesh load balancing operation during initialization, on 4096 cores. An improved, “stiffness-based” algorithm for load balancing chemical kinetics calculations was developed, which results in an over 3-times faster run-time near ignition on 4096 cores relative to the original load balancing scheme. With this improvement to load balancing, the code achieves over 78% scaling efficiency on 2048 cores, and over 65% scaling efficiency on 4096 cores, relative to 256 cores.
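
    The "stiffness-based" load balancing described above amounts to weighting each cell by an estimate of how expensive its chemistry integration will be (stiff cells cost more) and distributing cells so the weighted load per rank is even. The following is a minimal greedy sketch of that idea; the cost model and the longest-processing-time assignment are illustrative assumptions, not the actual scheme in the paper.

```python
import heapq

def balance_cells(cell_costs, n_ranks):
    """Greedy longest-processing-time assignment of cells to ranks.

    cell_costs -- estimated chemistry cost per cell (e.g. a stiffness
                  proxy such as the previous step's sub-cycle count)
    Returns a list mapping cell index -> rank.
    """
    # Min-heap of (accumulated_cost, rank)
    loads = [(0.0, r) for r in range(n_ranks)]
    heapq.heapify(loads)

    assignment = [0] * len(cell_costs)
    # Place the most expensive cells first, always on the least-loaded rank
    for cell in sorted(range(len(cell_costs)),
                       key=lambda c: cell_costs[c], reverse=True):
        load, rank = heapq.heappop(loads)
        assignment[cell] = rank
        heapq.heappush(loads, (load + cell_costs[cell], rank))
    return assignment
```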

  16. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, with the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out and its results show that the two approaches are both feasible and very effective.
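
    For reference, a discrete-time LQR problem with a single input delayed by d steps is commonly written in the generic form below. The exact weights, horizon, and delay structure used by the authors are not given in the abstract, so this is only a schematic statement of the problem class.

```latex
\min_{u}\; J = \sum_{k=0}^{N-1}\left( x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k \right) + x_N^{\top} P\, x_N,
\qquad
x_{k+1} = A_k x_k + B_k u_{k-d}, \quad Q \succeq 0,\; R \succ 0.
```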

  17. Exact solution for an optimal impermeable parachute problem

    NASA Astrophysics Data System (ADS)

    Lupu, Mircea; Scheiber, Ernest

    2002-10-01

    In this paper, direct and inverse boundary problems are solved and analytical solutions are obtained for optimization problems in the case of some nonlinear integral operators. The model is the plane potential flow of an inviscid, incompressible and unbounded fluid jet, which encounters a symmetrical, curvilinear obstacle--the deflector of maximal drag. Singular integral equations are derived for the direct and inverse problems, and the motion in the auxiliary canonical half-plane is obtained. Next, the optimization problem is solved in an analytical manner. The design of the optimal airfoil is performed and, finally, numerical computations concerning the drag coefficient and other geometrical and aerodynamical parameters are carried out. This model corresponds to the Helmholtz impermeable parachute problem.

  18. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
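
    The firefly step used here for optimizing the data parameterization follows the standard attraction rule: each firefly moves toward brighter (better) ones with an attractiveness that decays with distance, plus a small random perturbation. A generic sketch of that update is given below; the parameter values are illustrative and the coupling to the spline-fitting objective is omitted.

```python
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2,
                 rng=np.random.default_rng()):
    """Move firefly x_i toward a brighter firefly x_j."""
    r2 = np.sum((x_i - x_j) ** 2)           # squared distance
    beta = beta0 * np.exp(-gamma * r2)       # attractiveness decays with distance
    noise = alpha * (rng.random(x_i.shape) - 0.5)
    return x_i + beta * (x_j - x_i) + noise
```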

  19. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  20. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on the both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
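
    In the linear-additive setting referred to above, the forward and inverse problems have the generic structure shown below: the unknown additive cost terms must be recovered from observed solutions of the constrained minimization. This is only a schematic statement of the problem class, not the authors' exact notation.

```latex
\text{forward:}\quad \min_{x}\; \sum_{i=1}^{n} g_i(x_i)
\quad \text{s.t.}\quad C x = b,
\qquad
\text{inverse:}\quad \text{given observed minimizers } x^{*}(b),\ \text{recover } g_1,\dots,g_n.
```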

  1. Structural Analysis and Optimization of a Composite Fan Blade for Future Aircraft Engine

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.

    2012-01-01

    This report addresses the structural analysis and optimization of a composite fan blade sized for a large aircraft engine. An existing baseline solid metallic fan blade was used as a starting point to develop a hybrid honeycomb sandwich construction with a polymer matrix composite face sheet and honeycomb aluminum core replacing the original baseline solid metallic fan model made of titanium. The focus of this work is to design the sandwich composite blade with the optimum number of plies for the face sheet that will withstand the combined pressure and centrifugal loads while the constraints are satisfied and the baseline aerodynamic and geometric parameters are maintained. To satisfy the requirements, a sandwich construction for the blade is proposed with composite face sheets and a weak core made of honeycomb aluminum material. For aerodynamic considerations, the thickness of the core is optimized whereas the overall blade thickness is held fixed so as to not alter the original airfoil geometry. Weight is taken as the objective function to be minimized by varying the core thickness of the blade within specified upper and lower bounds. Constraints are imposed on radial displacement limitations and ply failure strength. From the optimum design, the minimum number of plies, which will not fail, is back-calculated. The ply lay-up of the blade is adjusted from the calculated number of plies and final structural analysis is performed. Analyses were carried out by utilizing the OpenMDAO Framework, developed at NASA Glenn Research Center combining optimization with structural assessment.

  2. Preparation of bilayer-core osmotic pump tablet by coating the indented core tablet.

    PubMed

    Liu, Longxiao; Xu, Xiangning

    2008-03-20

    In this paper, a bilayer-core osmotic pump tablet (OPT) which does not require laser drilling to form the drug delivery orifice is described. The bilayer-core consisted of two layers: (a) push layer and (b) drug layer, and was made with a modified upper tablet punch, which produced an indentation at the center of the drug layer surface. The indented tablets were coated by using a conventional pan-coating process. Although the bottom of the indentation could be coated, the side face of the indentation was scarcely sprayed by the coating solution and this part of the tablet remained at least partly uncoated leaving an aperture from which drug release could occur. Nifedipine was selected as the model drug. Sodium chloride was used as osmotic agent, polyvinylpyrrolidone as suspending agent and croscarmellose sodium as expanding agent. The indented core tablet was coated by ethyl cellulose as semipermeable membrane containing polyethylene glycol 400 for controlling the membrane permeability. The formulation of core tablet was optimized by orthogonal design and the release profiles of various formulations were evaluated by similarity factor (f(2)). It was found that the optimal OPT was able to deliver nifedipine at an approximate zero-order up to 24 h, independent on both release media and agitation rates. The preparation of bilayer-core OPT was simplified by coating the indented core tablet, by which sophisticated technology of the drug layer identification and laser drilling could be eliminated. It might be promising in the field of preparation of bilayer-core OPT.

  3. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray W

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine grain parallelism and through extensive use of vendor supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~ 3 speed-up using the Intel 2013 compiler and ~ 1.5 using the Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general purpose computational fluid dynamics codes.

  4. Core Hunter 3: flexible core subset selection.

    PubMed

    De Beukelaer, Herman; Davenport, Guy F; Fack, Veerle

    2018-05-31

    Core collections provide genebank curators and plant breeders a way to reduce the size of their collections and populations, while minimizing the impact on genetic diversity and allele frequency. Many methods have been proposed to generate core collections, often using distance metrics to quantify the similarity of two accessions, based on genetic marker data or phenotypic traits. Core Hunter is a multi-purpose core subset selection tool that uses local search algorithms to generate subsets relying on one or more metrics, including several distance metrics and allelic richness. In version 3 of Core Hunter (CH3) we have incorporated two new, improved methods for summarizing distances to quantify diversity or representativeness of the core collection. A comparison of CH3 and Core Hunter 2 (CH2) showed that these new metrics can be effectively optimized with less complex algorithms, as compared to those used in CH2. CH3 is more effective at maximizing the improved diversity metric than CH2, still ensures a high average and minimum distance, and is faster for large datasets. Using CH3, a simple stochastic hill-climber is able to find highly diverse core collections, and the more advanced parallel tempering algorithm further increases the quality of the core and further reduces variability across independent samples. We also evaluate the ability of CH3 to simultaneously maximize diversity, and either representativeness or allelic richness, and compare the results with those of the GDOpt and SimEli methods. CH3 can sample cores that are as representative as those of GDOpt, which was specifically designed for this purpose, and is able to construct cores that are simultaneously more diverse, and either are more representative or have higher allelic richness, than those obtained by SimEli. In version 3, Core Hunter has been updated to include two new core subset selection metrics that construct cores for representativeness or diversity, with improved performance. It combines the strengths of other methods and outperforms them, as it (simultaneously) optimizes a variety of metrics. In addition, CH3 is an improvement over CH2, with the option to use genetic marker data or phenotypic traits, or both, and improved speed. Core Hunter 3 is freely available at http://www.corehunter.org .
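
    The "simple stochastic hill-climber" mentioned above can be sketched compactly: repeatedly swap one accession in the core for one outside it and keep the swap if the diversity score improves. The sketch below uses mean pairwise distance as the score; the actual Core Hunter metrics and neighbourhood are richer, so treat this as an illustration only.

```python
import numpy as np

def hill_climb_core(dist, core_size, n_iter=10000,
                    rng=np.random.default_rng()):
    """Select a diverse core subset by stochastic hill climbing.

    dist -- symmetric (n x n) distance matrix between accessions
    Score: mean pairwise distance within the core (to be maximized).
    """
    n = dist.shape[0]
    core = list(rng.choice(n, size=core_size, replace=False))
    rest = [i for i in range(n) if i not in core]

    def score(sel):
        sub = dist[np.ix_(sel, sel)]
        return sub.sum() / (len(sel) * (len(sel) - 1))

    best = score(core)
    for _ in range(n_iter):
        i = rng.integers(core_size)              # position inside the core
        j = rng.integers(len(rest))              # candidate from outside
        core[i], rest[j] = rest[j], core[i]      # tentative swap
        s = score(core)
        if s > best:
            best = s                             # keep the improvement
        else:
            core[i], rest[j] = rest[j], core[i]  # undo the swap
    return core, best
```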

  5. CORAL: aligning conserved core regions across domain families.

    PubMed

    Fong, Jessica H; Marchler-Bauer, Aron

    2009-08-01

    Homologous protein families share highly conserved sequence and structure regions that are frequent targets for comparative analysis of related proteins and families. Many protein families, such as the curated domain families in the Conserved Domain Database (CDD), exhibit similar structural cores. To improve accuracy in aligning such protein families, we propose a profile-profile method CORAL that aligns individual core regions as gap-free units. CORAL computes optimal local alignment of two profiles with heuristics to preserve continuity within core regions. We benchmarked its performance on curated domains in CDD, which have pre-defined core regions, against COMPASS, HHalign and PSI-BLAST, using structure superpositions and comprehensive curator-optimized alignments as standards of truth. CORAL improves alignment accuracy on core regions over general profile methods, returning a balanced score of 0.57 for over 80% of all domain families in CDD, compared with the highest balanced score of 0.45 from other methods. Further, CORAL provides E-values to aid in detecting homologous protein families and, by respecting block boundaries, produces alignments with improved 'readability' that facilitate manual refinement. CORAL will be included in future versions of the NCBI Cn3D/CDTree software, which can be downloaded at http://www.ncbi.nlm.nih.gov/Structure/cdtree/cdtree.shtml. Supplementary data are available at Bioinformatics online.

  6. Experimental and computational studies on the femoral fracture risk for advanced core decompression.

    PubMed

    Tran, T N; Warwas, S; Haversath, M; Classen, T; Hohn, H P; Jäger, M; Kowalczyk, W; Landgraeber, S

    2014-04-01

    Two questions relating to the core decompression procedure are often addressed by orthopedists: 1) Is the core decompression procedure associated with a considerable lack of structural support of the bone? and 2) Is there an optimal region for the surgical entrance point for which the fracture risk would be lowest? As bioresorbable bone substitutes become more and more common and core decompression has been described in combination with them, the current study takes this into account. A finite element model of a femur treated by core decompression with bone substitute was simulated and analyzed. In-vitro compression testing of femora was used to confirm the finite element results. The results showed that for core decompression with standard drilling in combination with artificial bone substitute refilling, daily activities (normal walking and walking downstairs) are not risky for femoral fracture. The femoral fracture risk increased successively as the entrance point was located further distally. The critical value of the deviation of the entrance point toward a more distal location is about 20 mm. The study findings demonstrate that the optimal entrance point should be located in the proximal subtrochanteric region in order to reduce the subtrochanteric fracture risk. Furthermore, the consistent results of finite element and in-vitro testing imply that the simulations are sufficient. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool to solve optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied yet. To solve this problem, an adaptive optimal control approach is developed by using the value iteration-based Q-learning (VIQL) with the critic-only structure. Most of the existing constrained control methods require the use of a certain performance index and are only suitable for linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, the system transformation is first introduced with the general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
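
    The value-iteration-based Q-learning update underlying VIQL can be written in one line in the tabular case: the Q-function is repeatedly refreshed from sampled transitions using the greedy value of the successor state. The sketch below is a generic tabular illustration under a cost-minimization convention, not the neural-network critic described in the paper.

```python
import numpy as np

def q_value_iteration(transitions, n_states, n_actions,
                      gamma=0.95, n_sweeps=100):
    """Offline (data-based) Q-function estimation by value iteration.

    transitions -- list of (state, action, cost, next_state) tuples
    Uses a cost-minimization convention, matching optimal-control usage.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):
        Q_new = Q.copy()
        for s, a, cost, s_next in transitions:
            # Bellman backup: stage cost plus discounted best successor value
            Q_new[s, a] = cost + gamma * Q[s_next].min()
        Q = Q_new
    return Q
```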

  8. Topology optimization of unsteady flow problems using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Nørgaard, Sebastian; Sigmund, Ole; Lazarov, Boyan

    2016-02-01

    This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems. The optimization problem is solved with a gradient based method, and the design sensitivities are computed by solving the discrete adjoint problem. For moderate Reynolds number flows, it is demonstrated that topology optimization can successfully account for unsteady effects such as vortex shedding and time-varying boundary conditions. Such effects are relevant in several engineering applications, i.e. fluid pumps and control valves.

  9. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  10. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.

  11. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of the work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the meta-optimization task. Thus, the approach proposed in this work can be called “meta-metaheuristic”. A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.

  12. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm and the problem of control with minimal time under a given restriction on the control norm. The problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problem is studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the optimal control problems stated are solved analytically. Some analogies are pointed out between the results obtained and the results known for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.
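
    For readers unfamiliar with the Hadamard operators, a commonly used definition of the Hadamard fractional integral and derivative of order α (with n - 1 < α < n) is given below; it differs from the Riemann-Liouville form by the logarithmic kernel and the t d/dt operator. This is quoted from the general literature on fractional calculus, not from the paper itself.

```latex
\left(\mathcal{J}^{\alpha}_{a+} f\right)(t)
  = \frac{1}{\Gamma(\alpha)} \int_{a}^{t}
    \left(\ln\frac{t}{\tau}\right)^{\alpha-1} f(\tau)\,\frac{d\tau}{\tau},
\qquad
\left(\mathcal{D}^{\alpha}_{a+} f\right)(t)
  = \left(t\frac{d}{dt}\right)^{\!n}
    \left(\mathcal{J}^{\,n-\alpha}_{a+} f\right)(t).
```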

  13. An improved method for field extraction and laboratory analysis of large, intact soil cores

    USGS Publications Warehouse

    Tindall, J.A.; Hemmen, K.; Dowd, J.F.

    1992-01-01

    Various methods have been proposed for the extraction of large, undisturbed soil cores and for subsequent analysis of fluid movement within the cores. The major problems associated with these methods are expense, cumbersome field extraction, and inadequate simulation of unsaturated flow conditions. A field and laboratory procedure is presented that is economical, convenient, and simulates unsaturated and saturated flow without interface flow problems and can be used on a variety of soil types. In the field, a stainless steel core barrel is hydraulically pressed into the soil (30-cm diam. and 38 cm high), the barrel and core are extracted from the soil, and after the barrel is removed from the core, the core is then wrapped securely with flexible sheet metal and a stainless mesh screen is attached to the bottom of the core for support. In the laboratory the soil core is set atop a porous ceramic plate over which a soil-diatomaceous earth slurry has been poured to assure good contact between plate and core. A cardboard cylinder (mold) is fastened around the core and the empty space filled with paraffin wax. Soil cores were tested under saturated and unsaturated conditions using a hanging water column for potentials ???0. Breakthrough curves indicated that no interface flow occurred along the edge of the core. This procedure proved to be reliable for field extraction of large, intact soil cores and for laboratory analysis of solute transport.

  14. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    NASA Astrophysics Data System (ADS)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using calculus of variation and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.

  15. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
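
    Each perturbation above solves a discrete Monge-Kantorovich problem with quadratic cost, which in its basic discrete form reads as below. This is the generic optimal-transport formulation, not the graph-based pseudo-time approximation developed in the paper.

```latex
\min_{\pi \ge 0} \; \sum_{i,j} \lVert x_i - y_j \rVert^{2}\, \pi_{ij}
\quad \text{s.t.} \quad
\sum_{j} \pi_{ij} = \mu_i, \qquad \sum_{i} \pi_{ij} = \nu_j.
```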

  16. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Centre, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  17. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  18. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  19. The expanded invasive weed optimization metaheuristic for solving continuous and discrete optimization problems.

    PubMed

    Josiński, Henryk; Kostrzewa, Daniel; Michalczuk, Agnieszka; Switoński, Adam

    2014-01-01

    This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO) distinguished by the hybrid strategy of the search space exploration proposed by the authors. The algorithm is evaluated by solving three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as one of the instances of the traveling salesman problem. The achieved results are compared with analogous outcomes produced by other optimization methods reported in the literature.

  20. Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Oliker, Leonid; Vuduc, Richard

    2008-10-16

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
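
    The SpMV kernel studied here is short enough to state explicitly: in compressed sparse row (CSR) form it is a doubly nested loop over rows and their stored nonzeros. Below is a plain Python reference sketch of that kernel; the paper's optimizations (blocking, prefetching, NUMA-aware allocation, and so on) are not shown.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Reference CSR sparse matrix-vector product y = A @ x."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        acc = 0.0
        # Iterate over the nonzeros stored for row i
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Tiny example: the 2x2 matrix [[4, 0], [2, 3]]
vals, cols, rows = [4.0, 2.0, 3.0], [0, 0, 1], [0, 1, 3]
print(spmv_csr(vals, cols, rows, np.array([1.0, 2.0])))   # -> [4. 8.]
```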

  1. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem with any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution produced by PSO.
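
    The PSOGA idea combines the standard PSO velocity and position update with an occasional GA-style mutation of a particle. A minimal sketch of one particle update follows; the parameter values and the Gaussian mutation rule are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

def psoga_update(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5,
                 mutation_rate=0.05, rng=np.random.default_rng()):
    """One PSO step followed by a GA-style mutation of some coordinates."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = (w * v
             + c1 * r1 * (p_best - x)     # pull toward personal best
             + c2 * r2 * (g_best - x))    # pull toward global best
    x_new = x + v_new

    # GA mutation: occasionally perturb a coordinate to escape local optima
    mutate = rng.random(x.shape) < mutation_rate
    x_new = np.where(mutate, x_new + rng.normal(scale=0.1, size=x.shape), x_new)
    return x_new, v_new
```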

  2. Misfit stresses in a composite core-shell nanowire with an eccentric parallelepipedal core subjected to one-dimensional cross dilatation eigenstrain

    NASA Astrophysics Data System (ADS)

    Krasnitckii, S. A.; Kolomoetc, D. R.; Smirnov, A. M.; Gutkin, M. Yu

    2017-05-01

    The boundary-value problem in the classical theory of elasticity for a core-shell nanowire with an eccentric parallelepipedal core of an arbitrary rectangular cross section is solved. The core is subjected to one-dimensional cross dilatation eigenstrain. The misfit stresses are given in a closed analytical form suitable for theoretical modeling of misfit accommodation in relevant heterostructures.

  3. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches-Optimization by Linear Decomposition and Collaborative Optimization-are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  4. EVALUATION OF AN ADVANCED ENGINEERING TEST REACTOR DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McVey, M.; Bradfute, J.O.; Buck, K.E.

    1958-07-15

    The scope of the study was primarily concerned with optimization of the geometrical and core-composition variables to achieve maximum flux in the loop region per unit core power without exceeding heat transfer and other engineering limitations. Certain other design questions are to be investigated. (A.C.)

  5. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498

  6. Content-aware photo collage using circle packing.

    PubMed

    Yu, Zongqiao; Lu, Lin; Guo, Yanwen; Fan, Rongfei; Liu, Mingming; Wang, Wenping

    2014-02-01

    In this paper, we present a novel approach for automatically creating a photo collage that assembles the interest regions of a given group of images naturally. Previous methods on photo collage are generally built upon a well-defined optimization framework, which computes all the geometric parameters and layer indices for input photos on the given canvas by optimizing a unified objective function. The complex nonlinear form of the optimization function limits their scalability and efficiency. From the geometric point of view, we recast the generation of a collage as a region partition problem such that each image is displayed in its corresponding region partitioned from the canvas. The core of this is an efficient power-diagram-based circle packing algorithm that arranges a series of circles assigned to input photos compactly in the given canvas. To favor important photos, the circles are associated with image importances determined by an image ranking process. A heuristic search process is developed to ensure that salient information of each photo is displayed in the polygonal area resulting from circle packing. With our new formulation, each factor influencing the state of a photo is optimized in an independent stage, and computation of the optimal states for neighboring photos is completely decoupled. This improves the scalability of collage results and ensures their diversity. We also devise a saliency-based image fusion scheme to generate seamless compositive collage. Our approach can generate collages on nonrectangular canvases and supports interactive collage that allows the user to refine collage results according to his/her personal preferences. We conduct extensive experiments and show the superiority of our algorithm by comparing against previous methods.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Kuo -Ling; Mehrotra, Sanjay

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  8. A noisy chaotic neural network for solving combinatorial optimization problems: stochastic chaotic simulated annealing.

    PubMed

    Wang, Lipo; Li, Sa; Tian, Fuyu; Fu, Xiuju

    2004-10-01

    Recently Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.

  9. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
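    The alternating structure described above can be summarized in a short, generic Python skeleton. The actual CNDP/DNDP solvers and the Wardrop user-equilibrium assignment are not specified in the abstract, so they appear here only as caller-supplied callbacks; everything else is an assumption for illustration.

```python
from typing import Any, Callable, Tuple

def ddia(solve_cndp: Callable[[Any], Tuple[Any, float]],
         solve_dndp: Callable[[Any], Tuple[Any, float]],
         y0: Any, max_iter: int = 50, tol: float = 1e-6):
    """Dimension-down iteration: alternately fix the discrete link-addition
    decisions (y) and solve a continuous network design problem (CNDP), then
    fix the continuous capacity expansions (x) and solve a discrete one (DNDP),
    until the objective stops improving.  Subproblem solvers (each embedding a
    user-equilibrium assignment) are supplied by the caller."""
    y, best = y0, float("inf")
    x = None
    for _ in range(max_iter):
        x, obj_c = solve_cndp(y)   # continuous expansions with discrete additions fixed
        y, obj_d = solve_dndp(x)   # discrete additions with continuous expansions fixed
        if best - obj_d < tol:     # no further improvement: stop
            return x, y, obj_d
        best = obj_d
    return x, y, best
```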

  10. Optimal Control Problems with Switching Points. Ph.D. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Seywald, Hans

    1991-01-01

    The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical, also in the presence of a dynamic pressure limit. In the second problem singular control appears along arcs with active dynamic pressure limit, which in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of the trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector valued control functions.

  11. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  12. Micromechanical analysis and design of an integrated thermal protection system for future space vehicles

    NASA Astrophysics Data System (ADS)

    Martinez, Oscar

    Thermal protection systems (TPS) are the key features incorporated into a spacecraft's design to protect it from severe aerodynamic heating during high-speed travel through planetary atmospheres. The thermal protection system is the key technology that enables a spacecraft to be lightweight, fully reusable, and easily maintainable. Add-on TPS concepts have been used since the beginning of the space race. The Apollo space capsule used ablative TPS, and the Space Shuttle Orbiter TPS technology consisted of ceramic tiles and blankets. Many problems arose from the add-on concept, such as incompatibility, high maintenance costs, a non-load-bearing structure, and a lack of robustness and operability. To make the spacecraft's TPS more reliable, robust, and efficient, we investigated the Integral Thermal Protection System (ITPS) concept, in which the load-bearing structure and the TPS are combined into one single component. The design of an ITPS was a challenging task, because the requirements of a load-bearing structure and a TPS are often conflicting. Finite element (FE) analysis is often the preferred method of choice for a structural analysis problem. However, as the structure becomes complex, the computational time and effort for an FE analysis increase. New structural analytical tools were developed, or available ones were modified, to perform a full structural analysis of the ITPS. With analytical tools, the designer is capable of obtaining quick and accurate results and has a good idea of the response of the structure without having to go to an FE analysis. A MATLAB code was developed to analytically determine performance metrics of the ITPS such as stresses, buckling, deflection, and other failure modes. The analytical models provide fast and accurate results that were within 5% of the FEM results. The optimization procedure usually performs 100 function evaluations for every design variable. Using the analytical models in the optimization procedure was a time saver, because an optimum design was reached in less than an hour, whereas an FE optimization study would take hours to reach an optimum design. Corrugated-core structures were designed for ITPS applications with loads and boundary conditions similar to those of a Space Shuttle-like vehicle. Temperature, buckling, deflection, and stress constraints were considered for the design and optimization process. An optimized design was achieved with consideration of all the constraints. The ITPS design obtained from the analytical solutions was lighter (4.38 lb/ft2) than the ITPS design obtained from a finite element analysis (4.85 lb/ft2). The ITPS boundary effects added local stresses and compressive loads to the top facesheet that could not be captured by the 2D plate solutions. The inability to fully capture the boundary effects led to a lighter ITPS when compared to the FE solution. However, the ITPS can withstand substantially larger mechanical loads when compared to the previous designs. Truss-core structures were found to be unsuitable, as they could not withstand the large thermal gradients frequently encountered in ITPS applications.

  13. TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization

    DTIC Science & Technology

    2016-11-28

    Excerpts from the report (sections 4.6 and 4.7) address the Recoverable Robust Traveling Salesman Problem and a bicriteria approach to robust optimization. The traveling salesman problem (TSP) is a well-known combinatorial optimization problem, and an iterative procedure for the robust TSP is described that results in an optimal solution to the robust TSP.

  14. A Mathematical Optimization Problem in Bioinformatics

    ERIC Educational Resources Information Center

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
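    As a concrete illustration of the dynamic-programming formulation (not the article's own examples or scoring scheme), here is a minimal Python sketch of global sequence alignment with assumed match/mismatch/gap scores:

```python
def align(a: str, b: str, match=1, mismatch=-1, gap=-2):
    """Fill the DP table F where F[i][j] is the best score for a[:i] vs b[:j]."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                      # leading gaps in b
    for j in range(1, m + 1):
        F[0][j] = j * gap                      # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    return F[n][m]

print(align("GATTACA", "GCATGCU"))  # optimal global alignment score
```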

  15. A Problem on Optimal Transportation

    ERIC Educational Resources Information Center

    Cechlarova, Katarina

    2005-01-01

    Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter; Dykes, Katherine; Scott, George

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  17. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE PAGES

    Nicholson, Bethany; Siirola, John

    2017-11-11

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.
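    A minimal sketch of the kind of high-level Pyomo construct referred to above, using the pyomo.dae extension for a small ODE-constrained problem. The dynamics, objective, and solver choice are illustrative assumptions; the semibatch reactor and fluidized bed adsorber test cases are not reproduced here.

```python
import pyomo.environ as pyo
import pyomo.dae as dae

m = pyo.ConcreteModel()
m.t = dae.ContinuousSet(bounds=(0, 1))            # normalized time horizon
m.x = pyo.Var(m.t)                                # state
m.u = pyo.Var(m.t, bounds=(0, 2))                 # control input
m.dxdt = dae.DerivativeVar(m.x, wrt=m.t)

# Illustrative first-order dynamics: dx/dt = -x + u.
m.ode = pyo.Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t] + m.u[t])
m.x[0].fix(1.0)                                   # initial condition

# Discretize the continuous-time problem by orthogonal collocation ...
pyo.TransformationFactory("dae.collocation").apply_to(m, nfe=20, ncp=3)

# ... then track a setpoint of 0.5 while penalizing control effort.
m.obj = pyo.Objective(
    expr=sum((m.x[t] - 0.5) ** 2 + 0.1 * m.u[t] ** 2 for t in m.t))

# Solve with any installed NLP solver (Ipopt assumed to be available here).
pyo.SolverFactory("ipopt").solve(m)
```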

  18. Computational alternatives to obtain time optimal jet engine control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Two computational methods are described for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine modeled by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms which can solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems to be considered have fixed right ends and free time. Dynamic programming is defined on a standard problem, and it yields a successive approximation solution to the time-optimal problem of interest. A feedback control law is obtained, and it is then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state variable and control constraints.

  19. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.

  20. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviation during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques over an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems, which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates its solution to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
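    The decomposition idea can be illustrated with a toy Python sketch: arrival times for a string of aircraft are chosen to minimize individual delay costs, the pairwise separation constraints are relaxed with Lagrange multipliers, each aircraft solves its own sub-problem in closed form, and the multipliers are updated by a subgradient step. All data, cost functions, and step sizes are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# Preferred arrival times (minutes past the hour) and per-aircraft delay weights (illustrative).
pref = np.array([10.0, 11.0, 11.5, 14.0])
w = np.array([1.0, 2.0, 1.0, 1.5])
sep = 2.0                      # required separation between consecutive arrivals (minutes)
n = len(pref)

lam = np.zeros(n - 1)          # one multiplier per separation constraint
alpha = 0.05                   # subgradient step size

for _ in range(500):
    # Each "aircraft" i solves its own sub-problem:
    #   minimize  w[i]*(t - pref[i])**2 + (lam[i] - lam[i-1]) * t
    # which has the closed-form solution below (boundary multipliers are zero).
    lam_left = np.concatenate(([0.0], lam))    # lam[i-1]
    lam_right = np.concatenate((lam, [0.0]))   # lam[i]
    t = pref - (lam_right - lam_left) / (2 * w)

    # Dual (price) update broadcast back to the fleet: raise the price of any
    # violated separation constraint, lower it (toward zero) otherwise.
    lam = np.maximum(0.0, lam + alpha * (sep - np.diff(t)))

print("scheduled arrival times:", np.round(t, 2))
```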

  1. Excore Modeling with VERAShift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.

    It is important to be able to accurately predict the neutron flux outside the immediate reactor core for a variety of safety and material analyses. Monte Carlo radiation transport calculations are required to produce the high fidelity excore responses. Under this milestone VERA (specifically the VERAShift package) has been extended to perform excore calculations by running radiation transport calculations with Shift. This package couples VERA-CS with Shift to perform excore tallies for multiple state points concurrently, with each component capable of parallel execution on independent domains. Specifically, this package performs fluence calculations in the core barrel and vessel, or performs the requested tallies in any user-defined excore regions. VERAShift takes advantage of the general geometry package in Shift. This gives VERAShift the flexibility to explicitly model features outside the core barrel, including detailed vessel models, detectors, and power plant details. A very limited set of experimental and numerical benchmarks is available for excore simulation comparison. The Consortium for the Advanced Simulation of Light Water Reactors (CASL) has developed a set of excore benchmark problems to include as part of the VERA-CS verification and validation (V&V) problems. The excore capability in VERAShift has been tested on small representative assembly problems, multiassembly problems, and quarter-core problems. VERAView has also been extended to visualize these vessel fluence results from VERAShift. Preliminary vessel fluence results for quarter-core multistate calculations look very promising. Further development is needed to determine the details relevant to excore simulations. Validation of VERA for fluence and excore detectors still needs to be performed against experimental and numerical results.

  2. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  3. Geomagnetic inverse problem and data assimilation: a progress report

    NASA Astrophysics Data System (ADS)

    Aubert, Julien; Fournier, Alexandre

    2013-04-01

    In this presentation I will present two studies recently undertaken by our group in an effort to bring the benefits of data assimilation to the study of Earth's magnetic field and the dynamics of its liquid iron core, where the geodynamo operates. In a first part I will focus on the geomagnetic inverse problem, which attempts to recover the fluid flow in the core from the temporal variation of the magnetic field (known as the secular variation). Geomagnetic data can be downward continued from the surface of the Earth down to the core-mantle boundary, but not further below, since the core is an electrical conductor. Historically, solutions to the geomagnetic inverse problem in such a sparsely observed system were thus found only for flow immediately below the core mantle boundary. We have recently shown that combining a numerical model of the geodynamo together with magnetic observations, through the use of Kalman filtering, now allows to present solutions for flow throughout the core. In a second part, I will present synthetic tests of sequential geomagnetic data assimilation aiming at evaluating the range at which the future of the geodynamo can be predicted, and our corresponding prospects to refine the current geomagnetic predictions. Fournier, Aubert, Thébault: Inference on core surface flow from observations and 3-D dynamo modelling, Geophys. J. Int. 186, 118-136, 2011, doi: 10.1111/j.1365-246X.2011.05037.x Aubert, Fournier: Inferring internal properties of Earth's core dynamics and their evolution from surface observations and a numerical geodynamo model, Nonlinear Proc. Geoph. 18, 657-674, 2011, doi:10.5194/npg-18-657-2011 Aubert: Flow throughout the Earth's core inverted from geomagnetic observations and numerical dynamo models, Geophys. J. Int., 2012, doi: 10.1093/gji/ggs051

  4. An exploration of the properties of the CORE problem list subset and how it facilitates the implementation of SNOMED CT

    PubMed Central

    Xu, Julia

    2015-01-01

    Objective Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) is the emergent international health terminology standard for encoding clinical information in electronic health records. The CORE Problem List Subset was created to facilitate the terminology’s implementation. This study evaluates the CORE Subset’s coverage and examines its growth pattern as source datasets are being incorporated. Methods Coverage of frequently used terms and the corresponding usage of the covered terms were assessed by “leave-one-out” analysis of the eight datasets constituting the current CORE Subset. The growth pattern was studied using a retrospective experiment, growing the Subset one dataset at a time and examining the relationship between the size of the starting subset and the coverage of frequently used terms in the incoming dataset. Linear regression was used to model that relationship. Results On average, the CORE Subset covered 80.3% of the frequently used terms of the left-out dataset, and the covered terms accounted for 83.7% of term usage. There was a significant positive correlation between the CORE Subset’s size and the coverage of the frequently used terms in an incoming dataset. This implies that the CORE Subset will grow at a progressively slower pace as it gets bigger. Conclusion The CORE Problem List Subset is a useful resource for the implementation of Systematized Nomenclature of Medicine Clinical Terms in electronic health records. It offers good coverage of frequently used terms, which account for a high proportion of term usage. If future datasets are incorporated into the CORE Subset, it is likely that its size will remain small and manageable. PMID:25725003

  5. Dynamic programming and graph algorithms in computer vision.

    PubMed

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.

  6. Pseudo-point transport technique: a new method for solving the Boltzmann transport equation in media with highly fluctuating cross sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakhai, B.

    A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross sections sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.

  7. Electric Grid Expansion Planning with High Levels of Variable Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Stanton W.; You, Shutang; Shankar, Mallikarjun

    2016-02-01

    Renewables are taking a large proportion of generation capacity in U.S. power grids. As their randomness has increasing influence on power system operation, it is necessary to consider their impact on system expansion planning. To this end, this project studies the generation and transmission expansion co-optimization problem of the US Eastern Interconnection (EI) power grid with a high wind power penetration rate. In this project, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This study analyzed a time series creation method to capture the diversity of load and wind power across balancing regions in the EI system. The obtained time series can be easily introduced into the MIP co-optimization problem and then solved robustly through available MIP solvers. Simulation results show that the proposed time series generation method and the expansion co-optimization model can improve the expansion result significantly after considering the diversity of wind and load across EI regions. The improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare. This study shows that modelling load and wind variations and diversities across balancing regions will produce significantly different expansion results compared with former studies. For example, if wind is modeled in more detail (by increasing the number of wind output levels) so that more wind blocks are considered in expansion planning, transmission expansion will be larger and the expansion timing will be earlier. Regarding generation expansion, more wind scenarios will slightly reduce wind generation expansion in the EI system and increase the expansion of other generation such as gas. Also, adopting detailed wind scenarios will reveal that it may be uneconomic to expand transmission networks for transmitting a large amount of wind power through a long distance in the EI system. Incorporating more details of renewables in expansion planning will inevitably increase the computational burden. Therefore, high performance computing (HPC) techniques are urgently needed for power system operation and planning optimization. As a scoping study task, this project tested some preliminary parallel computation techniques such as breaking down the simulation task into several sub-tasks based on chronology splitting or sample splitting, and then assigning these sub-tasks to different cores. Testing results show significant time reduction when a simulation task is split into several sub-tasks for parallel execution.

  8. A current review of core decompression in the treatment of osteonecrosis of the femoral head.

    PubMed

    Pierce, Todd P; Jauregui, Julio J; Elmallah, Randa K; Lavernia, Carlos J; Mont, Michael A; Nace, James

    2015-09-01

    The review describes the following: (1) how traditional core decompression is performed, (2) adjunctive treatments, (3) the multiple percutaneous drilling technique, and (4) the overall outcomes of these procedures. Core decompression has optimal outcomes when used in the earliest, precollapse disease stages. More recent studies have reported excellent outcomes with percutaneous drilling. Furthermore, adjunct treatment methods combining core decompression with growth factors, bone morphogenic proteins, stem cells, and bone grafting have demonstrated positive results; however, larger randomized trials are needed to evaluate their overall efficacy.

  9. Phosphate-core silica-clad Er/Yb-doped optical fiber and cladding pumped laser.

    PubMed

    Egorova, O N; Semjonov, S L; Velmiskin, V V; Yatsenko, Yu P; Sverchkov, S E; Galagan, B I; Denker, B I; Dianov, E M

    2014-04-07

    We present a composite optical fiber with an Er/Yb co-doped phosphate-glass core in a silica glass cladding, as well as a cladding-pumped laser. The fabrication process, optical properties, and lasing parameters are described. The slope efficiency under 980 nm cladding pumping reached 39% with respect to the absorbed pump power and 28% with respect to the coupled pump power. Due to the high doping level of the phosphate core, the optimal length was several times shorter than that of silica-core fibers.

  10. An optimization method for the problems of thermal cloaking of material bodies

    NASA Astrophysics Data System (ADS)

    Alekseev, G. V.; Levin, V. A.

    2016-11-01

    Inverse heat-transfer problems related to constructing special thermal devices such as cloaking shells, thermal-illusion or thermal-camouflage devices, and heat-flux concentrators are studied. The heat-diffusion equation with a variable heat-conductivity coefficient is used as the initial heat-transfer model. An optimization method is used to reduce the above inverse problems to the respective control problem. The solvability of the above control problem is proved, an optimality system that describes necessary extremum conditions is derived, and a numerical algorithm for solving the control problem is proposed.

  11. System design optimization for a Mars-roving vehicle and perturbed-optimal solutions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Pavarini, C.

    1974-01-01

    Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.

  12. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration count, and program simplicity in finding the optimal solution.

  13. Optimal Control of Evolution Mixed Variational Inclusions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx

    2013-12-15

    Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory.

  14. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    NASA Astrophysics Data System (ADS)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.

  15. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
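    In a notation assumed here for illustration (w the portfolio weights, C the covariance matrix of the return rates, and tau and epsilon given levels of investment concentration and risk), the primal and dual problems described above can be written as:

```latex
% Primal: minimize investment risk under budget and concentration constraints
\min_{w \in \mathbb{R}^N} \; \frac{1}{2N}\, w^{\top} C\, w
\quad \text{s.t.} \quad \sum_{i=1}^{N} w_i = N, \qquad \frac{1}{N}\sum_{i=1}^{N} w_i^{2} = \tau

% Dual: maximize investment concentration under budget and risk constraints
\max_{w \in \mathbb{R}^N} \; \frac{1}{N}\sum_{i=1}^{N} w_i^{2}
\quad \text{s.t.} \quad \sum_{i=1}^{N} w_i = N, \qquad \frac{1}{2N}\, w^{\top} C\, w = \varepsilon
```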

  16. Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
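    For readers who want a concrete starting point, a minimal single-objective GA of the kind evaluated in such studies is sketched below in Python (bit-string genes, tournament selection, one-point crossover, bit-flip mutation, and a simple multi-peak "hill" landscape); none of the parameters or the test function are taken from the paper.

```python
import math
import random

random.seed(1)

GENES, POP, GENS = 16, 60, 80
P_CROSS, P_MUT = 0.9, 1.0 / GENES

def decode(bits):
    """Map a bit string to a point in [0, 1]."""
    return int("".join(map(str, bits)), 2) / (2 ** GENES - 1)

def fitness(bits):
    """Multi-modal 'hill' landscape: several peaks, global optimum near x = 0.5."""
    x = decode(bits)
    return math.sin(5 * math.pi * x) ** 2 * math.exp(-((x - 0.5) ** 2) / 0.1)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        c1, c2 = p1[:], p2[:]
        if random.random() < P_CROSS:                 # one-point crossover
            cut = random.randrange(1, GENES)
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        for c in (c1, c2):                            # bit-flip mutation
            for i in range(GENES):
                if random.random() < P_MUT:
                    c[i] ^= 1
        nxt += [c1, c2]
    pop = nxt

best = max(pop, key=fitness)
print("best x:", round(decode(best), 4), "fitness:", round(fitness(best), 4))
```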

  17. Literacy Content and Core Practices: Teacher Educator Pedagogy as the Bridge between Knowing and Doing

    ERIC Educational Resources Information Center

    Danielson, Katie A.

    2016-01-01

    Mary Kennedy (1999) introduced the problem of enactment to describe how novice teachers often struggle to put what they have learned in coursework into practice in the field. One approach to this problem is to put practice at the center of teacher education by specifying core practices of teaching around which to structure novices' learning…

  18. Agricultural Business and Management Materials for Agricultural Education Programs. Core Agricultural Education Curriculum, Central Cluster.

    ERIC Educational Resources Information Center

    Illinois Univ., Urbana. Office of Agricultural Communications and Education.

    This curriculum guide contains 5 teaching units for 44 agricultural business and management cluster problem areas. These problem areas have been selected as suggested areas of study to be included in a core curriculum for secondary students enrolled in an agricultural education program. The five units are as follows: (1) agribusiness operation and…

  19. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  20. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  1. Shape Optimization of Cylindrical Shell for Interior Noise

    NASA Technical Reports Server (NTRS)

    Robinson, Jay H.

    1999-01-01

    In this paper an analytic method is used to solve for the cross spectral density of the interior acoustic response of a cylinder with nonuniform thickness subjected to turbulent boundary layer excitation. The cylinder is of honeycomb core construction with the thickness of the core material expressed as a cosine series in the circumferential direction. The coefficients of this series are used as the design variable in the optimization study. The objective function is the space and frequency averaged acoustic response. Results confirm the presence of multiple local minima as previously reported and demonstrate the potential for modest noise reduction.

  2. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    NASA Astrophysics Data System (ADS)

    Leggett, C.; Binet, S.; Jackson, K.; Levinthal, D.; Tatarkhanov, M.; Yao, Y.

    2011-12-01

    Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with a zero-overhead context switch, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the Atlas event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores, by means of event-based parallelism, and final-stage I/O synchronization. However, initial studies on 8- and 16-core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware-based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level, and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.

  3. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face-recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.

  4. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
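    A small NumPy sketch of the forward block Gauss-Seidel sweep on a block-tridiagonal system of the kind described above, where each block solve involves only an invertible diagonal block. The blocks below are random, diagonally dominant matrices chosen purely for illustration, not a discretized optimality system.

```python
import numpy as np

rng = np.random.default_rng(3)
nb, m = 6, 4                                  # number of time sub-intervals, block size

# Build an illustrative block-tridiagonal system  A x = b.
D = [np.eye(m) * 4 + rng.standard_normal((m, m)) * 0.1 for _ in range(nb)]  # diagonal blocks
L = [rng.standard_normal((m, m)) * 0.3 for _ in range(nb - 1)]              # sub-diagonal blocks
U = [rng.standard_normal((m, m)) * 0.3 for _ in range(nb - 1)]              # super-diagonal blocks
b = [rng.standard_normal(m) for _ in range(nb)]

x = [np.zeros(m) for _ in range(nb)]
for sweep in range(100):
    # Forward block Gauss-Seidel: each block solve only involves the (invertible)
    # diagonal block, i.e. a smaller problem on one time sub-interval.
    for i in range(nb):
        r = b[i].copy()
        if i > 0:
            r -= L[i - 1] @ x[i - 1]          # uses the freshly updated left neighbor
        if i < nb - 1:
            r -= U[i] @ x[i + 1]              # uses the previous iterate of the right neighbor
        x[i] = np.linalg.solve(D[i], r)

# Check the residual of the full system.
res = 0.0
for i in range(nb):
    ri = D[i] @ x[i] - b[i]
    if i > 0:
        ri += L[i - 1] @ x[i - 1]
    if i < nb - 1:
        ri += U[i] @ x[i + 1]
    res += np.linalg.norm(ri)
print("residual after 100 sweeps:", res)
```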

  5. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We present experimental results on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.

  6. Decomposition method for zonal resource allocation problems in telecommunication networks

    NASA Astrophysics Data System (ADS)

    Konnov, I. V.; Kashuba, A. Yu

    2016-11-01

    We consider problems of optimal resource allocation in telecommunication networks. We first give an optimization formulation for the case where the network manager aims to distribute some homogeneous resource (bandwidth) among users of one region with quadratic charge and fee functions, and we present simple and efficient solution methods. Next, we consider a more general problem for a provider of a wireless communication network divided into zones (clusters) with common capacity constraints. We obtain a convex quadratic optimization problem involving capacity and balance constraints. By using the dual Lagrangian method with respect to the capacity constraint, we propose reducing the initial problem to a single-dimensional optimization problem, where calculation of the cost function value leads to independent solution of zonal problems, which coincide with the above single-region problem. Some results of computational experiments confirm the applicability of the new methods.
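    A toy Python sketch of the reduction described above: the shared capacity constraint is relaxed with a single Lagrange multiplier (a price), each user's quadratic sub-problem is solved in closed form, and the resulting one-dimensional dual problem is handled by bisection. The utility/fee functions and data are illustrative assumptions, not the paper's model.

```python
import numpy as np

a = np.array([8.0, 6.0, 9.0, 5.0])    # users' marginal utility at zero allocation (illustrative)
b = np.array([1.0, 0.5, 2.0, 1.0])    # quadratic "fee" curvature per user
C = 10.0                               # shared zone capacity

def alloc(lam):
    """Each user's optimal bandwidth for a given price lam (independent closed-form solves)."""
    return np.maximum(0.0, (a - lam) / b)

# If the unconstrained allocations already fit, the capacity price is zero.
if alloc(0.0).sum() <= C:
    lam = 0.0
else:
    lo, hi = 0.0, a.max()              # total demand is decreasing in lam on this bracket
    for _ in range(60):                # bisection on the single dual variable
        lam = 0.5 * (lo + hi)
        if alloc(lam).sum() > C:
            lo = lam
        else:
            hi = lam

x = alloc(lam)
print("price:", round(lam, 4), "allocations:", np.round(x, 3), "total:", round(x.sum(), 3))
```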

  7. Novel photonics polymer and its application in IT

    NASA Astrophysics Data System (ADS)

    Koike, Yasuhiro

    2003-07-01

    In the field of LANs, transmission systems based on multimode silica fiber networks are heading towards capacities of Gb/s. We have proposed a low-loss, high-bandwidth and large-core graded-index plastic optical fiber (GI POF) in the data-com area. We will show that GI POF virtually eliminates the "modal noise" problem caused by medium-core silica fibers. Therefore, stable high-speed data transmission is realized by GI POF rather than silica fibers. Furthermore, a perfluorinated (PF) polymer-based GI POF network can support higher transmission rates than a silica fiber network because of the smaller material dispersion of PF polymer compared with silica. In addition, we proposed a "highly scattering optical transmission (HSOT) polymer" and applied it to a light guide plate of a liquid crystal display (LCD) backlight. The advanced HSOT polymer backlight that was proposed using the HSOT designing simulation program demonstrated approximately three times higher luminance than the conventional flat-type HSOT backlight of 14.1-inch diagonal because of the microscopic prism structures at the bottom of the advanced HSOT light guide plate. The HSOT polymer containing the optimized heterogeneous structures produced homogeneous scattered light with forward directivity and sufficient color uniformity.

  8. Magneto-optical nanoparticles for cyclic magnetomotive photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Arnal, Bastien; Yoon, Soon Joon; Li, Junwei; Gao, Xiaohu; O'Donnell, Matthew

    2018-05-01

    Photoacoustic imaging is a highly promising tool to visualize molecular events with deep tissue penetration. Like most other modalities, however, image contrast under in vivo conditions is far from optimal due to background signals from tissue. Using iron oxide-gold core-shell nanoparticles, we previously demonstrated that magnetomotive photoacoustic (mmPA) imaging can dramatically reduce the influence of background signals and produce high-contrast molecular images. Here we report two significant advances toward clinical translation of this technology. First, we introduce a new class of compact, uniform, magneto-optically coupled core-shell nanoparticle, prepared through localized copolymerization of polypyrrole (PPy) on an iron oxide nanoparticle surface. The resulting iron oxide-PPy nanoparticles solve the photo-instability and small-scale synthesis problems previously encountered by the gold coating approach, and extend the large optical absorption coefficient of the particles beyond 1000 nm in wavelength. In parallel, we have developed a new generation of mmPA imaging featuring cyclic magnetic motion and ultrasound speckle tracking, with an image capture frame rate several hundred times faster than the photoacoustic speckle tracking method demonstrated previously. These advances enable robust artifact elimination caused by physiologic motion and first application of the mmPA technology in vivo for sensitive tumor imaging.

  9. Structural test of the parameterized-backbone method for protein design.

    PubMed

    Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom

    2004-09-03

    Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.

  10. Multilevel Summation of Electrostatic Potentials Using Graphics Processing Units*

    PubMed Central

    Hardy, David J.; Stone, John E.; Schulten, Klaus

    2009-01-01

    Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3-D lattice of “weights” over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12 seconds. PMID:20161132
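
    As a rough illustration of the lattice operation described above (not the authors' GPU kernel), the following numpy/scipy sketch convolves a fixed 3-D lattice of weights, here a softened cutoff pair potential, over a larger lattice of charges; the kernel shape, cutoff and lattice sizes are made-up values.

```python
# Hedged CPU stand-in for the long-range lattice step: convolve a fixed 3-D
# lattice of "weights" (a softened, cutoff pair potential sampled on the
# lattice) over a larger lattice of charges.  Sizes and kernel are illustrative.
import numpy as np
from scipy.signal import fftconvolve

def cutoff_weights(radius, spacing):
    """Sampled pair potential that smoothly goes to zero at the cutoff."""
    n = int(radius / spacing)
    ax = np.arange(-n, n + 1) * spacing
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    s = np.clip(r / radius, 0.0, 1.0)
    return (1.0 - s) ** 2 / (r + spacing)      # softened 1/r, zero beyond cutoff

rng = np.random.default_rng(0)
charges = rng.standard_normal((48, 48, 48))    # charge lattice (toy size)
weights = cutoff_weights(radius=8.0, spacing=1.0)
potential = fftconvolve(charges, weights, mode="same")
print(potential.shape, potential[24, 24, 24])
```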

  11. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as a multi-objective fuzzy linear programming problem. In this paper we give the solution to such problems with an illustrative example.
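
    For context, the crisp (non-fuzzy) baseline mentioned above can be written as a small quadratic program. The sketch below solves a mean-variance portfolio problem with scipy; the expected returns, covariance matrix and return target are illustrative numbers, and the fuzzy-parameter treatment of the paper is not reproduced here.

```python
# Hedged sketch of the crisp baseline: minimize portfolio variance x'Qx subject
# to a target expected return and a budget constraint.  All data are illustrative.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])                  # expected returns
Q = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.12, 0.03],
              [0.01, 0.03, 0.09]])                 # return covariance
target = 0.10

res = minimize(
    fun=lambda x: x @ Q @ x,                       # portfolio variance
    x0=np.full(3, 1.0 / 3.0),
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0},
                 {"type": "ineq", "fun": lambda x: mu @ x - target}],
    bounds=[(0.0, 1.0)] * 3,
    method="SLSQP",
)
print("weights:", res.x.round(3), "risk:", res.fun)
```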

  12. Stack-and-Draw Manufacture Process of a Seven-Core Optical Fiber for Fluorescence Measurements

    NASA Astrophysics Data System (ADS)

    Samir, Ahmed; Batagelj, Bostjan

    2018-01-01

    Multi-core, optical-fiber technology is expected to be used in telecommunications and sensory systems in a relatively short amount of time. However, a successful transition from research laboratories to industry applications will only be possible with an optimized design and manufacturing process. The fabrication process is an important aspect in designing and developing new multi-applicable, multi-core fibers, where the best candidate is a seven-core fiber. Here, the basics for designing and manufacturing a single-mode, seven-core fiber using the stack-and-draw process are described for the example of a fluorescence sensory system.

  13. Topology-changing shape optimization with the genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lamberson, Steven E., Jr.

    The goal is to take a traditional shape optimization problem statement and modify it slightly to allow for prescribed changes in topology. This modification enables greater flexibility in the choice of parameters for the topology optimization problem, while improving the direct physical relevance of the results. This modification involves changing the optimization problem statement from a nonlinear programming problem into a form of mixed-discrete nonlinear programming problem. The present work demonstrates one possible way of using the Genetic Algorithm (GA) to solve such a problem, including the use of "masking bits" and a new modification to the bit-string affinity (BSA) termination criterion specifically designed for problems with "masking bits." A simple ten-bar truss problem proves the utility of the modified BSA for this type of problem. A more complicated two-dimensional bracket problem is solved using both the proposed approach and a more traditional topology optimization approach (Solid Isotropic Microstructure with Penalization, or SIMP) to enable comparison. The proposed approach is able to solve problems with both local and global constraints, which is something traditional methods cannot do. The proposed approach has a significantly higher computational burden, on the order of 100 times larger than SIMP, although it is able to offset this with parallel computing.

  14. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a certain optimization goal such as the shortest duration, the smallest cost, or resource balance, it is required to arrange the start and finish of all tasks while satisfying the project's timing constraints and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling and flow shop scheduling. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, which solve it more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm; generally, the parameters are chosen empirically, which cannot ensure that they are optimal. In this paper, we address this blind selection of parameters in the process of solving the RCPSP: we carried out a sampling analysis, established a proxy model, and ultimately solved for the optimal parameters.
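
    The tuning loop described above (sample parameter settings, fit a proxy model, optimize the proxy) can be sketched as follows. The run_ga function is a hypothetical stand-in for running a GA on an RCPSP instance; the parameter ranges and the quadratic surrogate are illustrative choices, not the authors' setup.

```python
# Hedged sketch: sample GA parameter settings, evaluate them, fit a simple
# quadratic surrogate ("proxy model"), and take its minimizer as the tuned
# setting.  run_ga() is a hypothetical stand-in faked with a smooth function.
import numpy as np

rng = np.random.default_rng(1)

def run_ga(crossover_rate, mutation_rate):
    # Placeholder for an actual GA run on an RCPSP instance (assumption).
    return (crossover_rate - 0.8) ** 2 + 4 * (mutation_rate - 0.05) ** 2 \
        + 0.01 * rng.standard_normal()

# 1) sampling analysis over the parameter box
samples = np.column_stack([rng.uniform(0.5, 1.0, 40),     # crossover rate
                           rng.uniform(0.01, 0.2, 40)])   # mutation rate
durations = np.array([run_ga(c, m) for c, m in samples])

# 2) proxy model: least-squares fit of a full quadratic in the two parameters
c, m = samples.T
features = np.column_stack([np.ones_like(c), c, m, c * m, c**2, m**2])
coeffs, *_ = np.linalg.lstsq(features, durations, rcond=None)

# 3) optimal parameters: evaluate the surrogate on a fine grid and take argmin
cg, mg = np.meshgrid(np.linspace(0.5, 1.0, 101), np.linspace(0.01, 0.2, 101))
grid = np.column_stack([np.ones(cg.size), cg.ravel(), mg.ravel(),
                        (cg * mg).ravel(), (cg**2).ravel(), (mg**2).ravel()])
best = np.argmin(grid @ coeffs)
print("tuned crossover/mutation:", cg.ravel()[best], mg.ravel()[best])
```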

  15. A new approach to impulsive rendezvous near circular orbit

    NASA Astrophysics Data System (ADS)

    Carter, Thomas; Humi, Mayer

    2012-04-01

    A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function, and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory, the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and is also useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments, are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The regions of boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution for the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that require only one impulse. For brevity, degenerate and singular solutions are not discussed in detail but should be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.

  16. Delivering Core Engineering Concepts to Secondary Level Students

    ERIC Educational Resources Information Center

    Merrill, Chris; Custer, Rodney L.; Daugherty, Jenny; Westrick, Martin; Zeng, Yong

    2008-01-01

    Through the efforts of National Center for Engineering and Technology Education (NCETE), three core engineering concepts within the realm of engineering design have emerged as crucial areas of need within secondary level technology education. These concepts are constraints, optimization, and predictive analysis (COPA). COPA appears to be at the…

  17. Ambient temperature response establishes ELF3 as a required component of the Arabidopsis core circadian clock

    USDA-ARS?s Scientific Manuscript database

    Circadian clocks synchronize internal processes with environmental cycles to ensure optimal timing of biological events on daily and seasonal timescales. External light and temperature cues set the core molecular oscillator to local conditions. In Arabidopsis, EARLY FLOWERING 3 (ELF3) is thought to ...

  18. Sudden emergence of q-regular subgraphs in random graphs

    NASA Astrophysics Data System (ADS)

    Pretti, M.; Weigt, M.

    2006-07-01

    We investigate the computationally hard problem of whether a random graph of finite average vertex degree has an extensively large q-regular subgraph, i.e., a subgraph with all vertices having degree equal to q. We reformulate this problem as a constraint-satisfaction problem, and solve it using the cavity method of statistical physics at zero temperature. For q = 3, we find that the first large q-regular subgraphs appear discontinuously at an average vertex degree c_{3-reg} ≈ 3.3546 and immediately contain about 24% of all vertices in the graph. This transition is extremely close to (but different from) the well-known 3-core percolation point c_{3-core} ≈ 3.3509. For q > 3, the q-regular subgraph percolation threshold is found to coincide with that of the q-core.

  19. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.

  20. Primer-optimized results and trends for circular phasing and other circle-to-circle impulsive coplanar rendezvous

    NASA Astrophysics Data System (ADS)

    Sandrik, Suzannah

    Optimal solutions to the impulsive circular phasing problem, a special class of orbital maneuver in which impulsive thrusts shift a vehicle's orbital position by a specified angle, are found using primer vector theory. The complexities of optimal circular phasing are identified and illustrated using specifically designed Matlab software tools. Information from these new visualizations is applied to explain discrepancies in locally optimal solutions found by previous researchers. Two non-phasing circle-to-circle impulsive rendezvous problems are also examined to show the applicability of the tools developed here to a broader class of problems and to show how optimizing these rendezvous problems differs from the circular phasing case.

  1. On Born's Conjecture about Optimal Distribution of Charges for an Infinite Ionic Crystal

    NASA Astrophysics Data System (ADS)

    Bétermin, Laurent; Knüpfer, Hans

    2018-04-01

    We study the problem of the optimal charge distribution on the sites of a fixed Bravais lattice. In particular, we prove Born's conjecture about the optimality of the rock salt alternate distribution of charges on a cubic lattice (and more generally on a d-dimensional orthorhombic lattice). Furthermore, we study this problem on the two-dimensional triangular lattice and we prove the optimality of a two-component honeycomb distribution of charges. The results hold for a class of completely monotone interaction potentials which includes Coulomb-type interactions for d ≥ 3. In a more general setting, we derive a connection between the optimal charge problem and a minimization problem for the translated lattice theta function.

  2. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    NASA Astrophysics Data System (ADS)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The optimization problem with many local optima, known as the multimodal optimization problem, is to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to those of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it suffers from premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the artificial bee colony algorithm and the BFGS algorithm to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
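
    A compact sketch of the two-step hybrid described above is given below: a minimal textbook-style Artificial Bee Colony search provides the starting point, which is then refined with BFGS via scipy.optimize.minimize. The test function, colony size and limit are illustrative, and the ABC variant is simplified relative to a full implementation.

```python
# Hedged sketch of the hybrid: (1) simplified ABC global search, (2) BFGS
# local refinement of the best point found.  All settings are illustrative.
import numpy as np
from scipy.optimize import minimize

def abc_search(f, bounds, n_food=20, limit=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    foods = rng.uniform(lo, hi, size=(n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        # employed + onlooker phases collapsed: perturb each food source
        for i in range(n_food):
            k = rng.integers(n_food - 1)
            k += k >= i                                   # partner != i
            j = rng.integers(dim)
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                foods[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # scout phase: abandon exhausted food sources
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(lo, hi)
            fit[i] = f(foods[i])
            trials[i] = 0
    return foods[np.argmin(fit)]

rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
bounds = (np.full(2, -5.12), np.full(2, 5.12))
x0 = abc_search(rastrigin, bounds)                 # step 1: ABC global search
res = minimize(rastrigin, x0, method="BFGS")       # step 2: BFGS refinement
print("ABC point:", x0.round(4), "refined:", res.x.round(6), res.fun)
```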

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William E.; Siirola, John Daniel

    We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.
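
    A small model using Pyomo's MPEC components (pyomo.mpec) might look like the sketch below. The toy objective and complementarity condition are illustrative; the transformation and solver names reflect the documented pyomo.mpec interface rather than anything specific to this report, and solving the transformed model assumes an NLP solver such as Ipopt is installed.

```python
# Hedged sketch of an MPEC model in Pyomo (assumed pyomo.mpec API; toy problem).
import pyomo.environ as pyo
from pyomo.mpec import Complementarity, complements

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0, 10))
m.y = pyo.Var(bounds=(0, 10))
m.obj = pyo.Objective(expr=(m.x - 2) ** 2 + (m.y - 2) ** 2)

# Complementarity condition: 0 <= y  is complementary to  y - x + 1 >= 0
m.comp = Complementarity(expr=complements(m.y >= 0, m.y - m.x + 1 >= 0))

# Re-express the complementarity condition so a standard NLP solver can handle
# it, then solve with Ipopt (alternatively, a meta-solver such as 'mpec_nlp'
# is described as applying a transformation and an NLP solver automatically).
pyo.TransformationFactory("mpec.simple_nonlinear").apply_to(m)
results = pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.x), pyo.value(m.y))
```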

  4. Genetic algorithms for multicriteria shape optimization of induction furnace

    NASA Astrophysics Data System (ADS)

    Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo

    2012-09-01

    In this contribution we deal with a multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using the nonlinear conjugate gradient method and the second using a variation of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The direct problem (a coupled problem consisting of magnetic and heat fields) is solved using our own code Agros2D. It uses finite elements of higher order, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and simply incorporate various types of optimization algorithms.

  5. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. This leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  6. Teaching to the Core: Integrating Implementation of Common Core and Teacher Effectiveness Policies

    ERIC Educational Resources Information Center

    Wiener, Ross

    2013-01-01

    The purpose of the Common Core State Standards is to prepare students to succeed in college and career pursuits. To that end, the Common Core calls on teachers to focus on deepening students' understanding of what they're learning, enhancing their problem-solving skills, and improving their ability to communicate ideas. At the same time, states…

  7. Exploring the effect of nested capillaries on core-cladding mode resonances in hollow-core antiresonant fibers

    NASA Astrophysics Data System (ADS)

    Provino, Laurent; Taunay, Thierry

    2018-02-01

    Optimal suppression of higher-order modes (HOMs) in hollow-core antiresonant fibers comprising a single ring of thin-walled capillaries was previously studied, and can be achieved when a condition on the capillary-to-core diameter ratio is satisfied (d/D ≈ 0.68). Here we report on the conditions for maximizing the leakage losses of HOMs in hollow-core nested antiresonant node-less fibers, while preserving low confinement loss for the fundamental mode. Using an analytical model based on coupled capillary waveguides, as well as full-vector finite element modeling, we show that the optimal d/D value leading to high leakage losses of HOMs is strongly correlated to the size of the nested capillaries. We also show that the extremely high degree of HOM suppression (~1200) at the resonant coupling is almost unchanged over a wide range of nested-capillary diameters d_nested. These results thus suggest the possibility of designing antiresonant fibers with nested elements that show optimal guiding performance in terms of HOM loss relative to that of the fundamental mode, for clearly defined paired values of the ratios d_nested/d and d/D. These fibers can also tend towards single-mode behavior only when the dimensionless parameter d_nested/d is less than 0.30, with identical wall thicknesses for all of the capillaries.

  8. Optimization in First Semester Calculus: A Look at a Classic Problem

    ERIC Educational Resources Information Center

    LaRue, Renee; Infante, Nicole Engelke

    2015-01-01

    Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…

  9. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formed as a constrained optimization problem, where the objective function is the ratio of network throughput and the network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper Transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
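
    The parametric iteration mentioned in the abstract can be illustrated with a Dinkelbach-type loop: for a fixed efficiency parameter q, the inner problem has a water-filling solution, and q is updated until an ε-optimal point is reached. The channel gains, circuit power and tolerance below are illustrative, and the interference and peak-power constraints of the paper are omitted.

```python
# Hedged sketch of a Dinkelbach-type parametric iteration for maximizing
# energy efficiency f(p)/g(p) = throughput / total power.  Toy values only.
import numpy as np

h = np.array([2.0, 1.0, 0.5, 0.25])     # channel gains (illustrative)
P_c = 1.0                               # circuit power

def throughput(p):
    return np.sum(np.log2(1.0 + h * p))

def power(p):
    return P_c + np.sum(p)

q = 0.1                                  # initial efficiency guess
for _ in range(50):
    # water-filling solution of  max_p  throughput(p) - q * power(p),  p >= 0
    p = np.maximum(0.0, 1.0 / (q * np.log(2.0)) - 1.0 / h)
    gap = throughput(p) - q * power(p)
    q = throughput(p) / power(p)
    if gap < 1e-9:                       # epsilon-optimal stopping rule
        break

print("power allocation:", p.round(4), "energy efficiency:", q)
```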

  10. Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.

    PubMed

    Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee

    2014-10-01

    Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near-optimal solution for problems in many applied disciplines. The algorithm makes no assumptions about the function to be optimized, and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature.
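
    For readers unfamiliar with the method, a minimal global-best PSO of the kind applied in such design problems is sketched below; the swarm size, inertia and acceleration constants are common textbook values rather than the authors' settings, and the sphere objective stands in for a design criterion.

```python
# Hedged sketch of a basic global-best particle swarm optimizer (illustrative
# settings, stand-in objective).
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])  # personal bests
    g = pbest[np.argmin(pval)]                           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

sphere = lambda p: float(np.sum((p - 1.0) ** 2))
best, best_val = pso(sphere, lo=np.full(5, -5.0), hi=np.full(5, 5.0))
print(best.round(4), best_val)
```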

  11. Stochastic Local Search for Core Membership Checking in Hedonic Games

    NASA Astrophysics Data System (ADS)

    Keinänen, Helena

    Hedonic games have emerged as an important tool in economics and show promise as a useful formalism to model multi-agent coalition formation in AI as well as group formation in social networks. We consider a coNP-complete problem of core membership checking in hedonic coalition formation games. No previous algorithms to tackle the problem have been presented. In this work, we overcome this by developing two stochastic local search algorithms for core membership checking in hedonic games. We demonstrate the usefulness of the algorithms by showing experimentally that they find solutions efficiently, particularly for large agent societies.

  12. Multi-core and GPU accelerated simulation of a radial star target imaged with equivalent t-number circular and Gaussian pupils

    NASA Astrophysics Data System (ADS)

    Greynolds, Alan W.

    2013-09-01

    Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.

  13. ATR LEU fuel and burnable absorber neutronics performance optimization by fuel meat thickness variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, G.S.

    2008-07-15

    The Advanced Test Reactor (ATR) is a high power density and high neutron flux research reactor operating in the United States. Powered with highly enriched uranium (HEU), the ATR has a maximum thermal power rating of 250 MWth. Because of the large test volumes located in high flux areas, the ATR is an ideal candidate for assessing the feasibility of converting an HEU driven reactor to a low-enriched core. The present work investigates the necessary modifications and evaluates the subsequent operating effects of this conversion. A detailed plate-by-plate MCNP ATR 1/8th core model was developed and validated for a fuel cycle burnup comparison analysis. Using the current HEU U-235 enrichment of 93.0% as a baseline, an analysis can be performed to determine the low-enriched uranium (LEU) density and U-235 enrichment required in the fuel meat to yield an equivalent K-eff between the HEU core and the LEU core versus effective full power days (EFPD). The MCNP ATR 1/8th core model will be used to optimize the U-235 loading in the LEU core, such that the differences in K-eff and heat flux profile between the HEU and LEU core can be minimized. The depletion methodology MCWO was used to calculate K-eff versus EFPDs in this paper. The MCWO-calculated results for the LEU cases with foil (U-10Mo) types demonstrated adequate excess reactivity such that the K-eff versus EFPDs plot is similar to the reference ATR HEU case. Each HEU fuel element contains 19 fuel plates with a fuel meat thickness of 0.508 mm. In this work, the proposed LEU (U-10Mo) core conversion case with a nominal fuel meat thickness of 0.381 mm and the same U-235 enrichment (19.7 wt%) can be used to optimize the radial heat flux profile by varying the fuel meat thickness from 0.191 mm (7.5 mil) to 0.343 mm (13.5 mil) at the inner 4 fuel plates (1-4) and outer 4 fuel plates (16-19). In addition, 0.8 g of a burnable absorber, Boron-10, was added in the inner and outer plates to reduce the initial excess reactivity, and the inner/outer heat flux, more effectively. The optimized LEU relative radial fission heat flux profile is bounded by the reference ATR HEU case. However, to demonstrate that the LEU core fuel cycle performance can meet the Updated Final Safety Analysis Report (UFSAR) safety requirements, additional studies will be necessary to evaluate and compare safety parameters such as void reactivity and Doppler coefficients, control components worth (outer shim control cylinders, safety rods and regulating rod), and shutdown margins between the HEU and LEU cores. (author)

  14. Multi-GPU configuration of 4D intensity modulated radiation therapy inverse planning using global optimization

    NASA Astrophysics Data System (ADS)

    Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo

    2018-01-01

    We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.

  15. Multi-GPU configuration of 4D intensity modulated radiation therapy inverse planning using global optimization.

    PubMed

    Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo

    2018-01-16

    We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.

  16. On the role of the optimization algorithm of RapidArc(®) volumetric modulated arc therapy on plan quality and efficiency.

    PubMed

    Vanetti, Eugenio; Nicolini, Giorgia; Nord, Janne; Peltola, Jarkko; Clivio, Alessandro; Fogliata, Antonella; Cozzi, Luca

    2011-11-01

    The RapidArc volumetric modulated arc therapy (VMAT) planning process is based on a core engine, the so-called progressive resolution optimizer (PRO). This is the optimization algorithm used to determine the combination of field shapes, segment weights (with dose rate and gantry speed variations), which best approximate the desired dose distribution in the inverse planning problem. A study was performed to assess the behavior of two versions of PRO. These two versions mostly differ in the way continuous variables describing the modulated arc are sampled into discrete control points, in the planning efficiency and in the presence of some new features. The analysis aimed to assess (i) plan quality, (ii) technical delivery aspects, (iii) agreement between delivery and calculations, and (iv) planning efficiency of the two versions. RapidArc plans were generated for four groups of patients (five patients each): anal canal, advanced lung, head and neck, and multiple brain metastases and were designed to test different levels of planning complexity and anatomical features. Plans from optimization with PRO2 (first generation of RapidArc optimizer) were compared against PRO3 (second generation of the algorithm). Additional plans were optimized with PRO3 using new features: the jaw tracking, the intermediate dose and the air cavity correction options. Results showed that (i) plan quality was generally improved with PRO3 and, although not for all parameters, some of the scored indices showed a macroscopic improvement with PRO3. (ii) PRO3 optimization leads to simpler patterns of the dynamic parameters particularly for dose rate. (iii) No differences were observed between the two algorithms in terms of pretreatment quality assurance measurements and (iv) PRO3 optimization was generally faster, with a time reduction of a factor approximately 3.5 with respect to PRO2. These results indicate that PRO3 is either clinically beneficial or neutral in terms of dosimetric quality while it showed significant advantages in speed and technical aspects.

  17. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project of firm's activities. The solution of a particular problem of this type is presented.

  18. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  19. Optimization of composite sandwich cover panels subjected to compressive loadings

    NASA Technical Reports Server (NTRS)

    Cruz, Juan R.

    1991-01-01

    An analysis and design method is presented for the design of composite sandwich cover panels that includes transverse shear effects and damage tolerance considerations. This method is incorporated into a sandwich optimization computer program entitled SANDOP. As a demonstration of its capabilities, SANDOP is used in the present study to design optimized composite sandwich cover panels for transport aircraft wing applications. The results of this design study indicate that optimized composite sandwich cover panels have approximately the same structural efficiency as stiffened composite cover panels designed to satisfy individual constraints. The results also indicate that inplane stiffness requirements have a large effect on the weight of these composite sandwich cover panels at higher load levels. Increasing the maximum allowable strain and the upper percentage limit of the 0 degree and +/- 45 degree plies can yield significant weight savings. The results show that the structural efficiency of these optimized composite sandwich cover panels is relatively insensitive to changes in core density. Thus, core density should be chosen by criteria other than minimum weight (e.g., damage tolerance, ease of manufacture, etc.).

  20. The optimal community detection of software based on complex networks

    NASA Astrophysics Data System (ADS)

    Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong

    2016-02-01

    The community structure is important for software in terms of understanding design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method called Optimal Partition Software Network (OPSN) is proposed based on the dependency relationships among software functions. First, by analyzing the information of multiple execution traces of one piece of software, we construct the Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by this measure. Third, we select the top K (K = 1, 2, …) nodes as the cores of the initial communities (each containing only one core node). By comparing the dependency relationships between each remaining node and the K communities, we put the node into the existing community with which it has the closest relationship. Finally, we calculate the modularity for different initial K to obtain the optimal division. Experiments verify that OPSN efficiently detects the optimal community structure in various software systems.
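
    The core-selection loop described above can be sketched as follows, with node degree standing in for the paper's Fault Accumulation measure and a toy graph standing in for a software execution dependency network; nx.community.modularity from networkx is used to score each candidate division.

```python
# Hedged sketch: take the top-K most "important" functions as community cores,
# attach every other node to the core community it shares the most edges with,
# and keep the K that maximizes modularity.  Degree is a stand-in for FA; the
# karate-club graph is a stand-in for an execution-trace dependency network.
import networkx as nx

def detect(G, k_max=5):
    ranked = sorted(G.nodes, key=G.degree, reverse=True)   # FA stand-in
    best = None
    for k in range(1, min(k_max, len(ranked)) + 1):
        cores = ranked[:k]
        comms = {c: {c} for c in cores}
        for v in G.nodes:
            if v in cores:
                continue
            # attach v to the core community it shares the most edges with
            home = max(cores, key=lambda c: sum(1 for u in comms[c]
                                                if G.has_edge(v, u)))
            comms[home].add(v)
        q = nx.community.modularity(G, list(comms.values()))
        if best is None or q > best[0]:
            best = (q, k, list(comms.values()))
    return best

G = nx.karate_club_graph()
q, k, communities = detect(G)
print("best K:", k, "modularity:", round(q, 3))
```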

  1. Optimal ballistically captured Earth-Moon transfers

    NASA Astrophysics Data System (ADS)

    Ricord Griesemer, Paul; Ocampo, Cesar; Cooley, D. S.

    2012-07-01

    The optimality of a low-energy Earth-Moon transfer terminating in ballistic capture is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the problem is then modified to fix the time of transfer, allowing for optimal multi-impulse transfers. The tradeoff between transfer time and fuel cost is shown for Earth-Moon ballistic lunar capture transfers.

  2. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed +0.28 points improvement of isentropic efficiency at design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed: First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation on the Pareto-optimal solutions of this optimization shows that the stall margin increases with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents +0.41, +0.56 and +0.9 points improvement in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choking margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single and multi-point optimizations.

  3. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by ''self-organized criticality,'' a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called ''avalanches,'' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
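
    A minimal sketch of τ-EO (extremal optimization with power-law rank selection) on a toy problem (two-coloring a random graph to minimize monochromatic edges) is given below; the graph, τ and iteration budget are illustrative, and the per-vertex fitness is the usual fraction-of-satisfied-edges measure, not anything specific to this report.

```python
# Hedged sketch of tau-EO: repeatedly pick a poorly performing component via an
# approximate power-law over fitness ranks and replace it, keeping the best
# configuration seen.  All problem data and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p, tau = 60, 0.1, 1.4
adj = np.triu(rng.random((n, n)) < p, 1)
adj = adj | adj.T                                    # random undirected graph
color = rng.integers(0, 2, n)
deg = adj.sum(1)

def conflicts(c):
    return int(np.sum(adj & (c[:, None] == c[None, :])) // 2)

best, best_color = conflicts(color), color.copy()
for _ in range(5000):
    # per-vertex fitness: fraction of neighbors with the *other* color
    same = (adj & (color[:, None] == color[None, :])).sum(1)
    fitness = np.where(deg > 0, 1.0 - same / np.maximum(deg, 1), 1.0)
    order = np.argsort(fitness)                      # worst component first
    rank = min(n - 1, int(rng.pareto(tau - 1.0)))    # approx. power-law rank
    v = order[rank]
    color[v] ^= 1                                    # replace the unlucky component
    cur = conflicts(color)
    if cur < best:
        best, best_color = cur, color.copy()

print("monochromatic edges:", best)
```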

  4. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. The paper presents novel architectures for the Orthogonal Matching Pursuit (OMP) algorithm, one of the popular CS reconstruction algorithms. We show the implementation results of the proposed architectures on FPGA, ASIC and on a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity m. The algorithm is divided into three kernels. Each kernel is parallelized to reduce execution time, whereas efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. Implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, whereas with the thresholding method it requires 18 μs. The ASIC implementation reconstructs the signal in 13 μs. However, our custom many-core, operating at 1.18 GHz, takes 18.28 μs to complete. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC implementations perform 1.3x and 1.8x faster, respectively. Also, the proposed many-core implementation performs 3000x faster than the CPU and 2000x faster than the GPU.
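
    The OMP recovery loop that such architectures implement can be sketched in a few lines of numpy: greedily select the atom most correlated with the residual, re-solve least squares on the current support, and repeat up to the target sparsity. Sizes follow the demonstration case in the abstract (N = 256, sparsity 8, 64 measurements), but the measurement matrix and signal are random illustrative data.

```python
# Hedged sketch of basic Orthogonal Matching Pursuit (reference loop, not the
# hardware-optimized variant described in the record).
import numpy as np

def omp(A, y, sparsity):
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
N, M, m = 256, 64, 8
A = rng.standard_normal((M, N)) / np.sqrt(M)        # measurement matrix
x_true = np.zeros(N)
x_true[rng.choice(N, m, replace=False)] = rng.standard_normal(m)
y = A @ x_true
x_hat = omp(A, y, m)
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```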

  5. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    NASA Astrophysics Data System (ADS)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    The program package escript has been designed for solving mathematical modeling problems using Python, see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs), an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Because implementations are independent of data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids assembling the (in general dense) sensitivity matrix used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we will discuss the mathematical framework for inversion and appropriate solution schemes in escript. We will also give a brief introduction into escript's open framework for defining and solving geophysical inversion problems. Finally we will show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.

  6. Dynamic Programming and Graph Algorithms in Computer Vision*

    PubMed Central

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950

  7. Representations in Problem Solving: A Case Study with Optimization Problems

    ERIC Educational Resources Information Center

    Villegas, Jose L.; Castro, Enrique; Gutierrez, Jose

    2009-01-01

    Introduction: Representations play an essential role in mathematical thinking. They favor the understanding of mathematical concepts and stimulate the development of flexible and versatile thinking in problem solving. Here our focus is on their use in optimization problems, a type of problem considered important in mathematics teaching and…

  8. Class and Home Problems: Optimization Problems

    ERIC Educational Resources Information Center

    Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard

    2011-01-01

    Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…

  9. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    PubMed

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of proteins' primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins presented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only. This is proved as the core and the folds of the protein will have two identical sides for all short sequences.

  10. An analytical study of reduced-gravity liquid reorientation using a simplified marker and cell technique

    NASA Technical Reports Server (NTRS)

    Betts, W. S., Jr.

    1972-01-01

    A computer program called HOPI was developed to predict reorientation flow dynamics, wherein liquids move from one end of a closed, partially filled, rigid container to the other end under the influence of container acceleration. The program uses the simplified marker and cell numerical technique and, using explicit finite-differencing, solves the Navier-Stokes equations for an incompressible viscous fluid. The effects of turbulence are also simulated in the program. HOPI can consider curved as well as straight-walled boundaries. Both free-surface and confined flows can be calculated. The program was used to simulate five liquid reorientation cases. Three of these cases simulated actual NASA LeRC drop tower test conditions, while two cases simulated full-scale Centaur tank conditions. It was concluded that while HOPI can be used to analytically determine the fluid motion in a typical settling problem, there is a current need to optimize HOPI, both by reducing the computer usage time and by reducing the core storage required for a given problem size.

  11. Automated simultaneous multiple feature classification of MTI data

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.

    2002-08-01

    Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.

  12. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.

  13. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.

  14. New trends in astrodynamics and applications: optimal trajectories for space guidance.

    PubMed

    Azimov, Dilmurat; Bishop, Robert

    2005-12-01

    This paper presents recent results on the development of optimal analytic solutions to the variational problem of trajectory optimization and their application in the construction of on-board guidance laws. The importance of employing the analytically integrated trajectories in a mission design is discussed. It is assumed that the spacecraft is equipped with power-limited propulsion and moves in a central Newtonian field. Satisfaction of the necessary and sufficient conditions for optimality of trajectories is analyzed. All possible thrust arcs and corresponding classes of the analytical solutions are classified based on the propulsion system parameters and performance index of the problem. The solutions are presented in a form convenient for applications in escape, capture, and interorbital transfer problems. Optimal guidance and neighboring optimal guidance problems are considered. It is shown that the analytic solutions can be used as reference trajectories in constructing the guidance algorithms for the maneuver problems mentioned above. An illustrative example of a spiral trajectory that terminates on a given elliptical parking orbit is discussed.

  15. Rapid optimization of multiple-burn rocket flights.

    NASA Technical Reports Server (NTRS)

    Brown, K. R.; Harrold, E. F.; Johnson, G. W.

    1972-01-01

    Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.

  16. Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.

    PubMed

    Ruymgaart, A Peter; Elber, Ron

    2012-11-13

    We report Graphics Processing Unit (GPU) and Open-MP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss in detail the design of the code and we illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is a four-fold improvement over the factor of 10 reported for our initial GPU implementation, which did not include water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints on all bonds, runs in parallel on multiple Open-MP cores or entirely on the GPU. It is based on a Conjugate Gradient solution for the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).
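
    The CG SHAKE routine above obtains the Lagrange multipliers of the bond constraints from a conjugate-gradient linear solve. The sketch below is a generic conjugate-gradient solver for a symmetric positive-definite system, shown only to illustrate the kind of solver involved; it is not the authors' GPU code, and the test matrix and tolerance are made up.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive-definite A by the conjugate-gradient method."""
        x = np.zeros_like(b)
        r = b - A @ x          # residual
        p = r.copy()           # search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    # Small SPD test system; the result should match np.linalg.solve(A, b).
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))
    ```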

  17. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm-intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary to each other. HS has strong global exploration power but low convergence speed. Conversely, TLBO converges much faster but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms to synergistically solve complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore unknown regions and TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. The experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications. PMID:28403224
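
    Purely as a hypothetical illustration of the idea described in this abstract, the sketch below alternates a Harmony-Search-style exploration move with a TLBO-style teacher-phase move on the sphere function, steering the choice with a crude self-adaptive probability. It is not the published HSTLBO algorithm; the adaptation rule and all parameters are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):
        return float(np.sum(x ** 2))

    dim, pop_size, iters = 10, 20, 500
    lower, upper = -5.0, 5.0
    pop = rng.uniform(lower, upper, (pop_size, dim))
    fitness = np.array([sphere(x) for x in pop])
    p_hs = 0.5                                   # self-adaptive probability of choosing the HS move

    for _ in range(iters):
        use_hs = rng.random() < p_hs
        if use_hs:                               # HS-style move: recall a memory vector, pitch-adjust a few entries
            new = pop[rng.integers(pop_size)].copy()
            mask = rng.random(dim) < 0.1
            new[mask] += rng.uniform(-0.5, 0.5, mask.sum())
        else:                                    # TLBO-style teacher phase: move a learner toward the best solution
            teacher = pop[np.argmin(fitness)]
            mean = pop.mean(axis=0)
            i = rng.integers(pop_size)
            new = pop[i] + rng.random(dim) * (teacher - rng.integers(1, 3) * mean)
        new = np.clip(new, lower, upper)
        f_new = sphere(new)
        worst = np.argmax(fitness)
        improved = f_new < fitness[worst]
        if improved:                             # replace the current worst member
            pop[worst], fitness[worst] = new, f_new
            # crude self-adaptation: nudge selection toward whichever move just improved the population
            p_hs = float(np.clip(p_hs + (0.01 if use_hs else -0.01), 0.1, 0.9))

    print("best value:", fitness.min())
    ```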

  18. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    PubMed

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm-intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary to each other. HS has strong global exploration power but low convergence speed. Conversely, TLBO converges much faster but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms to synergistically solve complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore unknown regions and TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. The experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications.

  19. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
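
    For the interval (box) uncertainty set mentioned above, a constraint sum_j a_j x_j <= b whose coefficients are only known to lie in [ā_j - δ_j, ā_j + δ_j] has the worst-case robust counterpart sum_j ā_j x_j + sum_j δ_j |x_j| <= b. The snippet below is a minimal sketch of that reformulation for a toy problem with x >= 0 (so |x_j| = x_j), using SciPy's linprog; the data are made up and this is not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Nominal problem: maximize 3*x1 + 2*x2  subject to  a1*x1 + a2*x2 <= 10,  x >= 0,
    # where a = (a1, a2) is only known to lie in the box a_bar +/- delta.
    c = np.array([-3.0, -2.0])            # linprog minimizes, so negate the objective
    a_bar = np.array([2.0, 1.0])
    delta = np.array([0.3, 0.1])

    # Robust counterpart for box uncertainty with x >= 0:
    #   (a_bar + delta) @ x <= b   holds for every realization of a in the box.
    A_ub = (a_bar + delta).reshape(1, -1)
    b_ub = np.array([10.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("robust solution:", res.x, "objective:", -res.fun)
    ```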

  20. The Role of Intuition in the Solving of Optimization Problems

    ERIC Educational Resources Information Center

    Malaspina, Uldarico; Font, Vicenc

    2010-01-01

    This article presents the partial results obtained in the first stage of the research, which sought to answer the following questions: (a) What is the role of intuition in university students' solutions to optimization problems? (b) What is the role of rigor in university students' solutions to optimization problems? (c) How is the combination of…

  1. What Are the Core Elements of Your Curriculum?

    ERIC Educational Resources Information Center

    Exchange: The Early Childhood Leaders' Magazine Since 1978, 2009

    2009-01-01

    Several administrators discuss the core elements of their curriculum. These core elements are: (1) Child-centered; (2) Play; (3) Problem solving; (4) Respect; (5) Creativity; (6) Community; (7) Independence; (8) Curiosity; (9) Love of learning; (10) Relationship; (11) Cooperation; (12) Self-confidence; (13) Language; (14) Joy; (15) Nature; Natural…

  2. Parametric optimal control of uncertain systems under an optimistic value criterion

    NASA Astrophysics Data System (ADS)

    Li, Bo; Zhu, Yuanguo

    2018-01-01

    It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly such that the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered for simplifying the expression of optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
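
    As a reference point for the standard result this abstract starts from, namely that the optimal control of a linear-quadratic model is characterized by a Riccati equation, the sketch below computes an infinite-horizon LQR feedback gain from the continuous-time algebraic Riccati equation with SciPy. The system matrices are arbitrary examples; this is not the paper's uncertain, optimistic-value formulation.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Double-integrator example:  x_dot = A x + B u,  cost = integral of (x'Qx + u'Ru) dt.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    P = solve_continuous_are(A, B, Q, R)      # solves A'P + PA - P B R^{-1} B'P + Q = 0
    K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain: u = -K x
    print("Riccati solution P:\n", P)
    print("LQR gain K:", K)
    ```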

  3. Misfit stresses in a composite core-shell nanowire with an eccentric parallelepipedal core subjected to one-dimensional cross dilatation eigenstrain

    NASA Astrophysics Data System (ADS)

    Krasnitckii, S. A.; Kolomoetc, D. R.; Smirnov, A. M.; Gutkin, M. Yu

    2017-03-01

    We present an analytical solution to the boundary-value problem in the classical theory of elasticity for a core-shell nanowire with an eccentric parallelepipedal core of an arbitrary rectangular cross section. The core is subjected to one-dimensional cross dilatation eigenstrain. The misfit stresses are found in a concise and transparent closed form which is convenient for practical use in theoretical modeling of misfit relaxation processes.

  4. Parameter estimation using meta-heuristics in systems biology: a comprehensive review.

    PubMed

    Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie

    2012-01-01

    This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.

  5. Optimal Control for Stochastic Delay Evolution Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Qingxin, E-mail: mqx@hutc.zj.cn; Shen, Yang, E-mail: skyshen87@gmail.com

    2016-08-15

    In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin’s maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.

  6. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  7. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    PubMed

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem that assigns a set of nurses to daily shifts while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is successfully applied to this multiobjective scheduling problem. MODBCO integrates a deterministic local search, a multiagent particle-system environment, and a honey-bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
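
    The local-search component named above is a Modified Nelder-Mead Method. Purely as a hedged illustration of what a Nelder-Mead local search does, the sketch below uses SciPy's standard implementation on a classic test function; it is not the authors' modification or their nurse-rostering objective.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def rosenbrock(x):
        """Classic non-convex test function with a curved valley (minimum at (1, 1))."""
        return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

    x0 = np.array([-1.2, 1.0])
    result = minimize(rosenbrock, x0, method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
    print("minimum found at:", result.x)
    ```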

  8. Optimal Control of Thermo--Fluid Phenomena in Variable Domains

    NASA Astrophysics Data System (ADS)

    Volkov, Oleg; Protas, Bartosz

    2008-11-01

    This presentation concerns our continued research on adjoint-based optimization of viscous incompressible flows (the Navier-Stokes problem) coupled with heat conduction involving change of phase (the Stefan problem), and occurring in domains with variable boundaries. This problem is motivated by optimization of advanced welding techniques used in automotive manufacturing, where the goal is to determine an optimal heat input, so as to obtain a desired shape of the weld pool surface upon solidification. We argue that computation of sensitivities (gradients) in such free-boundary problems requires the use of the shape-differential calculus as a key ingredient. We also show that, with such tools available, the computational solution of the direct and inverse (optimization) problems can in fact be achieved in a similar manner and in a comparable computational time. Our presentation will address certain mathematical and computational aspects of the method. As an illustration we will consider the two-phase Stefan problem with contact point singularities, where our approach allows us to obtain a thermodynamically consistent solution.

  9. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

    PubMed Central

    Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem that assigns a set of nurses to daily shifts while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is successfully applied to this multiobjective scheduling problem. MODBCO integrates a deterministic local search, a multiagent particle-system environment, and a honey-bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria. PMID:28473849

  10. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is exploratory work on the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. To explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case as a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem will then be solved experimentally as QUBO instances defined on the Chimera graphs of the quantum computer.
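
    A QUBO instance asks for a binary vector x minimizing x'Qx. As a small, hypothetical illustration of the target form the abstract maps the assimilation problem into, the sketch below solves a tiny instance by brute force rather than on quantum hardware; the matrix entries are arbitrary.

    ```python
    import itertools
    import numpy as np

    def solve_qubo_bruteforce(Q):
        """Exhaustively minimize x^T Q x over binary vectors x (feasible only for small n)."""
        n = Q.shape[0]
        best_x, best_val = None, np.inf
        for bits in itertools.product([0, 1], repeat=n):
            x = np.array(bits)
            val = float(x @ Q @ x)
            if val < best_val:
                best_x, best_val = x, val
        return best_x, best_val

    # Tiny example QUBO matrix (arbitrary numbers).
    Q = np.array([[-1.0,  2.0,  0.0],
                  [ 0.0, -2.0,  1.0],
                  [ 0.0,  0.0, -0.5]])
    print(solve_qubo_bruteforce(Q))   # -> (array([0, 1, 0]), -2.0)
    ```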

  11. Program to Optimize Simulated Trajectories (POST). Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to the programmer and relating to the program to optimize simulated trajectories (POST) is presented. Topics discussed include: program structure and logic, subroutine listings and flow charts, and internal FORTRAN symbols. The POST core requirements are summarized along with program macrologic.

  12. Developing core collections to optimize the management and the exploitation of diversity of the coffee Coffea canephora.

    PubMed

    Leroy, Thierry; De Bellis, Fabien; Legnate, Hyacinthe; Musoli, Pascal; Kalonji, Adrien; Loor Solórzano, Rey Gastón; Cubry, Philippe

    2014-06-01

    The management of diversity for conservation and breeding is of great importance for all plant species, and this is particularly true in perennial species such as the coffee Coffea canephora. This species exhibits large genetic and phenotypic diversity with six different diversity groups. Large field collections are available in the Ivory Coast, Uganda and other Asian, American and African countries, but they are very expensive and time-consuming to establish and maintain over large areas. We propose to improve coffee germplasm management through the construction of genetic core collections derived from a set of 565 accessions characterized with 13 microsatellite markers. Core collections of 12, 24 and 48 accessions were defined using two methods aimed at maximizing the allelic diversity (Maximization strategy) or the genetic distance (Maximum-Length Sub-Tree method). A composite core collection of 77 accessions is proposed for both objectives of an optimal management of diversity and breeding. This core collection presents a gene diversity value of 0.8 and exhibits the totality of the major alleles (i.e., 184) present in the initial set. The seven proposed core collections constitute a valuable tool for diversity management and a foundation for breeding programs. The use of these collections for collection management in research centers and breeding perspectives for coffee improvement are discussed.
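
    The Maximization strategy mentioned above selects accessions so as to retain as many distinct alleles as possible. The greedy sketch below conveys that idea on made-up genotype data; it is not the software actually used in the study, and the data structures are hypothetical.

    ```python
    def greedy_core_collection(accession_alleles, core_size):
        """Greedily pick accessions that add the most not-yet-covered alleles.

        accession_alleles: dict mapping accession name -> set of observed alleles,
                           e.g. (marker, allele) pairs across microsatellite loci.
        core_size:         number of accessions to retain in the core collection.
        """
        covered, core = set(), []
        candidates = dict(accession_alleles)
        while candidates and len(core) < core_size:
            # pick the accession contributing the largest number of new alleles
            best = max(candidates, key=lambda a: len(candidates[a] - covered))
            core.append(best)
            covered |= candidates.pop(best)
        return core, covered

    # Toy data: three microsatellite markers, alleles encoded as (marker, allele) pairs.
    toy = {
        "acc1": {("m1", 101), ("m2", 200), ("m3", 305)},
        "acc2": {("m1", 101), ("m2", 202), ("m3", 305)},
        "acc3": {("m1", 103), ("m2", 200), ("m3", 307)},
    }
    print(greedy_core_collection(toy, core_size=2))
    ```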

  13. Optimal rail container shipment planning problem in multimodal transportation

    NASA Astrophysics Data System (ADS)

    Cao, Chengxuan; Gao, Ziyou; Li, Keping

    2012-09-01

    The optimal rail container shipment planning problem in multimodal transportation is studied in this article. The characteristics of the multi-period planning problem are presented, and the problem is formulated as a large-scale 0-1 integer programming model, which maximizes the total profit generated by all freight bookings accepted in a multi-period planning horizon subject to the limited capacities. Two heuristic algorithms are proposed to obtain an approximate optimal solution of the problem. Finally, numerical experiments are conducted to demonstrate the proposed formulation and heuristic algorithms.
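
    The abstract does not spell out its two heuristics, so the sketch below is only a generic stand-in for that kind of booking-acceptance heuristic: a greedy 0-1 selection by profit per unit of capacity in a single-period, single-capacity toy setting. The data and function name are hypothetical.

    ```python
    def greedy_booking_selection(bookings, capacity):
        """Greedy 0-1 selection of bookings (profit, demand) under one capacity limit.

        Knapsack-style heuristic: sort by profit density, accept while capacity remains.
        """
        order = sorted(bookings, key=lambda b: b[0] / b[1], reverse=True)
        accepted, used, profit = [], 0, 0.0
        for p, d in order:
            if used + d <= capacity:
                accepted.append((p, d))
                used += d
                profit += p
        return accepted, profit

    # Toy bookings as (profit, container demand) pairs, capacity of 100 containers.
    bookings = [(60.0, 40), (50.0, 30), (45.0, 30), (20.0, 25)]
    print(greedy_booking_selection(bookings, capacity=100))
    ```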

  14. Vulnerable Atherosclerotic Plaque Elasticity Reconstruction Based on a Segmentation-Driven Optimization Procedure Using Strain Measurements: Theoretical Framework

    PubMed Central

    Le Floc’h, Simon; Tracqui, Philippe; Finet, Gérard; Gharib, Ahmed M.; Maurice, Roch L.; Cloutier, Guy; Pettigrew, Roderic I.

    2016-01-01

    It is now recognized that prediction of the vulnerable coronary plaque rupture requires not only an accurate quantification of fibrous cap thickness and necrotic core morphology but also a precise knowledge of the mechanical properties of plaque components. Indeed, such knowledge would allow a precise evaluation of the peak cap-stress amplitude, which is known to be a good biomechanical predictor of plaque rupture. Several studies have been performed to reconstruct a Young’s modulus map from strain elastograms. It seems that the main issue for improving such methods does not rely on the optimization algorithm itself, but rather on preconditioning requiring the best estimation of the plaque components’ contours. The present theoretical study was therefore designed to develop: 1) a preconditioning model to extract the plaque morphology in order to initiate the optimization process, and 2) an approach combining a dynamic segmentation method with an optimization procedure to highlight the modulogram of the atherosclerotic plaque. This methodology, based on the continuum mechanics theory prescribing the strain field, was successfully applied to seven intravascular ultrasound coronary lesion morphologies. The reconstructed cap thickness, necrotic core area, calcium area, and the Young’s moduli of the calcium, necrotic core, and fibrosis were obtained with mean relative errors of 12%, 4% and 1%, 43%, 32%, and 2%, respectively. PMID:19164080

  15. Core-shell alginate-ghatti gum modified montmorillonite composite matrices for stomach-specific flurbiprofen delivery.

    PubMed

    Bera, Hriday; Ippagunta, Sohitha Reddy; Kumar, Sanoj; Vangala, Pavani

    2017-07-01

    Novel alginate-arabic gum (AG) gel membrane coated alginate-ghatti gum (GG) modified montmorillonite (MMT) composite matrices were developed for intragastric flurbiprofen (FLU) delivery by combining floating and mucoadhesion mechanisms. The clay-biopolymer composite matrices containing FLU as core were accomplished by ionic-gelation technique. Effects of polymer-blend (alginate:GG) ratios and crosslinker (CaCl₂) concentrations on drug entrapment efficiency (DEE, %) and cumulative drug release after 8 h (Q8h, %) were studied to optimize the core matrices by a 3² factorial design. The optimized matrices (F-O) demonstrated DEE of 91.69±1.43% and Q8h of 74.96±1.56% with minimum errors in prediction. The alginate-AG gel membrane enveloped optimized matrices (F-O, coated) exhibited superior buoyancy, better ex vivo mucoadhesion and slower drug release rate. The drug release profile of FLU-loaded uncoated and coated optimized matrices was best fitted in Korsmeyer-Peppas model with anomalous diffusion and case-II transport driven mechanism, respectively. The uncoated and coated matrices containing FLU were also characterized for drug-excipients compatibility, drug crystallinity, thermal behaviour and surface morphology. Thus, the newly developed alginate-AG gel membrane coated alginate-GG modified MMT composite matrices are appropriate for intragastric delivery of FLU over an extended period of time with improved therapeutic benefits. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Stochastic Optimization For Water Resources Allocation

    NASA Astrophysics Data System (ADS)

    Yamout, G.; Hatfield, K.

    2003-12-01

    For more than 40 years, water resources allocation problems have been addressed using deterministic mathematical optimization. When data uncertainties exist, these methods can lead to solutions that are sub-optimal or even infeasible. While optimization models have been proposed for water resources decision-making under uncertainty, no attempts have been made to address the uncertainties in water allocation problems in an integrated approach. This paper presents an integrated dynamic, multi-stage, feedback-controlled, linear, stochastic, and distributed-parameter optimization approach to solve a problem of water resources allocation. It attempts to capture (1) the conflict caused by competing objectives, (2) the environmental degradation produced by resource consumption, and finally (3) the uncertainty and risk generated by the inherently random nature of the state and decision parameters involved in such a problem. A theoretical system is defined through its different elements. These elements, consisting mainly of water resource components and end-users, are described in terms of quantity, quality, and present and future associated risks and uncertainties. Models are identified, modified, and interfaced together to constitute an integrated water allocation optimization framework. This effort is a novel approach to the water allocation optimization problem that accounts for the uncertainties associated with all its elements, thus resulting in a solution that correctly reflects the physical problem at hand.

  17. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under different criteria is presented; it includes both functions previously shown empirically to perform best and new functions that may be worth trying.
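
    One classical candidate for the scaling question is linear fitness scaling, f' = a*f + b, chosen so that the average scaled fitness equals the average raw fitness while the best individual receives a fixed multiple of the average. The sketch below implements that textbook rule only as a familiar baseline; it is not the optimal scaling derived in the paper, and the multiplier value is an arbitrary choice.

    ```python
    import numpy as np

    def linear_scaling(fitness, c_mult=2.0):
        """Linear scaling f' = a*f + b: mean preserved, maximum mapped to c_mult * mean.

        If that slope would make the worst individual negative, rescale so the
        minimum maps to zero instead (the usual safeguard).
        """
        f = np.asarray(fitness, dtype=float)
        f_avg, f_max, f_min = f.mean(), f.max(), f.min()
        if np.isclose(f_max, f_avg):                # all fitnesses equal: nothing to scale
            return f.copy()
        a = (c_mult - 1.0) * f_avg / (f_max - f_avg)
        b = f_avg * (1.0 - a)
        if a * f_min + b < 0.0:                     # safeguard: map the minimum to zero
            a = f_avg / (f_avg - f_min)
            b = -a * f_min
        return a * f + b

    raw = [1.0, 2.0, 3.0, 10.0]
    print(linear_scaling(raw))   # mean stays 4.0, best individual gets 2 * 4.0 = 8.0
    ```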

  18. A core outcome set for localised prostate cancer effectiveness trials: protocol for a systematic review of the literature and stakeholder involvement through interviews and a Delphi survey.

    PubMed

    MacLennan, Steven; Bekema, Hendrika J; Williamson, Paula R; Campbell, Marion K; Stewart, Fiona; MacLennan, Sara J; N'Dow, James M O; Lam, Thomas B L

    2015-03-04

    Prostate cancer is a growing health problem worldwide. The management of localised prostate cancer is controversial. It is unclear which of several surgical, radiotherapeutic, ablative, and surveillance treatments is the most effective. All have cost, process and recovery, and morbidity implications which add to treatment decision-making complexity for patients and healthcare professionals. Evidence from randomised controlled trials (RCTs) is not optimal because of uncertainty as to what constitutes important outcomes. Another issue hampering evidence synthesis is heterogeneity of outcome definition, measurement, and reporting. This project aims to determine which outcomes are the most important to patients and healthcare professionals, and use these findings to recommend a standardised core outcome set for comparative effectiveness trials of treatments for localised prostate cancer, to optimise decision-making. The range of potentially important outcomes and measures will be identified through systematic reviews of the literature and semi-structured interviews with patients. A consultation exercise involving representatives from two key stakeholder groups (patients and healthcare professionals) will ratify the list of outcomes to be entered into a three round Delphi study. The Delphi process will refine and prioritise the list of identified outcomes. A methodological substudy (nested RCT design) will also be undertaken. Participants will be randomised after round one of the Delphi study to one of three feedback groups, based on different feedback strategies, in order to explore the potential impact of feedback strategies on participant responses. This may assist the design of a future core outcome set and Delphi studies. Following the Delphi study, a final consensus meeting attended by representatives from both stakeholder groups will determine the final recommended core outcome set. This study will inform clinical practice and future trials of interventions of localised prostate cancer by standardising a core outcome set which should be considered in comparative effectiveness studies for localised prostate cancer.

  19. Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles

    NASA Astrophysics Data System (ADS)

    Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi

    2012-09-01

    In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index, which should be minimized, was defined as the cost function. The resulting problem was a two-point boundary value problem (TPBVP). A genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. Applying the Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving some energy sources, was discussed. Some near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.

  20. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time as its objective. Owing to the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
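
    Because the paper's improvement to PSO is specific to the heating-system model, the sketch below shows only the baseline global-best PSO update that such a method starts from, applied to a placeholder cost function; the inertia and acceleration coefficients are typical textbook values, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):
        """Placeholder cost function standing in for a life-cycle-cost model."""
        return float(np.sum((x - 1.5) ** 2))

    dim, swarm_size, iters = 5, 30, 300
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients
    pos = rng.uniform(-10, 10, (swarm_size, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((swarm_size, dim)), rng.random((swarm_size, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val                       # update personal bests
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()        # update global best

    print("best cost:", pbest_val.min(), "at", gbest)
    ```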
