Sample records for optimization technique called

  1. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

    A general technique uses a modified version of an existing technique termed the pattern search technique. A new procedure, called the parallel move strategy, permits the pattern search technique to be used with problems involving a constraint.

  2. Investigation on the use of optimization techniques for helicopter airframe vibrations design studies

    NASA Technical Reports Server (NTRS)

    Sreekanta Murthy, T.

    1992-01-01

    Results of the investigation of formal nonlinear programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.

  3. How to mathematically optimize drug regimens using optimal control.

    PubMed

    Moore, Helen

    2018-02-01

    This article gives an overview of a technique called optimal control, which is used to optimize real-world quantities represented by mathematical models. I include background information about the historical development of the technique and applications in a variety of fields. The main focus here is the application to diseases and therapies, particularly the optimization of combination therapies, and I highlight several such examples. I also describe the basic theory of optimal control, and illustrate each of the steps with an example that optimizes the doses in a combination regimen for leukemia. References are provided for more complex cases. The article is aimed at modelers working in drug development, who have not used optimal control previously. My goal is to make this technique more accessible in the biopharma community.

  4. Dynamic Optimization

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
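    The learn-optimize cycle above separates learning (finding predictable patterns in executions) from transformation (exploiting them). Below is a minimal, hypothetical Python sketch of that split, not the paper's Prolog implementation: a learning pass counts which arguments a function is most often called with, and an optimization pass then specializes the function by precomputing results for those hot inputs.

```python
from collections import Counter

def learn(program, inputs):
    """Learning element: run the program on observed inputs and record
    which arguments occur most often (a 'predictable pattern')."""
    counts = Counter()
    for x in inputs:
        program(x)
        counts[x] += 1
    return [arg for arg, _ in counts.most_common(3)]   # the hot arguments

def optimize(program, hot_args):
    """Optimization element: map the learned pattern into a beneficial
    transformation, here a lookup table precomputed for the hot arguments."""
    table = {arg: program(arg) for arg in hot_args}
    def optimized(x):
        return table[x] if x in table else program(x)
    return optimized

def slow_fib(n):
    return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

# One learn-optimize cycle over a stream of observed executions.
observed = [20, 20, 25, 20, 25, 10]
fast_fib = optimize(slow_fib, learn(slow_fib, observed))
print(fast_fib(20))   # answered from the precomputed table
```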

  5. Distributed Generation Planning using Peer Enhanced Multi-objective Teaching-Learning based Optimization in Distribution Networks

    NASA Astrophysics Data System (ADS)

    Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth

    2017-04-01

    In this paper, an optimization technique called the peer enhanced teaching learning based optimization (PeTLBO) algorithm is used in the multi-objective problem domain. The PeTLBO algorithm is parameter-less, so it reduces the computational burden. The proposed peer enhanced multi-objective TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and a maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is considered as 0.85 lead. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; in order to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric) by comparing with the robust multi-objective technique called non-dominated sorting genetic algorithm-II and also with the basic TLBO.
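    The abstract mentions using fuzzy set theory to pick the best compromise solution from the Pareto front. A common formulation (assumed here; the paper's exact membership functions are not reproduced) gives each solution a linear membership value per objective and selects the solution with the largest normalized membership, as in this minimal NumPy sketch.

```python
import numpy as np

def best_compromise(pareto_objs):
    """pareto_objs: (n_solutions, n_objectives) array of minimization
    objectives, e.g. [real power loss, voltage deviation].
    Returns the index of the fuzzy best-compromise solution."""
    f = np.asarray(pareto_objs, dtype=float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    # Linear membership: 1 at the per-objective best, 0 at the worst.
    mu = (f_max - f) / np.where(f_max > f_min, f_max - f_min, 1.0)
    # Normalized membership of each solution over the whole front.
    score = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(score))

front = [[0.12, 0.030], [0.10, 0.045], [0.15, 0.020]]   # toy Pareto front
print("best compromise index:", best_compromise(front))
```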

  6. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
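    To make the least squares PCE setup concrete, here is a generic one-dimensional sketch (plain Monte Carlo sampling only; the alphabetic-coherence-optimal design proposed in the paper is not implemented): the coefficients of a Hermite chaos expansion of a model with a standard normal input are fit by ordinary least squares.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def model(xi):                          # stand-in for an expensive model
    return np.exp(0.3 * xi) + 0.1 * xi**2

order, n_samples = 5, 200               # PCE order and oversampled design
xi = rng.standard_normal(n_samples)     # plain Monte Carlo sampling
y = model(xi)

# Measurement matrix of probabilists' Hermite polynomials He_0..He_order.
Psi = hermevander(xi, order)
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Validate the least squares PCE surrogate on fresh samples.
xi_test = rng.standard_normal(1000)
y_hat = hermevander(xi_test, order) @ coeffs
print("RMS surrogate error:", np.sqrt(np.mean((y_hat - model(xi_test))**2)))
```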

  7. Optimization of Online Searching by Pre-Recording the Search Statements: A Technique for the HP-2645A Terminal.

    ERIC Educational Resources Information Center

    Oberhauser, O. C.; Stebegg, K.

    1982-01-01

    Describes the terminal's capabilities, ways to store and call up lines of statements, cassette tapes needed during searches, and master tape's use for login storage. Advantages of the technique and two sources are listed. (RBF)

  8. Optimized Determination of Deployable Consumable Spares Packages

    DTIC Science & Technology

    2007-06-01

    also called deployable bench stock) • CRSP = Consumable Readiness Spares Package • COLT = Customer-Oriented Leveling Technique • ASM = Aircraft...

  9. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    The nonlinear programming problem is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a new swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this problem. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  10. Particle swarm optimization with recombination and dynamic linkage discovery.

    PubMed

    Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung

    2007-12-01

    In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
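    For readers who want the baseline that PSO-RDL extends, a minimal particle swarm optimizer is sketched below; the recombination operator and dynamic linkage discovery are omitted, and the inertia and acceleration coefficients are ordinary textbook values, not those used in the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]                  # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, float(pbest_val.min())

sphere = lambda z: float(np.sum(z ** 2))             # toy benchmark function
print(pso(sphere, dim=5))
```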

  11. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic programming problem. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model is used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value inevitably caused by constraint adding. It can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure a stable iteration convergence. The new algorithm is tested on four cases, including a head-and-neck, a prostate, a lung and an oropharyngeal case, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. It is a more efficient optimization technique to some extent for choosing constraints than the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
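    The overall iteration (solve a constrained quadratic program, then add dose caps for the worst violating voxels and re-solve) can be imitated on a toy problem as below. The sketch uses SciPy's SLSQP instead of an interior point solver, a random influence matrix, and a crude "worst offenders first" rule in place of the paper's geometric distance sorting, so it only mirrors the structure of the algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_voxels, n_beamlets = 40, 10
A = rng.random((n_voxels, n_beamlets))        # toy dose-influence matrix
prescribed = np.full(n_voxels, 1.0)           # target dose per voxel
dose_cap, max_over = 1.1, 10                  # dose-volume style limit

def objective(x):                             # quadratic fidelity term
    r = A @ x - prescribed
    return float(r @ r)

active = set()                                # voxels given a hard dose cap
x = np.full(n_beamlets, 0.1)
for _ in range(20):
    cons = [{"type": "ineq", "fun": (lambda x, i=i: dose_cap - A[i] @ x)}
            for i in active]
    x = minimize(objective, x, bounds=[(0, None)] * n_beamlets,
                 constraints=cons, method="SLSQP").x
    dose = A @ x
    violators = np.where(dose > dose_cap + 1e-6)[0]
    if len(violators) <= max_over:            # dose-volume constraint met
        break
    worst = violators[np.argsort(dose[violators])[::-1][:5]]
    active.update(int(i) for i in worst)      # add constraints step by step

print("objective:", round(objective(x), 4), "capped voxels:", len(active))
```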

  12. Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent

    PubMed Central

    De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle

    2018-01-01

    Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
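    As a small illustration of the low-precision idea (this is a generic sketch under assumed settings, not the Buckwild! implementation, the DMGC model, or anything asynchronous), the weights of a linear regression are stored as 8-bit integers with a fixed scale and updated by SGD with stochastic rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 8
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

scale = 1 / 32.0                        # fixed-point step of the int8 weights
w_q = np.zeros(d, dtype=np.int8)        # low-precision parameter vector

def stochastic_round(v):
    """Round randomly up or down with probability given by the remainder,
    which keeps the quantized update unbiased."""
    lo = np.floor(v)
    return (lo + (rng.random(v.shape) < (v - lo))).astype(np.int64)

lr = 0.05
for step in range(5000):
    i = rng.integers(n)
    w = w_q.astype(np.float64) * scale          # dequantize for the gradient
    grad = (X[i] @ w - y[i]) * X[i]             # single-sample gradient
    update = (w - lr * grad) / scale            # new weights in int8 units
    w_q = np.clip(stochastic_round(update), -128, 127).astype(np.int8)

print("recovery error:", round(float(np.linalg.norm(w_q * scale - w_true)), 3))
```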

  13. Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

    USGS Publications Warehouse

    Safak, Erdal

    1989-01-01

    This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems, and can be a useful tool in earthquake engineering.

  14. ToTem: a tool for variant calling pipeline optimization.

    PubMed

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatics analyses of next generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process and with the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables allowing an optimal pipeline to be selected, based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is a tool for automated pipeline optimization which is freely available as a web application at https://totem.software.
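    The core of such a tuning loop (enumerate candidate parameter settings, score each with cross-validation on precision/recall/F-measure, keep the best) can be sketched generically; the variant-calling pipeline below is a hypothetical stand-in function, since ToTem's actual tool integration is not reproduced here.

```python
import itertools
import statistics

def f_measure(precision, recall):
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def run_pipeline(params, fold):
    """Hypothetical stand-in: returns (precision, recall) of a variant-calling
    pipeline run with `params`, evaluated on cross-validation fold `fold`."""
    p = 0.9 - 0.02 * abs(params["min_qual"] - 30) / 10 - 0.01 * fold
    r = 0.8 + 0.03 * (params["min_depth"] == 10) - 0.01 * fold
    return max(p, 0.0), max(r, 0.0)

grid = {"min_qual": [20, 30, 40], "min_depth": [5, 10, 20]}   # assumed knobs
folds = range(5)                                              # CV folds

best = None
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    scores = [f_measure(*run_pipeline(params, k)) for k in folds]
    mean_f = statistics.mean(scores)      # CV mean penalizes unstable settings
    if best is None or mean_f > best[0]:
        best = (mean_f, params)

print("selected parameters:", best[1], "cross-validated F:", round(best[0], 3))
```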

  15. Improvement to the scanning electron microscope image adaptive Canny optimization colorization by pseudo-mapping.

    PubMed

    Lo, T Y; Sim, K S; Tso, C P; Nia, M E

    2014-01-01

    An improvement to the previously proposed adaptive Canny optimization technique for scanning electron microscope image colorization is reported. The additional feature, called the pseudo-mapping technique, is that the grayscale markings are temporarily mapped to a pre-defined pseudo-color map as a means to instill color information for grayscale colors in the chrominance channels. This allows the presence of grayscale markings to be identified; hence optimized colorization of grayscale colors is made possible. This additional feature enhances the flexibility of scanning electron microscope image colorization by providing a wider range of possible color enhancement. Furthermore, the nature of this technique also allows users to adjust the luminance intensities of selected regions from the original image to a certain extent. © 2014 Wiley Periodicals, Inc.

  16. Applications of Evolutionary Technology to Manufacturing and Logistics Systems : State-of-the Art Survey

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin

    Many combinatorial optimization problems from industrial engineering and operations research in the real world are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such kinds of hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state-of-the-art in the use of EAs in manufacturing and logistics systems. In order to demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models and two-stage logistics models in logistics systems; job-shop scheduling and resource constrained project scheduling in manufacturing systems.

  17. Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm

    NASA Astrophysics Data System (ADS)

    Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi

    2014-01-01

    This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived based on a model of group hunting of animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle. Thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated through modeling and control problems, and the results have been compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique as it can extract the accurate TSK fuzzy model with an appropriate number of rules.

  18. An Application of the A* Search to Trajectory Optimization

    DTIC Science & Technology

    1990-05-11

    linearized model of orbital motion called the Clohessy-Wiltshire equations and a node search technique called A*. The planner discussed in this thesis starts...states while transfer time is left unspecified...HILL'S (CLOHESSY-WILTSHIRE) EQUATIONS: The Euler-Hill equations describe... Clohessy-Wiltshire equations. The coordinate system used in this thesis is commonly referred to as the Local Vertical, Local Horizontal or LVLH reference frame

  19. Command and Control of Teams of Autonomous Units

    DTIC Science & Technology

    2012-06-01

    done by a hybrid genetic algorithm (GA) particle swarm optimization (PSO) algorithm called PIDGION-alternate. This training algorithm is an ANN...human controller will recognize the behaviors as being safe and correct. As the HyperNEAT approach produces Artificial Neural Nets (ANN), we can...optimization technique that generates efficient ANN controls from simple environmental feedback. FALCONET has been tested showing that it can produce

  20. Portable parallel portfolio optimization in the Aurora Financial Management System

    NASA Astrophysics Data System (ADS)

    Laure, Erwin; Moritsch, Hans

    2001-07-01

    Financial planning problems are formulated as large scale, stochastic, multiperiod, tree structured optimization problems. An efficient technique for solving this kind of problem is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we selected the programming language Java for our implementation and used a high level Java based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.

  1. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.

  2. Computer programs for generation and evaluation of near-optimum vertical flight profiles

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Waters, M. H.; Patmore, L. C.

    1983-01-01

    Two extensive computer programs were developed. The first, called OPTIM, generates a reference near-optimum vertical profile, and it contains control options so that the effects of various flight constraints on cost performance can be examined. The second, called TRAGEN, is used to simulate an aircraft flying along an optimum or any other vertical reference profile. TRAGEN is used to verify OPTIM's output, examine the effects of uncertainty in the values of parameters (such as prevailing wind) which govern the optimum profile, or compare the cost performance of profiles generated by different techniques. A general description of these programs, the efforts to add special features to them, and sample results of their usage are presented.

  3. Architecture and settings optimization procedure of a TES frequency domain multiplexed readout firmware

    NASA Astrophysics Data System (ADS)

    Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.

    2014-11-01

    IRAP is developing the readout electronics of the SPICA-SAFARI TES bolometer arrays. Based on the frequency domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors; it demodulates the data; and it computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several μs) and with fast signals (i.e. frequency carriers of the order of 5 MHz). To optimize the power consumption, we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency and the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find out the optimal settings.

  4. Optimizing Sensor and Actuator Arrays for ASAC Noise Control

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran

    2000-01-01

    This paper summarizes the development of an approach to optimizing the locations for arrays of sensors and actuators in active noise control systems. A type of directed combinatorial search, called Tabu Search, is used to select an optimal configuration from a much larger set of candidate locations. The benefit of using an optimized set is demonstrated. The importance of limiting actuator forces to realistic levels when evaluating the cost function is discussed. Results of flight testing an optimized system are presented. Although the technique has been applied primarily to Active Structural Acoustic Control systems, it can be adapted for use in other active noise control implementations.
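    A minimal tabu search of the kind described (pick k locations out of N candidates to minimize a cost) might look like the sketch below; the cost here is a random symmetric surrogate matrix, not the transfer-function-based cost used in the flight experiment.

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 40, 4                                   # candidate locations, picks
Q = rng.random((N, N)); Q = (Q + Q.T) / 2      # toy interaction cost matrix

def cost(sel):
    idx = np.array(sorted(sel))
    return float(Q[np.ix_(idx, idx)].sum())

current = set(rng.choice(N, size=k, replace=False).tolist())
best, best_cost = set(current), cost(current)
tabu = {}                                      # move -> iteration it expires

for it in range(200):
    candidates = []
    for out in current:                        # swap one selected location
        for inn in set(range(N)) - current:
            if tabu.get((out, inn), -1) > it:  # skip tabu-listed moves
                continue
            trial = (current - {out}) | {inn}
            candidates.append((cost(trial), out, inn, trial))
    c, out, inn, trial = min(candidates, key=lambda cand: cand[0])
    current = trial
    tabu[(inn, out)] = it + 10                 # forbid reversing the swap
    if c < best_cost:
        best, best_cost = set(trial), c

print("best locations:", sorted(best), "cost:", round(best_cost, 3))
```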

  5. Double synchronized switch harvesting (DSSH): a new energy harvesting scheme for efficient energy extraction.

    PubMed

    Lallart, Mickaël; Garbuio, Lauric; Petit, Lionel; Richard, Claude; Guyomar, Daniel

    2008-10-01

    This paper presents a new technique for optimized energy harvesting using piezoelectric microgenerators called double synchronized switch harvesting (DSSH). This technique consists of a nonlinear treatment of the output voltage of the piezoelectric element. It also integrates an intermediate switching stage that ensures an optimal harvested power whatever the load connected to the microgenerator. Theoretical developments are presented considering either constant vibration magnitude, constant driving force, or independent extraction. Then experimental measurements are carried out to validate the theoretical predictions. This technique exhibits a constant output power for a wide range of load connected to the microgenerator. In addition, the extracted power obtained using such a technique allows a gain up to 500% in terms of maximal power output compared with the standard energy harvesting method. It is also shown that such a technique allows a fine-tuning of the trade-off between vibration damping and energy harvesting.

  6. Comparative Analysis of Sequential Proximal Optimizing Technique Versus Kissing Balloon Inflation Technique in Provisional Bifurcation Stenting: Fractal Coronary Bifurcation Bench Test.

    PubMed

    Finet, Gérard; Derimay, François; Motreff, Pascal; Guerin, Patrice; Pilet, Paul; Ohayon, Jacques; Darremont, Olivier; Rioufol, Gilles

    2015-08-24

    This study used a fractal bifurcation bench model to compare 6 optimization sequences for coronary bifurcation provisional stenting, including 1 novel sequence without kissing balloon inflation (KBI), comprising initial proximal optimizing technique (POT) + side-branch inflation (SBI) + final POT, called "re-POT." In provisional bifurcation stenting, KBI fails to improve the rate of major adverse cardiac events. Proximal geometric deformation increases the rate of in-stent restenosis and target lesion revascularization. A bifurcation bench model was used to compare KBI alone, KBI after POT, KBI with asymmetric inflation pressure after POT, and 2 sequences without KBI: initial POT plus SBI, and initial POT plus SBI with final POT (called "re-POT"). For each protocol, 5 stents were tested using 2 different drug-eluting stent designs: that is, a total of 60 tests. Compared with the classic KBI-only sequence and those associating POT with modified KBI, the re-POT sequence gave significantly (p < 0.05) better geometric results: it reduced SB ostium stent-strut obstruction from 23.2 ± 6.0% to 5.6 ± 8.3%, provided perfect proximal stent apposition with almost perfect circularity (ellipticity index reduced from 1.23 ± 0.02 to 1.04 ± 0.01), reduced proximal area overstretch from 24.2 ± 7.6% to 8.0 ± 0.4%, and reduced global strut malapposition from 40 ± 6.2% to 2.6 ± 1.4%. In comparison with 5 other techniques, the re-POT sequence significantly optimized the final result of provisional coronary bifurcation stenting, maintaining circular geometry while significantly reducing SB ostium strut obstruction and global strut malapposition. These experimental findings confirm that provisional stenting may be optimized more effectively without KBI using re-POT. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  7. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    PubMed

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the process of chemotaxis the objective was to efficiently compute the time-varying optimal concentration of chemoattractant in one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
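    Control vector parameterization, mentioned above, turns the infinite-dimensional control into a finite set of parameters that an ordinary optimizer can handle. The toy sketch below applies piecewise-constant controls to a single scalar ODE and optimizes them with a SciPy global optimizer; it only shows the structure of the approach and has nothing to do with the chemotaxis or FitzHugh-Nagumo cases themselves.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

T, n_pieces = 4.0, 8                       # horizon and control segments
edges = np.linspace(0.0, T, n_pieces + 1)

def simulate(u_params):
    """Integrate dx/dt = -x + u(t) with a piecewise-constant control u."""
    def rhs(t, x):
        seg = min(int(np.searchsorted(edges, t, side="right")) - 1, n_pieces - 1)
        return [-x[0] + u_params[seg]]
    return solve_ivp(rhs, (0.0, T), [0.0], dense_output=True, max_step=0.05)

def objective(u_params):
    """Track the reference x_ref = 1 while penalizing control effort."""
    sol = simulate(u_params)
    t = np.linspace(0.0, T, 200)
    x = sol.sol(t)[0]
    return float(np.trapz((x - 1.0) ** 2, t) + 0.01 * np.sum(u_params ** 2))

result = differential_evolution(objective, bounds=[(0.0, 2.0)] * n_pieces,
                                seed=1, popsize=8, maxiter=15)
print("optimal piecewise controls:", np.round(result.x, 2))
```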

  8. Parameter estimation using meta-heuristics in systems biology: a comprehensive review.

    PubMed

    Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie

    2012-01-01

    This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.

  9. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    PubMed

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem is an NP-hard combinatorial optimization and scheduling problem for assigning a set of nurses to shifts per day by considering both hard and soft constraints. A novel metaheuristic technique is required for solving the Nurse Rostering Problem (NRP). This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective optimization of scheduling problems. This MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, and it reflects many real-world cases which vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on assessment criteria.

  10. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

    PubMed Central

    Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.

    2017-01-01

    The Nurse Rostering Problem is an NP-hard combinatorial optimization and scheduling problem for assigning a set of nurses to shifts per day by considering both hard and soft constraints. A novel metaheuristic technique is required for solving the Nurse Rostering Problem (NRP). This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective optimization of scheduling problems. This MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, and it reflects many real-world cases which vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on assessment criteria. PMID:28473849

  11. Efficiency in Second Language Vocabulary Learning

    ERIC Educational Resources Information Center

    Schuetze, Ulf

    2017-01-01

    An ongoing question in second language vocabulary learning is how to optimize the acquisition of words. One approach is the so-called "spaced repetition technique" that uses intervals to repeat words in a given time frame (Balota et al., 2007; Leitner, 1972; Oxford, 1990; Pimsleur, 1967; Roediger & Karpicke, 2010; Schuetze &…
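    For concreteness, here is a tiny sketch of the Leitner-style spaced repetition idea referenced above: a word moves to a higher box (longer review interval) when recalled and back to the first box when missed. The interval lengths and the five-box setup are illustrative assumptions, not values from the cited studies.

```python
from datetime import date, timedelta

INTERVALS = {1: 1, 2: 3, 3: 7, 4: 21, 5: 60}    # days per Leitner box (assumed)

class Card:
    def __init__(self, word, translation):
        self.word, self.translation = word, translation
        self.box, self.due = 1, date.today()

    def review(self, recalled, today=None):
        today = today or date.today()
        # Promote on success, demote to box 1 on failure, then reschedule.
        self.box = min(self.box + 1, 5) if recalled else 1
        self.due = today + timedelta(days=INTERVALS[self.box])

deck = [Card("der Hund", "dog"), Card("die Katze", "cat")]
for card in deck:
    if card.due <= date.today():                # word is due for repetition
        card.review(recalled=True)              # pretend the learner got it right
        print(card.word, "-> box", card.box, ", next review on", card.due)
```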

  12. Path Planning For A Class Of Cutting Operations

    NASA Astrophysics Data System (ADS)

    Tavora, Jose

    1989-03-01

    Optimizing processing time in some contour-cutting operations requires solving the so-called no-load path problem. This problem is formulated and an approximate resolution method (based on heuristic search techniques) is described. Results for real-life instances (clothing layouts in the apparel industry) are presented and evaluated.

  13. RJMCMC based Text Placement to Optimize Label Placement and Quantity

    NASA Astrophysics Data System (ADS)

    Touya, Guillaume; Chassin, Thibaud

    2018-05-01

    Label placement is a tedious task in map design, and its automation has long been a goal for researchers in cartography, but also in computational geometry. Methods that search for an optimal or nearly optimal solution that satisfies a set of constraints, such as label overlapping, have been proposed in the literature. Most of these methods mainly focus on finding the optimal position for a given set of labels, but rarely allow the removal of labels as part of the optimization. This paper proposes to apply an optimization technique called Reversible-Jump Markov Chain Monte Carlo that enables to easily model the removal or addition during the optimization iterations. The method, quite preliminary for now, is tested on a real dataset, and the first results are encouraging.
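    The distinctive point above is that labels can be added or removed during optimization, not only moved. The sketch below is a simplified Metropolis-style search over a discrete set of candidate label slots with birth, death, and move proposals and an assumed overlap-penalty energy; it omits the full reversible-jump acceptance machinery of RJMCMC and is only meant to show the variable-dimension flavor of the method.

```python
import math, random

random.seed(0)
candidates = [(random.random(), random.random()) for _ in range(40)]  # label slots
OVERLAP_DIST, LABEL_REWARD, OVERLAP_PENALTY = 0.08, 1.0, 5.0

def energy(placed):
    """Lower is better: reward each placed label, penalize close pairs."""
    pts = [candidates[i] for i in placed]
    e = -LABEL_REWARD * len(pts)
    for a in range(len(pts)):
        for b in range(a + 1, len(pts)):
            if math.dist(pts[a], pts[b]) < OVERLAP_DIST:
                e += OVERLAP_PENALTY
    return e

state, e_state, T = set(), 0.0, 1.0
for it in range(6000):
    proposal = set(state)
    move = random.choice(["birth", "death", "move"])
    if move == "birth" or not proposal:
        proposal.add(random.randrange(len(candidates)))
    elif move == "death":
        proposal.discard(random.choice(list(proposal)))
    else:                                      # move a label to another slot
        proposal.discard(random.choice(list(proposal)))
        proposal.add(random.randrange(len(candidates)))
    e_prop = energy(proposal)
    if e_prop < e_state or random.random() < math.exp(-(e_prop - e_state) / T):
        state, e_state = proposal, e_prop
    T = max(0.05, T * 0.999)                   # slow annealing of temperature

print("labels kept:", len(state), "energy:", round(e_state, 2))
```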

  14. A technique to remove the tensile instability in weakly compressible SPH

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyang; Yu, Peng

    2018-01-01

    When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing the free surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme. This includes the implementation of a kernel gradient correction and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, simulations of the impacting drop, the injection molding of a C-shaped cavity, and the extrudate swell are conducted. The numerical results obtained are compared with those simulated by other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress method) to remove the tensile instability is further performed. All numerical results agree well with the available data.

  15. Efficient massively parallel simulation of dynamic channel assignment schemes for wireless cellular communications

    NASA Technical Reports Server (NTRS)

    Greenberg, Albert G.; Lubachevsky, Boris D.; Nicol, David M.; Wright, Paul E.

    1994-01-01

    Fast, efficient parallel algorithms are presented for discrete event simulations of dynamic channel assignment schemes for wireless cellular communication networks. The driving events are call arrivals and departures, in continuous time, to cells geographically distributed across the service area. A dynamic channel assignment scheme decides which call arrivals to accept, and which channels to allocate to the accepted calls, attempting to minimize call blocking while ensuring co-channel interference is tolerably low. Specifically, the scheme ensures that the same channel is used concurrently at different cells only if the pairwise distances between those cells are sufficiently large. Much of the complexity of the system comes from ensuring this separation. The network is modeled as a system of interacting continuous time automata, each corresponding to a cell. To simulate the model, conservative methods are used; i.e., methods in which no errors occur in the course of the simulation and so no rollback or relaxation is needed. Implemented on a 16K processor MasPar MP-1, an elegant and simple technique provides speedups of about 15 times over an optimized serial simulation running on a high speed workstation. A drawback of this technique, typical of conservative methods, is that processor utilization is rather low. To overcome this, new methods were developed that exploit slackness in event dependencies over short intervals of time, thereby raising the utilization to above 50 percent and the speedup over the optimized serial code to about 120 times.

  16. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

  17. Approximation algorithms for a genetic diagnostics problem.

    PubMed

    Kosaraju, S R; Schäffer, A A; Biesecker, L G

    1998-01-01

    We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
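    Since the heuristics above build on the classical greedy strategy for SET COVER, a generic weighted greedy cover is sketched below for reference (the WDC objective itself and the directional-greedy variant are not reproduced): at each step, pick the set that covers the most still-uncovered elements per unit cost.

```python
def greedy_weighted_cover(universe, sets, costs):
    """Choose sets until `universe` is covered, taking at each step the set
    with the best (newly covered elements) / cost ratio."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(
            (name for name in sets if sets[name] & uncovered),
            key=lambda name: len(sets[name] & uncovered) / costs[name],
            default=None,
        )
        if best is None:              # remaining elements cannot be covered
            break
        chosen.append(best)
        uncovered -= sets[best]
    return chosen, uncovered

universe = set(range(1, 11))
sets = {"A": {1, 2, 3, 8}, "B": {1, 2, 3, 4, 5}, "C": {4, 5, 7},
        "D": {5, 6, 7}, "E": {6, 7, 8, 9, 10}}
costs = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0, "E": 1.0}
print(greedy_weighted_cover(universe, sets, costs))
```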

  18. Neural dynamic programming and its application to control systems

    NASA Astrophysics Data System (ADS)

    Seong, Chang-Yun

    There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.

  19. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm

    PubMed Central

    Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396

  20. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm.

    PubMed

    Islam, Md Mainul; Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method.

  1. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
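    The Lagrangian formulation referred to above trades distortion against rate. A generic entropy-constrained encoding rule (shown for a single plain VQ codebook rather than a residual, multistage one, with made-up codewords and probabilities) picks the index minimizing distortion plus λ times the code length, as sketched below.

```python
import numpy as np

def ec_encode(x, codebook, lengths, lam):
    """Entropy-constrained nearest neighbor: minimize d(x, c_i) + lam * len_i,
    where len_i ≈ -log2 p_i is the entropy-code length of index i."""
    dist = np.sum((codebook - x) ** 2, axis=1)
    return int(np.argmin(dist + lam * lengths))

rng = np.random.default_rng(0)
codebook = rng.standard_normal((8, 2))             # toy 2-D codebook
probs = np.array([0.3, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05])
lengths = -np.log2(probs)                          # ideal codeword lengths

x = np.array([0.4, -0.2])
print("lambda = 0.0 ->", ec_encode(x, codebook, lengths, 0.0))  # pure distortion
print("lambda = 2.0 ->", ec_encode(x, codebook, lengths, 2.0))  # favors short codes
```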

  2. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  3. Development of Optimized Combustors and Thermoelectric Generators for Palm Power Generation

    DTIC Science & Technology

    2004-10-26

    manufacturing techniques and microfabrication, on the chemical kinetics of JP-8 surrogates and on the development of advanced laser diagnostics for JP-8...takes the shape of a cone from the tip of which a thin liquid thread emerges, in the so-called cone-jet mode [1]. This microjet breaks into a stream of...combustion systems. 2. The development of a diagnostic technique based on two-color laser induced fluorescence from fluorescence tags added to the fuel

  4. Performance of Optimized Actuator and Sensor Arrays in an Active Noise Control System

    NASA Technical Reports Server (NTRS)

    Palumbo, D. L.; Padula, S. L.; Lyle, K. H.; Cline, J. H.; Cabell, R. H.

    1996-01-01

    Experiments have been conducted in NASA Langley's Acoustics and Dynamics Laboratory to determine the effectiveness of optimized actuator/sensor architectures and controller algorithms for active control of harmonic interior noise. Tests were conducted in a large scale fuselage model - a composite cylinder which simulates a commuter class aircraft fuselage with three sections of trim panel and a floor. Using an optimization technique based on the component transfer functions, combinations of 4 out of 8 piezoceramic actuators and 8 out of 462 microphone locations were evaluated against predicted performance. A combinatorial optimization technique called tabu search was employed to select the optimum transducer arrays. Three test frequencies represent the cases of a strong acoustic and strong structural response, a weak acoustic and strong structural response and a strong acoustic and weak structural response. Noise reduction was obtained using a Time Averaged/Gradient Descent (TAGD) controller. Results indicate that the optimization technique successfully predicted best and worst case performance. An enhancement of the TAGD control algorithm was also evaluated. The principal components of the actuator/sensor transfer functions were used in the PC-TAGD controller. The principal components are shown to be independent of each other while providing control as effective as the standard TAGD.

  5. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    PubMed Central

    Tang, Haijing; Wang, Siye; Zhang, Yanjun

    2013-01-01

    Clustering has become a common trend in very long instruction word (VLIW) architectures to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of a global register file to accomplish inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be well addressed; otherwise the performance and energy consumption would be harmed. In this paper, we present compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for the Lily architecture, through appropriate manipulation of the code generation process to maintain better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The result shows that our techniques can significantly reduce the penalty of performance and energy consumption due to the access port limitation of the global register file. PMID:23970841

  6. Power system modeling and optimization methods vis-a-vis integrated resource planning (IRP)

    NASA Astrophysics Data System (ADS)

    Arsali, Mohammad H.

    1998-12-01

    The state-of-the-art restructuring of power industries is changing the fundamental nature of retail electricity business. As a result, the so-called Integrated Resource Planning (IRP) strategies implemented on electric utilities are also undergoing modifications. Such modifications evolve from the imminent considerations to minimize the revenue requirements and maximize electrical system reliability vis-a-vis capacity-additions (viewed as potential investments). IRP modifications also provide service-design bases to meet the customer needs towards profitability. The purpose of this research as deliberated in this dissertation is to propose procedures for optimal IRP intended to expand generation facilities of a power system over a stretched period of time. Relevant topics addressed in this research towards IRP optimization are as follows: (1) Historical prospective and evolutionary aspects of power system production-costing models and optimization techniques; (2) A survey of major U.S. electric utilities adopting IRP under changing socioeconomic environment; (3) A new technique designated as the Segmentation Method for production-costing via IRP optimization; (4) Construction of a fuzzy relational database of a typical electric power utility system for IRP purposes; (5) A genetic algorithm based approach for IRP optimization using the fuzzy relational database.

  7. Modeling, simulation, and estimation of optical turbulence

    NASA Astrophysics Data System (ADS)

    Formwalt, Byron Paul

    This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated C_n^2 ≈ 6.01 · 10^-9 m^(-2/3), l_0 ≈ 17.9 mm, and L_0 ≈ 15.5 m.

  8. Determination of the optimal number of components in independent components analysis.

    PubMed

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals, is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
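
    The random-split idea behind Random_ICA can be sketched in a few lines: fit ICA separately on two randomly chosen halves of the samples and check how many components remain reproducible between the halves as the model order grows. The snippet below is only an illustration of that idea, not the authors' implementation; it assumes scikit-learn's FastICA as the ICA algorithm and a made-up data matrix X.

    ```python
    # Sketch of a random-split validation for choosing the number of ICs,
    # in the spirit of Random_ICA / ICA_by_blocks (not the authors' code).
    # Assumes scikit-learn's FastICA as the ICA implementation.
    import numpy as np
    from sklearn.decomposition import FastICA

    def split_correlation(X, k, rng):
        """Fit ICA with k components on two random halves of the samples and
        return, for each component of block 1, its best |correlation| with a
        component of block 2 (computed on the unmixing vectors)."""
        idx = rng.permutation(X.shape[0])
        half = X.shape[0] // 2
        blocks = [X[idx[:half]], X[idx[half:2 * half]]]
        comps = [FastICA(n_components=k, random_state=0).fit(B).components_
                 for B in blocks]
        corr = np.abs(np.corrcoef(comps[0], comps[1])[:k, k:])
        return corr.max(axis=1)

    rng = np.random.default_rng(0)
    S = rng.laplace(size=(200, 3))                         # 3 non-Gaussian sources
    X = S @ rng.normal(size=(3, 50)) + 0.05 * rng.normal(size=(200, 50))
    for k in range(1, 7):
        scores = np.mean([split_correlation(X, k, rng) for _ in range(5)], axis=0)
        print(k, np.round(scores, 2))    # stable ICs stay highly correlated
    ```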

  9. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.

  10. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
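
    The decomposition principle named in the abstract, the alternating direction method of multipliers (ADMM), can be illustrated with a generic consensus problem in which several agents holding local least-squares terms agree on a shared vector. The sketch below is not the paper's optimal-power-flow formulation; the matrices, the number of agents, and the penalty parameter rho are all illustrative assumptions.

    ```python
    # Generic consensus-ADMM sketch (not the paper's OPF formulation): several
    # "agents" each hold a local least-squares term and agree on a shared z.
    import numpy as np

    rng = np.random.default_rng(1)
    n, agents, rho = 5, 4, 1.0
    A = [rng.normal(size=(8, n)) for _ in range(agents)]
    b = [rng.normal(size=8) for _ in range(agents)]

    x = [np.zeros(n) for _ in range(agents)]   # local copies
    u = [np.zeros(n) for _ in range(agents)]   # scaled dual variables
    z = np.zeros(n)                            # consensus variable

    for it in range(100):
        # Local updates: argmin_x 0.5||A_i x - b_i||^2 + (rho/2)||x - z + u_i||^2
        for i in range(agents):
            x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                                   A[i].T @ b[i] + rho * (z - u[i]))
        # Central (coordinator-side) update: average of local variables.
        z = np.mean([x[i] + u[i] for i in range(agents)], axis=0)
        # Dual updates.
        for i in range(agents):
            u[i] += x[i] - z

    # Compare with the centralized solution of the stacked least-squares problem.
    x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
    print(np.round(z, 3), np.round(x_star, 3))
    ```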

  11. A pilot modeling technique for handling-qualities research

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  12. Optimal ancilla-free Pauli+V circuits for axial rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blass, Andreas; Bocharov, Alex; Gurevich, Yuri

    We address the problem of optimal representation of single-qubit rotations in a certain unitary basis consisting of the so-called V gates and Pauli matrices. The V matrices were proposed by Lubotzky, Phillips, and Sarnak [Commun. Pure Appl. Math. 40, 401-420 (1987)] as a purely geometric construct in 1987 and recently found applications in quantum computation. They allow for exceptionally simple quantum circuit synthesis algorithms based on quaternionic factorization. We adapt the deterministic-search technique initially proposed by Ross and Selinger to synthesize approximating Pauli+V circuits of optimal depth for single-qubit axial rotations. Our synthesis procedure based on simple SL_2(ℤ) geometry is almost elementary.

  13. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for the optimization of the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated and solid panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization problems are solved to check the efficiency of the developed method.

  14. Probabilistic Physics-Based Risk Tools Used to Analyze the International Space Station Electrical Power System Output

    NASA Technical Reports Server (NTRS)

    Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2004-01-01

    This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components or sensor tolerances. Uncertainties in these variables cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g. whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.
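
    The basic propagation of input uncertainties to a response variable can be sketched with a plain Monte Carlo loop. The toy model and the distributions below (solar flux, array efficiency, pointing error) are assumptions for illustration only and are not the ISS electrical power system model used in the paper.

    ```python
    # Minimal Monte Carlo propagation of input uncertainties through a toy power
    # model (illustrative only; not the ISS electrical power system model).
    import numpy as np

    rng = np.random.default_rng(10)
    n = 100_000
    # Hypothetical random variables: solar flux, array efficiency, pointing error.
    flux = rng.normal(1361.0, 15.0, n)            # W/m^2
    eff = rng.normal(0.14, 0.005, n)              # array efficiency
    offpoint = np.abs(rng.normal(0.0, 5.0, n))    # degrees off sun-pointing
    area = 100.0                                  # m^2, fixed

    power = flux * eff * area * np.cos(np.radians(offpoint))   # response variable
    print("mean power [kW]:", round(power.mean() / 1e3, 2),
          " 5th-95th pct [kW]:",
          np.round(np.percentile(power, [5, 95]) / 1e3, 2))
    ```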

  15. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L(sub 1). By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L(sub 2) norm of the control sources, and we consider both the unconstrained and constrained minimization. The unconstrained L(sub 2) minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L(sub 2) differ drastically from those obtained in the sense of L(sub 1).
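
    The unconstrained L2 formulation has a simple linear-algebra core: among all control-source strengths that cancel the unwanted field at a set of observation points, pick the one of minimum 2-norm. The toy sketch below illustrates that step with a random transfer matrix and the Moore-Penrose pseudoinverse; it is not the paper's finite-difference acoustics formulation.

    ```python
    # Toy minimum-L2-norm "anti-sound" computation (not the paper's formulation):
    # choose complex control-source amplitudes q of smallest 2-norm that cancel a
    # given noise field p at a set of observation points, i.e. G q = -p.
    import numpy as np

    rng = np.random.default_rng(2)
    m_obs, n_src = 6, 10                      # fewer conditions than sources
    G = rng.normal(size=(m_obs, n_src)) + 1j * rng.normal(size=(m_obs, n_src))
    p = rng.normal(size=m_obs) + 1j * rng.normal(size=m_obs)   # unwanted field

    q = -np.linalg.pinv(G) @ p                # minimum-norm solution of G q = -p
    print("residual field :", np.linalg.norm(G @ q + p))   # ~0 (exact cancellation)
    print("control effort :", np.linalg.norm(q))           # minimized in L2 sense
    ```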

  16. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.

  17. A “Reverse-Schur” Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule’s electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts–in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method. PMID:23055839

  18. AI techniques for a space application scheduling problem

    NASA Technical Reports Server (NTRS)

    Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.

    1991-01-01

    Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors which make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on-board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (EOS).
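
    A minimal version of tabu search, the technique named in the abstract, can be written for a generic permutation objective: evaluate swap moves, forbid recently used moves for a fixed tenure, and allow a tabu move only when it beats the best solution found so far (aspiration). The sketch below uses a made-up job-ordering cost and is not the SOLSTICE scheduler.

    ```python
    # Minimal tabu search over permutations (illustrative only, not the SOLSTICE
    # scheduler): swap-neighbourhood moves with a short-term tabu list.
    import random

    def tabu_search(cost, n, iters=500, tenure=15, seed=0):
        rng = random.Random(seed)
        current = list(range(n))
        rng.shuffle(current)
        best, best_cost = current[:], cost(current)
        tabu = {}                                   # move -> iteration it expires
        for it in range(iters):
            candidates = []
            for _ in range(50):                     # sample swap moves
                i, j = sorted(rng.sample(range(n), 2))
                neighbour = current[:]
                neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
                c = cost(neighbour)
                # Allow a tabu move only if it improves on the best (aspiration).
                if tabu.get((i, j), -1) < it or c < best_cost:
                    candidates.append((c, (i, j), neighbour))
            if not candidates:
                continue
            c, move, current = min(candidates, key=lambda t: t[0])
            tabu[move] = it + tenure                # forbid reversing this swap
            if c < best_cost:
                best, best_cost = current[:], c
        return best, best_cost

    # Example objective: weighted completion-time-like cost for a toy job ordering.
    durations = [3, 2, 7, 4, 1, 6, 5, 2]
    cost = lambda perm: sum((k + 1) * durations[j] for k, j in enumerate(perm))
    print(tabu_search(cost, len(durations))[1])
    ```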

  19. A novel approach for dimension reduction of microarray.

    PubMed

    Aziz, Rabia; Verma, C K; Srivastava, Namita

    2017-12-01

    This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, called ICA+ABC, to select informative genes based on a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of an extraction approach, to reduce the size of the data, and a wrapper approach, to optimize the reduced feature vectors. The hybrid search technique is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the results obtained from the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs for feature selection with the NB classifier, the combination of ICA with popular filter techniques and with other similar bio-inspired algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), was also compared. The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared to other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Fuel management optimization using genetic algorithms and code independence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1994-12-31

    Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
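
    The three operators described above can be shown concretely on a toy binary-string problem standing in for a loading pattern; the sketch below is illustrative only and is not the authors' fuel-management code. The target string, population size, and mutation rate are arbitrary assumptions.

    ```python
    # Compact genetic algorithm illustrating selection, crossover and mutation on
    # a toy binary-string problem (a stand-in, not the fuel-management code).
    import random

    rng = random.Random(0)
    L, POP, GENS = 30, 40, 60
    target = [rng.randint(0, 1) for _ in range(L)]          # hypothetical optimum
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))

    pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(POP)]
    for gen in range(GENS):
        # Selection: binary tournament ("survival of the fittest").
        def select():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < POP:
            p1, p2 = select(), select()
            cut = rng.randrange(1, L)                       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < 0.01 else g for g in child]  # mutation
            nxt.append(child)
        pop = nxt

    print(max(map(fitness, pop)), "out of", L)
    ```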

  1. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186

  2. Computational techniques for design optimization of thermal protection systems for the space shuttle vehicle. Volume 1: Final report

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.

  3. A Standard Platform for Testing and Comparison of MDAO Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.

    2012-01-01

    The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.

  4. Interactive visual optimization and analysis for RFID benchmarking.

    PubMed

    Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C

    2009-01-01

    Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.

  5. Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.

    PubMed

    Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu

    2015-10-01

    The particle swarm optimization (PSO) algorithm is a population-based stochastic optimization technique. It is characterized by a collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, all of the particles' historical promising pbests in PSO are lost except their current pbests. In order to solve this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historical promising pbests. Each particle has three candidate positions, which are generated from the historical memory, particles' current pbests, and the swarm's gbest. Then the best candidate position is adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms.
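
    A simplified version of the idea can be sketched by adding, to the standard velocity update, an extra attraction toward a pbest drawn from an archive of past pbests. Note that this replaces the paper's estimation-of-distribution step with plain sampling from the archive, which is an assumption of this sketch, not the HMPSO algorithm itself.

    ```python
    # Simplified PSO with a pbest "historical memory" archive. This only gestures
    # at HMPSO: the estimation-of-distribution step is replaced here by sampling
    # a stored historical pbest, which is an assumption of this sketch.
    import numpy as np

    def sphere(x):                      # toy objective
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(3)
    dim, n_particles, iters = 10, 20, 200
    w, c1, c2, c3 = 0.7, 1.4, 1.4, 0.7

    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([sphere(p) for p in pos])
    memory = [p.copy() for p in pos]    # archive of historical promising pbests
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        for i in range(n_particles):
            r1, r2, r3 = rng.random(3)
            hist = memory[rng.integers(len(memory))]
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i])
                      + c3 * r3 * (hist - pos[i]))      # pull from the archive
            pos[i] += vel[i]
            val = sphere(pos[i])
            if val < pbest_val[i]:
                memory.append(pbest[i].copy())          # keep the old pbest
                pbest[i], pbest_val[i] = pos[i].copy(), val
                if val < sphere(gbest):
                    gbest = pos[i].copy()

    print("best value found:", sphere(gbest))
    ```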

  6. Mitigating Handoff Call Dropping in Wireless Cellular Networks: A Call Admission Control Technique

    NASA Astrophysics Data System (ADS)

    Ekpenyong, Moses Effiong; Udoh, Victoria Idia; Bassey, Udoma James

    2016-06-01

    Handoff management has been an important but challenging issue in the field of wireless communication. It seeks to maintain seamless connectivity of mobile users changing their points of attachment from one base station to another. This paper derives a call admission control model and establishes an optimal step-size coefficient (k) that regulates the admission probability of handoff calls. An operational CDMA network carrier was investigated through the analysis of empirical data collected over a period of 1 month, to verify the performance of the network. Our findings revealed that approximately 23 % of calls in the existing system were lost, while 40 % of the calls (on the average) were successfully admitted. A simulation of the proposed model was then carried out under ideal network conditions to study the relationship between the various network parameters and validate our claim. Simulation results showed that increasing the step-size coefficient degrades the network performance. Even at optimum step-size (k), the network could still be compromised in the presence of severe network crises, but our model was able to recover from these problems and still functions normally.

  7. Optimization Technique With Sensitivity Analysis On Menu Scheduling For Boarding School Student Aged 13-18 Using “Sufahani-Ismail Algorithm”

    NASA Astrophysics Data System (ADS)

    Sudin, Azila M.; Sufahani, Suliadi

    2018-04-01

    Boarding school students aged 13-18 need to eat nutritious meals containing appropriate calories, energy and nutrients for proper development and for the repair and maintenance of body tissues; good nutrition also helps prevent disease and infection. Serving healthier food is a major step towards accomplishing that goal. However, planning a nutritious and balanced menu manually is complicated, inefficient and tedious. Therefore, the aim of this study is to develop a mathematical model with an optimization technique for menu scheduling that fulfills the full nutrient requirements for boarding school students, reduces processing time, minimizes the budget and also serves a variety of food each day. It additionally gives the cook the flexibility to choose which foods to consider at the beginning of the process and to change any preferred menu even after the optimal solution has been obtained. This is called sensitivity analysis. A recalculation procedure is performed in light of the optimal solution, and a seven-day menu was produced. The data were gathered from the Malaysian Ministry of Education and school authorities. Menu planning is a known optimization problem; therefore Binary Programming, together with an optimization technique and the "Sufahani-Ismail Algorithm", was used to solve it. In the future, this model can be applied to other menu problems, for example for sports, chronic disease patients, the military, colleges, hospitals and nursing homes.
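
    The binary-programming core of such a menu model can be sketched as selecting dishes so that nutrient totals meet lower bounds at minimum cost. The food data below are invented and the model is far smaller than the Sufahani-Ismail formulation; the sketch assumes SciPy 1.9+ for scipy.optimize.milp.

    ```python
    # Toy binary program for one day's menu (illustrative; the food data are made
    # up and this is not the Sufahani-Ismail model). Requires SciPy >= 1.9.
    import numpy as np
    from scipy.optimize import LinearConstraint, Bounds, milp

    foods = ["rice", "chicken", "fish", "vegetables", "fruit", "milk"]
    cost = np.array([1.0, 4.0, 5.0, 2.0, 1.5, 1.2])       # currency units
    energy = np.array([300, 250, 200, 80, 90, 150])       # kcal per serving
    protein = np.array([6, 30, 28, 4, 1, 8])              # g per serving

    # Choose a subset of servings (x_i in {0,1}) meeting minimum nutrient levels.
    constraints = [
        LinearConstraint(energy, lb=700, ub=np.inf),      # at least 700 kcal
        LinearConstraint(protein, lb=40, ub=np.inf),      # at least 40 g protein
    ]
    res = milp(c=cost, constraints=constraints,
               integrality=np.ones(len(foods)),           # binary decisions
               bounds=Bounds(0, 1))
    print([f for f, x in zip(foods, res.x) if x > 0.5], "cost:", res.fun)
    ```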

  8. Efficient Kriging via Fast Matrix-Vector Products

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.

    2008-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on fast multipole methods and nearest-neighbor searching techniques to implement the fast matrix-vector products.
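
    The structure of such an approach, solving the kriging system with an iterative method driven only by matrix-vector products, can be sketched as follows. For brevity the matvec below is still a plain dense product (a fast multipole or nearest-neighbor scheme would replace it), and simple kriging with a Gaussian covariance is used instead of ordinary kriging; all data are synthetic.

    ```python
    # Sketch of solving a kriging system iteratively with matrix-vector products
    # (here the product is still a plain dense matvec; a fast multipole or
    # nearest-neighbour scheme would replace `matvec`). Simple kriging is used
    # for clarity instead of ordinary kriging.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(4)
    pts = rng.uniform(0, 10, (400, 2))                    # scattered data sites
    vals = np.sin(pts[:, 0]) + 0.1 * rng.normal(size=len(pts))

    def cov(a, b, scale=1.5):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * scale ** 2))

    K = cov(pts, pts) + 1e-2 * np.eye(len(pts))           # noise/nugget term
    matvec = lambda w: K @ w                              # <- replace with fast MVP
    op = LinearOperator(K.shape, matvec=matvec)

    weights, info = cg(op, vals)                          # iterative solve
    x0 = np.array([[5.0, 5.0]])                           # prediction location
    print("prediction:", (cov(x0, pts) @ weights)[0], "cg status:", info)
    ```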

  9. A Study of Adaptive Image Compression Techniques.

    DTIC Science & Technology

    1980-02-01

    [Abstract unavailable: the record text is an OCR fragment of a Fortran listing, of which only comment lines are recoverable: "... CONTAINS THE OPTIMAL ALLOCATION TO EACH BLOCK ... ROUTINES CALLED ... RESALL  RESOURCE ALLOCATION (USER ROUTINE) ... SUBROUTINE OBITAL (INDAT ...".]

  10. A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems.

    PubMed

    Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N

    2006-12-01

    Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)" is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, lesser computational load and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical-system (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.

  11. Coarse analysis of collective behaviors: Bifurcation analysis of the optimal velocity model for traffic jam formation

    NASA Astrophysics Data System (ADS)

    Miura, Yasunari; Sugiyama, Yuki

    2017-12-01

    We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, which are one of the dimensionality-reduction techniques, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of the traffic model, called the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
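
    The model named in the abstract has a compact standard form (Bando et al.): each car relaxes toward an optimal velocity that depends on its headway. The sketch below simulates that form on a ring road with parameters in the unstable regime so that a jam emerges; the paper's diffusion-map coarse analysis is not reproduced.

    ```python
    # Minimal simulation of the optimal velocity (OV) model on a ring road (the
    # standard Bando et al. form); the paper's diffusion-map coarse analysis is
    # not reproduced here.
    import numpy as np

    N, L = 60, 120.0                 # cars, circumference
    a, dt, steps = 1.0, 0.05, 4000   # sensitivity, time step, iterations
    V = lambda h: np.tanh(h - 2.0) + np.tanh(2.0)   # optimal velocity function

    x = np.arange(N) * (L / N) + 0.1 * np.random.default_rng(5).normal(size=N)
    v = V(L / N) * np.ones(N)        # start near the uniform-flow solution

    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % L          # distance to the car ahead
        v += a * (V(headway) - v) * dt
        x = (x + v * dt) % L

    print("velocity spread:", float(v.max() - v.min()))  # large spread => jam
    ```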

  12. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.

  13. Optimizing Requirements Decisions with KEYS

    NASA Technical Reports Server (NTRS)

    Jalali, Omid; Menzies, Tim; Feather, Martin

    2008-01-01

    Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information, but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well defined goal: select mitigations that retire risks which, in turn, increases the number of attainable requirements. Such a non-linear optimization is a well-studied problem. However identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate. KEYS runs much faster than that; e.g for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. Non-linear optimization is a well studied problem. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.

  14. Computer-Aided Breast Cancer Diagnosis with Optimal Feature Sets: Reduction Rules and Optimization Techniques.

    PubMed

    Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo

    2017-01-01

    This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, is employed; second, a metaheuristic search identifies the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.

  15. Experiences at Langley Research Center in the application of optimization techniques to helicopter airframes for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta; Kvaternik, Raymond G.

    1991-01-01

    A NASA/industry rotorcraft structural dynamics program known as Design Analysis Methods for VIBrationS (DAMVIBS) was initiated at Langley Research Center in 1984 with the objective of establishing the technology base needed by the industry for developing an advanced finite-element-based vibrations design analysis capability for airframe structures. As a part of the in-house activities contributing to that program, a study was undertaken to investigate the use of formal, nonlinear programming-based, numerical optimization techniques for airframe vibrations design work. Considerable progress has been made in connection with that study since its inception in 1985. This paper presents a unified summary of the experiences and results of that study. The formulation and solution of airframe optimization problems are discussed. Particular attention is given to describing the implementation of a new computational procedure based on MSC/NASTRAN and CONstrained function MINimization (CONMIN) in a computer program system called DYNOPT for the optimization of airframes subject to strength, frequency, dynamic response, and fatigue constraints. The results from the application of the DYNOPT program to the Bell AH-1G helicopter are presented and discussed.

  16. Laser biostimulation therapy planning supported by imaging

    NASA Astrophysics Data System (ADS)

    Mester, Adam R.

    2018-04-01

    Ultrasonography and MR imaging can help to identify the area and depth of different lesions, such as injury, overuse, inflammation, and degenerative diseases. The appropriate power density, sufficient dose and direction of the laser treatment can then be optimally estimated. If the required minimum photon density of 5 mW and the optimal energy dose of 2-4 Joule/cm2 would not reach the depth of the target volume, additional techniques can help: slight compression of the soft tissues can decrease tissue thickness, or multiple laser diodes can be used. In the case of multiple diode clusters, light scattering results in deeper penetration. Another method to increase the penetration depth is a secondary pulsation (in the kHz range) of the laser light (the so-called continuous wave laser itself has an inherent THz pulsation due to temporal coherence). A third way to obtain higher light intensity in the target volume is the multi-gate technique: the same joint can be reached from different angles based on the imaging findings. Recent developments in ultrasonography, namely elastosonography and tissue harmonic imaging with contrast material, support optimal therapy planning. While MRI is too expensive a modality for laser planning alone, its images can be used optimally if a diagnostic MRI has already been done. Standard DICOM images allow "postprocessing" measurements in the mm range.

  17. Multi Sensor Fusion Using Fitness Adaptive Differential Evolution

    NASA Astrophysics Data System (ADS)

    Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam

    The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random. The proposed approach gives better results in the case of optimal allocation of sensors. The performance of the proposed approach is compared with that of an evolutionary algorithm, the coordination generalized particle model (C-GPM).
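
    The differential-evolution backbone can be sketched with the usual DE/rand/1 mutation and binomial crossover; the crude fitness-based scaling of F below only gestures at FiADE's fitness-adaptive rules, which are not reproduced here, and the sensor-fusion objective is replaced by a toy test function.

    ```python
    # Generic differential evolution sketch with a crude fitness-based scaling of
    # F (only a gesture at FiADE's adaptive scheme, whose exact rules are not
    # reproduced); the sensor-fusion objective is replaced by a toy function.
    import numpy as np

    def objective(x):                  # toy stand-in for the fusion objective
        return float(np.sum(x ** 2) + 10 * np.sum(1 - np.cos(2 * np.pi * x)))

    rng = np.random.default_rng(9)
    dim, NP, CR, gens = 10, 30, 0.9, 300
    pop = rng.uniform(-5, 5, (NP, dim))
    fit = np.array([objective(x) for x in pop])

    for _ in range(gens):
        for i in range(NP):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3,
                                    replace=False)
            # Crude adaptation: individuals with poor fitness explore more.
            F = 0.4 + 0.5 * (fit[i] - fit.min()) / (fit.max() - fit.min() + 1e-12)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])          # DE/rand/1
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                     # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, f_trial

    print("best objective:", round(fit.min(), 4))
    ```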

  18. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    PubMed

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.
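
    The windowing-plus-coherence step at the heart of the method can be sketched with scipy.signal: slide a window over the two signals, compute the magnitude-squared coherence, and read off the sign of the cross-spectrum in a low-frequency band. This is only a hedged illustration with synthetic ABP/ICP-like signals, not the authors' full selected-correlation index or SCP computation; the sampling rate, window length, and frequency band are assumptions.

    ```python
    # Hedged sketch of the windowing + Fourier-coherence step between ABP and ICP
    # (not the authors' full selected-correlation index or SCP computation).
    import numpy as np
    from scipy.signal import coherence, csd

    fs = 1.0                                   # one sample per second (assumption)
    t = np.arange(0, 6000) / fs
    rng = np.random.default_rng(6)
    abp = 80 + 5 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 1, t.size)
    icp = 12 + 2 * np.sin(2 * np.pi * t / 60 + 0.2) + rng.normal(0, 1, t.size)

    win, step, band = 1200, 300, (1 / 120, 1 / 20)   # window/step (samples), Hz
    for start in range(0, t.size - win + 1, step):
        a, i = abp[start:start + win], icp[start:start + win]
        f, coh = coherence(a, i, fs=fs, nperseg=256)
        f_, pxy = csd(a, i, fs=fs, nperseg=256)
        sel = (f >= band[0]) & (f <= band[1])
        sign = np.sign(np.real(pxy[sel]).mean())     # +: positive, -: negative
        print(start, round(float(coh[sel].mean()), 2), int(sign))
    ```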

  19. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    PubMed Central

    Faltermeier, Rupert; Proescholdt, Martin A.; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses. PMID:26693250

  20. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.

  1. GOES-R SUVI EUV Flatfields Generated Using Boustrophedon Scans

    NASA Astrophysics Data System (ADS)

    Shing, L.; Edwards, C.; Mathur, D.; Vasudevan, G.; Shaw, M.; Nwachuku, C.

    2017-12-01

    The Solar Ultraviolet Imager (SUVI) is mounted on the Solar Pointing Platform (SPP) of the Geostationary Operational Environmental Satellite, GOES-R. SUVI is a generalized Cassegrain telescope with a large field of view that employs multilayer coatings optimized to operate in six extreme ultraviolet (EUV) narrow bandpasses centered at 9.4, 13.1, 17.1, 19.5, 28.4 and 30.4 nm. The SUVI CCD flatfield response was determined using two different techniques: the Kuhn-Lin-Lorentz (KLL) raster and a new technique called Dynamic Boustrophedon Scans. The new technique requires less time to collect the data and is also less sensitive to solar features compared with the KLL method. This paper presents the SUVI flatfield results obtained with this technique during Post Launch Testing (PLT).

  2. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms

    PubMed Central

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.

    2015-01-01

    Reducing energy consumption is becoming very important in order to extend battery life and lower overall operational costs for heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, called the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local-optimum avoidance techniques are proposed to avoid premature convergence and improve the solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times less than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406

  3. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimum for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each of which may be modularly formulated by differing departments and solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.

  4. Magnetic resonance spectroscopic imaging for improved treatment planning of prostate cancer

    NASA Astrophysics Data System (ADS)

    Venugopal, Niranjan

    Prostate cancer is the most common malignancy afflicting Canadian men in 2011. Physicians use digital rectal exams (DRE), blood tests for prostate specific antigen (PSA) and transrectal ultrasound (TRUS)-guided biopsies for the initial diagnosis of prostate cancer. None of these tests detail the spatial extent of prostate cancer - information critical for using new therapies that can target cancerous prostate tissue. With an MRI technique called proton magnetic resonance spectroscopic imaging (1H-MRSI), biochemical analysis of the entire prostate can be done without the need for biopsy, providing detailed information beyond the non-specific changes in hardness felt by an experienced urologist in a DRE, the presence of PSA in blood, or the "blind-guidance" of TRUS-guided biopsy. A hindrance to acquiring high quality 1H-MRSI data comes from signal originating from fatty tissue surrounding the prostate that tends to mask or distort signal from within the prostate, thus reducing the overall clinical usefulness of 1H-MRSI data. This thesis has three major areas of focus: 1) The development of an optimized 1H-MRSI technique, called conformal voxel magnetic resonance spectroscopy (CV-MRS), to deal with the removal of unwanted lipid contamination artifacts at short and long echo times. 2) An in vivo human study to test the CV-MRS technique, including healthy volunteers and cancer patients scheduled for radical prostatectomy or radiation therapy. 3) A study to determine the efficacy of using the 1H-MRSI data for optimized radiation treatment planning using modern delivery techniques like intensity modulated radiation treatment. Data collected from the study using the optimized CV-MRS method show significantly reduced lipid contamination resulting in high quality spectra throughout the prostate. Combining the CV-MRS technique with spectral-spatial excitation further reduced lipid contamination and opened up the possibility of detecting metabolites with short T2 relaxation times. Results from the in vivo study were verified with post-histopathological data. Lastly, 1H-MRSI data were incorporated into the radiation treatment planning software and used to assess tumour control by escalating the radiation to prostate lesions that were identified by 1H-MRSI. In summary, this thesis demonstrates the clinical feasibility of using advanced spectroscopic imaging techniques for improved diagnosis and treatment of prostate cancer.

  5. Application of dynamic programming to control khuzestan water resources system

    USGS Publications Warehouse

    Jamshidi, M.; Heidari, M.

    1977-01-01

    An approximate optimization technique based on discrete dynamic programming, called discrete differential dynamic programming (DDDP), is employed to obtain near-optimal operation policies for a water resources system in the Khuzestan Province of Iran. The technique makes use of an initial nominal state trajectory for each state variable and forms corridors around the trajectories. These corridors represent a set of subdomains of the entire feasible domain. Starting with such a set of nominal state trajectories, improvements in the objective function are sought within the corridors formed around them. This leads to a set of new nominal trajectories upon which more improvements may be sought. Since optimization is confined to a set of subdomains, considerable savings in memory and computer time are achieved over conventional dynamic programming. The Khuzestan water resources system considered in this study is located in southwest Iran and consists of two rivers, three reservoirs, three hydropower plants, and three irrigable areas. Data and cost-benefit functions for the analysis were obtained either from historical records or from similar studies. © 1977.
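
    The corridor idea can be illustrated for a single hypothetical reservoir: restrict each stage's storage grid to a few points around a nominal trajectory, run a backward dynamic program over that corridor, re-centre the corridor on the improved trajectory, and tighten it. The inflows and benefit function below are made up and the sketch is not the Khuzestan model.

    ```python
    # Small illustration of the DDDP corridor idea for a single reservoir with
    # hypothetical inflows and a made-up benefit function (not the Khuzestan data).
    import numpy as np

    T = 12                                   # stages (months)
    inflow = np.array([5, 6, 9, 12, 10, 7, 4, 3, 3, 4, 5, 6], dtype=float)
    s_min, s_max, s0 = 0.0, 30.0, 15.0
    benefit = lambda release: np.sqrt(max(release, 0.0))   # concave toy benefit

    def dp_over_corridor(nominal, delta):
        """Backward DP where each stage's storage is restricted to a corridor of
        three grid points around the nominal trajectory."""
        grids = [np.clip(nominal[t] + np.array([-delta, 0.0, delta]), s_min, s_max)
                 for t in range(T + 1)]
        grids[0] = np.array([s0])                           # fixed initial storage
        value = {T: {s: 0.0 for s in grids[T]}}
        policy = {}
        for t in range(T - 1, -1, -1):
            value[t], policy[t] = {}, {}
            for s in grids[t]:
                best = (-np.inf, None)
                for s_next in grids[t + 1]:
                    release = s + inflow[t] - s_next
                    if release < 0:
                        continue
                    val = benefit(release) + value[t + 1][s_next]
                    if val > best[0]:
                        best = (val, s_next)
                value[t][s], policy[t][s] = best
        # Forward pass: recover the improved trajectory.
        traj, s = [s0], s0
        for t in range(T):
            s = policy[t][s]
            traj.append(s)
        return np.array(traj), value[0][s0]

    nominal = np.full(T + 1, s0)             # initial nominal trajectory
    delta = 8.0
    for it in range(20):
        nominal, total = dp_over_corridor(nominal, delta)
        delta = max(delta * 0.7, 0.25)       # tighten the corridor
    print("total benefit:", round(total, 3))
    ```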

  6. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves high-quality results like the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Moreover, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
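
    The separable step can be illustrated in one dimension: each scan line is smoothed by solving a single three-point (tridiagonal) weighted-least-squares system. The sketch below is not the paper's full fast global smoother (there is no guidance image and no multi-pass iteration schedule); the edge weights and parameters are illustrative assumptions.

    ```python
    # 1D sketch of the separable weighted-least-squares smoothing step: each scan
    # line is smoothed by solving a three-point (tridiagonal) system. This is not
    # the paper's full fast global smoother; edge weights come from the signal.
    import numpy as np
    from scipy.linalg import solve_banded

    def wls_smooth_1d(f, lam=20.0, sigma=0.1):
        """Minimize sum (u_i - f_i)^2 + lam * sum w_i (u_{i+1} - u_i)^2,
        with edge-stopping weights w_i = exp(-|f_{i+1} - f_i| / sigma)."""
        n = len(f)
        w = np.exp(-np.abs(np.diff(f)) / sigma)          # n-1 edge weights
        diag = 1.0 + lam * (np.concatenate(([0.0], w)) + np.concatenate((w, [0.0])))
        off = -lam * w
        ab = np.zeros((3, n))                            # banded storage
        ab[0, 1:] = off                                  # super-diagonal
        ab[1, :] = diag                                  # main diagonal
        ab[2, :-1] = off                                 # sub-diagonal
        return solve_banded((1, 1), ab, f)

    # Noisy step signal: the edge is preserved, flat regions are smoothed.
    x = np.concatenate((np.zeros(100), np.ones(100)))
    noisy = x + 0.08 * np.random.default_rng(7).normal(size=x.size)
    smooth = wls_smooth_1d(noisy)
    print("noise std before/after:",
          round(float(np.std(noisy - x)), 3), round(float(np.std(smooth - x)), 3))
    ```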

  7. Focusing on the golden ball metaheuristic: an extended study on a wider set of problems.

    PubMed

    Osaba, E; Diaz, F; Carballedo, R; Onieva, E; Perallos, A

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. In the literature, a large number of techniques of this kind can be found, and many have been proposed recently, such as the artificial bee colony and imperialist competitive algorithms. This paper focuses on one recently published technique, called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems, which are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.

  8. Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems

    PubMed Central

    Osaba, E.; Diaz, F.; Carballedo, R.; Onieva, E.; Perallos, A.

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature, including many recently proposed ones such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queens problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results. PMID:25165742

  9. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.

  10. Parallel halftoning technique using dot diffusion optimization

    NASA Astrophysics Data System (ADS)

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara

    2017-05-01

    In this paper, a novel approach for halftone images obtained by the dot diffusion (DD) method is proposed and implemented. The designed technique is based on optimization of the so-called class matrix used in the DD algorithm; it consists of generating new versions of the class matrix that contain no baron or near-baron positions, in order to minimize inconsistencies during the distribution of the error. Two class matrices with different properties are proposed, each designed for a different application: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a Linux PC. Experimental results have shown that the novel framework yields good quality in both the halftone images and the inverse-halftone images obtained. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented for real-time processing.

  11. On the utilization of engineering knowledge in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, P.

    1984-01-01

    Some current research work conducted at the University of Michigan is described to illustrate efforts to incorporate knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information into a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems; the technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user in creating configurations that are most desirable from the manufacturing assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis, intended for use in the design concept phase in a way similar to the so-called idea triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.

  12. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
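    A hedged sketch of the general idea of a merit function: trade off the surrogate's predicted objective against a term that rewards improving the approximation, here proxied by distance to already-evaluated points. The specific form below is illustrative and not necessarily the authors' formulation:

```python
import numpy as np

def merit(x, surrogate, evaluated_points, rho=1.0):
    """Merit = predicted objective minus a reward for exploring regions
    far from existing samples (where the approximation is presumably poor).

    surrogate: callable mapping an (k, d) array of points to predictions.
    evaluated_points: (m, d) array of points already run through the
    expensive model. Smaller merit is better.
    """
    x = np.atleast_2d(x)
    pred = surrogate(x)                                   # cheap prediction
    dists = np.min(np.linalg.norm(
        evaluated_points[None, :, :] - x[:, None, :], axis=2), axis=1)
    return pred - rho * dists
```

    Candidate points that minimize such a merit function are then evaluated with the true (expensive) objective, after which the surrogate and the balance parameter rho can be updated.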

  13. Development of a Digital Microarray with Interferometric Reflectance Imaging

    NASA Astrophysics Data System (ADS)

    Sevenler, Derin

    This dissertation describes a new type of molecular assay for nucleic acids and proteins. We call this technique a digital microarray, since it is conceptually similar to conventional fluorescence microarrays yet performs enumerative ('digital') counting of the number of captured molecules. Digital microarrays are approximately 10,000-fold more sensitive than fluorescence microarrays, yet maintain all of the strengths of the platform, including low cost and high multiplexing (i.e., many different tests on the same sample simultaneously). Digital microarrays use gold nanorods to label the captured target molecules. Each gold nanorod on the array is individually detected based on its light scattering, with an interferometric microscopy technique called SP-IRIS. Our optimized high-throughput version of SP-IRIS is able to scan a typical array of 500 spots in less than 10 minutes. Digital DNA microarrays may have utility in applications where sequencing is prohibitively expensive or slow. As an example, we describe a digital microarray assay for gene expression markers of bacterial drug resistance.

  14. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product/process using a computer simulator is a much less expensive option than the real physical testing. However, simulation using computationally intensive computer models can be time consuming and therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given for a recently developed space-filling design called maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
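    A minimal sketch of the maximum projection (MaxPro) idea, assuming the criterion commonly given in the literature (minimize the sum, over point pairs, of the reciprocal of the product of squared coordinate differences, so that points never nearly coincide in any one-dimensional projection); the random-search construction below is purely illustrative:

```python
import numpy as np

def maxpro_criterion(X):
    """Smaller is better: heavily penalizes any pair of design points that
    nearly coincide in any single coordinate (any 1D projection)."""
    n, _ = X.shape
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1.0 / np.prod((X[i] - X[j]) ** 2)
    return total

def random_maxpro_design(n, p, candidates=2000, seed=0):
    """Pick the best of many random Latin-hypercube-style designs on [0, 1]^p."""
    rng = np.random.default_rng(seed)
    best, best_val = None, np.inf
    for _ in range(candidates):
        # One random point in each of n equal-width slices per dimension.
        X = (np.argsort(rng.random((n, p)), axis=0) + rng.random((n, p))) / n
        val = maxpro_criterion(X)
        if val < best_val:
            best, best_val = X, val
    return best
```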

  15. Space-filling designs for computer experiments: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan

    Improving the quality of a product/process using a computer simulator is a much less expensive option than the real physical testing. However, simulation using computationally intensive computer models can be time consuming and therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given for a recently developed space-filling design called maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.

  16. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., the so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented, with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
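    A minimal sketch contrasting two of the commonly applied (sub-optimal) rescaling approaches mentioned above; the optimal strategy based on triple collocation additionally requires a third independent estimate and is not shown:

```python
import numpy as np

def variance_matching(obs, model):
    """Rescale observations to match the model's mean and standard deviation."""
    return model.mean() + (obs - obs.mean()) * model.std() / obs.std()

def regression_rescaling(obs, model):
    """Least-squares regression of the model on the observations; the
    observation anomalies are damped by the model-observation covariance."""
    slope = np.cov(model, obs)[0, 1] / np.var(obs, ddof=1)
    return model.mean() + slope * (obs - obs.mean())
```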

  17. Optimization of hybrid-type instrumentation for Pu accountancy of U/TRU ingot in pyroprocessing.

    PubMed

    Seo, Hee; Won, Byung-Hee; Ahn, Seong-Kyu; Lee, Seung Kyu; Park, Se-Hwan; Park, Geun-Il; Menlove, Spencer H

    2016-02-01

    One of the final products of pyroprocessing for spent nuclear fuel recycling is a U/TRU ingot consisting of rare earth (RE), uranium (U), and transuranic (TRU) elements. The amounts of nuclear materials in a U/TRU ingot must be measured as precisely as possible in order to secure the safeguardability of a pyroprocessing facility, as the ingot contains most of the Pu from the spent nuclear fuel. In this paper, we propose a new nuclear material accountancy method for measurement of the Pu mass in a U/TRU ingot. It is a hybrid system combining two techniques, based on measurement of neutrons from (1) fast- and (2) thermal-neutron-induced fission events. In technique #1, the signature is the change in the average neutron energy, determined using the so-called ring ratio method: two detector rings are positioned close to and far from the sample, respectively, to measure the increase in the average neutron energy due to the increased number of fast-neutron-induced fission events and, in turn, the Pu mass in the ingot. We call this technique fast-neutron energy multiplication (FNEM). In technique #2, well known as Passive Neutron Albedo Reactivity (PNAR), the signature is the change in neutron population resulting from thermal-neutron-induced fission events with and without a cadmium (Cd) liner in the sample's cavity wall, expressed as the Cd ratio. In the present study, it was considered that the use of a hybrid FNEM×PNAR technique would significantly enhance the Pu mass signature. Therefore, the performance of such a system was investigated for different detector parameters in order to determine the optimal geometry. The performance was additionally evaluated by MCNP6 Monte Carlo simulations for different U/TRU compositions reflecting different burnups (BU), initial enrichments (IE), and cooling times (CT) to estimate its performance in real situations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting

    NASA Astrophysics Data System (ADS)

    Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt

    2018-04-01

    We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge this is the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters) which helps better understand optimal networks and their minimal costs.

  19. Generating compact classifier systems using a simple artificial immune system.

    PubMed

    Leung, Kevin; Cheong, France; Cheong, Christopher

    2007-10-01

    Current artificial immune system (AIS) classifiers have two major problems: 1) their populations of B-cells can grow to huge proportions, and 2) optimizing one B-cell (part of the classifier) at a time does not necessarily guarantee that the B-cell pool (the whole classifier) will be optimized. In this paper, the design of a new AIS algorithm and classifier system called simple AIS is described. It is different from traditional AIS classifiers in that it takes only one B-cell, instead of a B-cell pool, to represent the classifier. This approach ensures global optimization of the whole system, and in addition, no population control mechanism is needed. The classifier was tested on seven benchmark data sets using different classification techniques and was found to be very competitive when compared to other classifiers.

  20. Systematic Propulsion Optimization Tools (SPOT)

    NASA Technical Reports Server (NTRS)

    Bower, Mark; Celestian, John

    1992-01-01

    This paper describes a computer program written by senior-level Mechanical Engineering students at the University of Alabama in Huntsville which is capable of optimizing user-defined delivery systems for carrying payloads into orbit. The custom propulsion system is designed by the user through the input of configuration, payload, and orbital parameters. The primary advantages of the software, called Systematic Propulsion Optimization Tools (SPOT), are a user-friendly interface and a modular FORTRAN 77 code designed for ease of modification. The optimization of variables in an orbital delivery system is of critical concern in the propulsion environment. The mass of the overall system must be minimized within the maximum stress, force, and pressure constraints. SPOT utilizes the Design Optimization Tools (DOT) program for the optimization techniques. The SPOT program is divided into a main program and five modules: aerodynamic losses, orbital parameters, liquid engines, solid engines, and nozzles. The program is designed to be upgraded easily and expanded to meet specific user needs. A user's manual and a programmer's manual are currently being developed to facilitate implementation and modification.

  1. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
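    A generic, minimal sketch of the surrogate-based loop: fit a cheap surrogate to a handful of expensive evaluations, then optimize the surrogate instead of the expensive model. The objective function, design ranges, and radial-basis-function choice below are illustrative placeholders, not the rocket-injector application described above:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_objective(x):
    """Placeholder for a high-fidelity, computationally expensive model."""
    return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2 + 0.05 * np.sin(8 * x[0])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 2))               # design of experiments
y = np.array([expensive_objective(x) for x in X])  # expensive evaluations

surrogate = RBFInterpolator(X, y, smoothing=1e-6)  # cheap approximation

# Optimize the surrogate; the expensive model is only consulted to verify.
res = minimize(lambda x: surrogate(x[None, :])[0], x0=np.zeros(2),
               bounds=[(-1, 1), (-1, 1)])
print("surrogate optimum:", res.x, "true value there:", expensive_objective(res.x))
```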

  2. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of a composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized, and various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  3. Dynamic positioning configuration and its first-order optimization

    NASA Astrophysics Data System (ADS)

    Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu

    2014-02-01

    Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It is shown that the D-optimization developed for discrete optimization is still valid in dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination and rotation operations, and propose some D-optimal simplex dynamic configurations: (1) the (semi)circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and the D-optimal helical configuration, which is close to the GPS constellation, in 3-dimensional space. The initial design of the GPS constellation can be approximately treated as a combination of 24 D-optimal helixes obtained by properly adjusting the ascending nodes of different satellites to realize a so-called Walker constellation. In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and the helical curve configuration are still D-optimal. It is shown that the given total observation time determines the optimal frequency (repeatability) of moving known points and vice versa, and that one way to improve the repeatability is to increase the rotational speed. Under Newton's law of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations: the first is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configurations; the second is the nested cone configuration composed of n cones; and the last is the nested helical configuration composed of n orbital planes. It is shown that an effective way to achieve high coverage is to employ a configuration composed of a certain number of moving known points instead of a simplex configuration (such as the D-optimal helical configuration), and that one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexible coverage and flexible repeatability. Alternatively, how to optimally generate and assess the discrete configurations sampled from the continuous one is discussed.
The proposed configuration optimization framework takes into account the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and the regular polyhedrons (regular tetrahedron, cube, regular octahedron, regular icosahedron, or regular dodecahedron). It shows that the conclusions drawn by the proposed technique are more general and no longer limited by different sampling schemes. By the conditional equation of the D-optimal nested helical configuration, the relevant issues of GNSS constellation optimization are solved, and some examples are performed with the GPS constellation to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS whose orbital inclination and orbital altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.
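A small numerical illustration of the D-optimality criterion used above, for a 2D position-plus-clock-offset estimation problem: each observed direction contributes a design-matrix row [cos(az), sin(az), 1], and configurations are compared by det(AᵀA). The geometry is hypothetical, but it shows why evenly spread (circular) directions are favoured over clustered ones:

```python
import numpy as np

def d_criterion(azimuths_deg):
    """det(A^T A) for a 2D position + clock-offset design; one row per
    observation direction: [cos(az), sin(az), 1]. Larger is better."""
    az = np.radians(np.asarray(azimuths_deg, dtype=float))
    A = np.column_stack([np.cos(az), np.sin(az), np.ones_like(az)])
    return np.linalg.det(A.T @ A)

uniform = np.linspace(0.0, 360.0, 8, endpoint=False)   # circular configuration
clustered = np.linspace(0.0, 60.0, 8)                  # all in one sector
print(d_criterion(uniform), ">", d_criterion(clustered))
```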

  4. Computerized planning of prostate cryosurgery using variable cryoprobe insertion depth.

    PubMed

    Rossi, Michael R; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed

    2010-02-01

    The current study presents a computerized planning scheme for prostate cryosurgery using a variable insertion depth strategy. This study is a part of an ongoing effort to develop computerized tools for cryosurgery. Based on typical clinical practices, previous automated planning schemes have required that all cryoprobes be aligned at a single insertion depth. The current study investigates the benefit of removing this constraint, in comparison with results based on uniform insertion depth planning as well as the so-called "pullback procedure". Planning is based on the so-called "bubble-packing method", and its quality is evaluated with bioheat transfer simulations. This study is based on five 3D prostate models, reconstructed from ultrasound imaging, and cryoprobe active length in the range of 15-35 mm. The variable insertion depth technique is found to consistently provide superior results when compared to the other placement methods. Furthermore, it is shown that both the optimal active length and the optimal number of cryoprobes vary among prostate models, based on the size and shape of the target region. Due to its low computational cost, the new scheme can be used to determine the optimal cryoprobe layout for a given prostate model in real time. Copyright 2008 Elsevier Inc. All rights reserved.

  5. Maximizing the potential of direct aperture optimization through collimator rotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milette, Marie-Pierre; Otto, Karl; Medical Physics, BC Cancer Agency-Vancouver Centre, Vancouver, British Columbia

    Intensity-modulated radiation therapy (IMRT) treatment plans are conventionally produced by the optimization of fluence maps followed by a leaf sequencing step. An alternative to fluence based inverse planning is to optimize directly the leaf positions and field weights of multileaf collimator (MLC) apertures. This approach is typically referred to as direct aperture optimization (DAO). It has been shown that equivalent dose distributions may be generated that have substantially fewer monitor units (MU) and number of apertures compared to fluence based optimization techniques. Here we introduce a DAO technique with rotated apertures that we call rotating aperture optimization (RAO). The advantages of collimator rotation in IMRT have been shown previously and include higher fluence spatial resolution, increased flexibility in the generation of aperture shapes and less interleaf effects. We have tested our RAO algorithm on a complex C-shaped target, seven nasopharynx cancer recurrences, and one multitarget nasopharynx carcinoma patient. A study was performed in order to assess the capabilities of RAO as compared to fixed collimator angle DAO. The accuracy of fixed and rotated collimator aperture delivery was also verified. An analysis of the optimized treatment plans indicates that plans generated with RAO are as good as or better than DAO while maintaining a smaller number of apertures and MU than fluence based IMRT. Delivery verification results show that RAO is less sensitive to tongue and groove effects than DAO. Delivery time is currently increased due to the collimator rotation speed although this is a mechanical limitation that can be eliminated in the future.

  6. MR CAT scan: a modular approach for hybrid imaging.

    PubMed

    Hillenbrand, C; Hahn, D; Haase, A; Jakob, P M

    2000-07-01

    In this study, a modular concept for NMR hybrid imaging is presented. This concept essentially integrates different imaging modules in a sequential fashion and is therefore called CAT (combined acquisition technique). CAT is not a single specific measurement sequence, but rather a sequence design concept whereby distinct acquisition techniques with varying imaging parameters are employed in rapid succession in order to cover k-space. The power of the CAT approach is that it provides high flexibility in optimizing the acquisition with respect to the available imaging time and the desired image quality. Important CAT sequence optimization steps include the appropriate choice of the k-space coverage ratio and the application of mixed-bandwidth technology. Details of both the CAT methodology and possible CAT acquisition strategies, such as FLASH/EPI-, RARE/EPI- and FLASH/BURST-CAT, are provided. Examples from imaging experiments in phantoms and healthy volunteers, including mixed-bandwidth acquisitions, are provided to demonstrate the feasibility of the proposed CAT concept.

  7. White blood cell segmentation by circle detection using electromagnetism-like optimization.

    PubMed

    Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
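    A hedged sketch of the kind of objective function described above: a candidate circle (x0, y0, r) is scored by the fraction of its circumference that falls on edge pixels of the edge map. The exact resemblance measure in the paper may differ, and the EMO search itself is not reproduced here:

```python
import numpy as np

def circle_fitness(candidate, edge_map, n_samples=200):
    """Fraction of sampled circumference points that land on edge pixels.

    candidate: (x0, y0, r); edge_map: 2D boolean array (True = edge pixel).
    Higher is better; EMO would evolve candidate circles to maximize this.
    """
    x0, y0, r = candidate
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.round(x0 + r * np.cos(theta)).astype(int)
    ys = np.round(y0 + r * np.sin(theta)).astype(int)
    h, w = edge_map.shape
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)   # clip to the image
    hits = edge_map[ys[inside], xs[inside]]
    return hits.sum() / n_samples
```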

  8. White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization

    PubMed Central

    Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability. PMID:23476713

  9. CometBoards Users Manual Release 1.0

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo

    1996-01-01

    Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concept of stiffness and integrated force methods have been coupled to each design method. The code that contains both these design and analysis tools, by reading input information from analysis and design data files, can cast the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, which is an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems have been solved by using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this users manual. CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.

  10. Step-by-Step Technique for Segmental Reconstruction of Reverse Hill-Sachs Lesions Using Homologous Osteochondral Allograft.

    PubMed

    Alkaduhimi, Hassanin; van den Bekerom, Michel P J; van Deurzen, Derek F P

    2017-06-01

    Posterior shoulder dislocations are accompanied by high forces and can result in an anteromedial humeral head impression fracture called a reverse Hill-Sachs lesion. This reverse Hill-Sachs lesion can result in serious complications including posttraumatic osteoarthritis, posterior dislocations, osteonecrosis, persistent joint stiffness, and loss of shoulder function. Treatment is challenging and depends on the amount of bone loss. Several techniques have been reported to describe the surgical treatment of lesions larger than 20%. However, there is still limited evidence with regard to the optimal procedure. Favorable results have been reported by performing segmental reconstruction of the reverse Hill-Sachs lesion with bone allograft. Although the procedure of segmental reconstruction has been used in several studies, its technique has not yet been well described in detail. In this report we propose a step-by-step description of the technique how to perform a segmental reconstruction of a reverse Hill-Sachs defect.

  11. Diffraction analysis of customized illumination technique

    NASA Astrophysics Data System (ADS)

    Lim, Chang-Moon; Kim, Seo-Min; Eom, Tae-Seung; Moon, Seung Chan; Shin, Ki S.

    2004-05-01

    Various enhancement techniques such as alternating PSM, chrome-less phase lithography, double exposure, etc. have been considered as driving forces to push the production k1 factor below 0.35. Among them, layer-specific optimization of the illumination mode, the so-called customized illumination technique, has recently received considerable attention from lithographers. A new approach for illumination customization based on diffraction spectrum analysis is suggested in this paper. The illumination pupil is divided into various diffraction domains by comparing the similarity of the confined diffraction spectrum. The singular imaging property of each diffraction domain makes it easier to build and understand the customized illumination shape. By comparing the goodness of the image in each domain, it was possible to achieve the customized shape of illumination. With the help of this technique, it was found that layout changes do not change the shape of the customized illumination mode.

  12. IOTA: integration optimization, triage and analysis tool for the processing of XFEL diffraction images.

    PubMed

    Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T

    2016-06-01

    Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
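    A minimal sketch of the grid-search idea; the parameter names, ranges, and scoring callback below are hypothetical stand-ins, not IOTA's actual interface:

```python
import itertools

def grid_search_spotfinding(image, score_fn,
                            min_spot_areas=(3, 6, 9, 12),
                            intensity_sigmas=(3.0, 4.0, 5.0, 6.0)):
    """Try every parameter combination on one image; keep the combination
    whose processing result scores best (e.g., number of indexed reflections).

    score_fn(image, min_spot_area=..., intensity_sigma=...) -> float is a
    hypothetical callback wrapping spot finding, indexing and integration.
    """
    best_params, best_score = None, float("-inf")
    for area, sigma in itertools.product(min_spot_areas, intensity_sigmas):
        score = score_fn(image, min_spot_area=area, intensity_sigma=sigma)
        if score > best_score:
            best_params, best_score = (area, sigma), score
    return best_params, best_score
```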

  13. Could Revision of the Embryology Influence Our Cesarean Delivery Technique: Towards an Optimized Cesarean Delivery for Universal Use

    PubMed Central

    Stark, Michael; Mynbaev, Ospan; Vassilevski, Yuri; Rozenberg, Patrick

    2016-01-01

    Until today, there is no standardized Cesarean Section method, and many variations exist. The main variations concern the type of abdominal incision, the use of abdominal packs, suturing the uterus in one or two layers, and suturing the peritoneal layers or leaving them open. One of the open questions is the optimal location for opening the uterus. Recently, omission of the bladder flap was recommended. The anatomy and histology that follow from embryological knowledge might help to answer this question. The working thesis is that the higher the incision is made, the more muscle tissue is damaged, in contrast to an incision in the lower segment, where fibrous tissue prevails. In this perspective, a call for participation in a two-armed prospective study is included, which could result in an optimal, evidence-based Cesarean Section for universal use. PMID:28078171

  14. Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.

    PubMed

    Heydari, Ali; Balakrishnan, Sivasubramanya N

    2013-01-01

    To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs of: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights are provided. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with single set of weights and it provides comprehensive feedback solutions online, though it is trained offline.

  15. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  16. The DCU: the detector control unit for SPICA-SAFARI

    NASA Astrophysics Data System (ADS)

    Clénet, Antoine; Ravera, Laurent; Bertrand, Bernard; den Hartog, Roland H.; Jackson, Brian D.; van Leeuven, Bert-Joost; van Loon, Dennis; Parot, Yann; Pointecouteau, Etienne; Sournac, Anthony

    2014-08-01

    IRAP is developing the warm electronics, the so-called Detector Control Unit (DCU), in charge of the readout of SPICA-SAFARI's TES-type detectors. The architecture of the electronics used to read out the 3 500 sensors of the 3 focal plane arrays is based on the frequency-domain multiplexing (FDM) technique. In each of the 24 detection channels, the data of up to 160 pixels are multiplexed in the frequency domain between 1 and 3.3 MHz. The DCU provides the AC signals to voltage-bias the detectors; it demodulates the detector data, which are read out in the cold by a SQUID; and it computes a feedback signal for the SQUID to linearize the detection chain in order to optimize its dynamic range. The feedback is computed with a specific technique, the so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e., several µs) and with fast signals (i.e., frequency carriers at 3.3 MHz). This digital signal processing is complex and has to be done at the same time for the 3 500 pixels; it thus requires optimization of the power consumption. We took advantage of the relatively small science signal bandwidth (i.e., 20-40 Hz) to decouple the signal sampling frequency (10 MHz) from the data processing rate. Thanks to this method we managed to reduce the total number of operations per second, and thus the power consumption of the digital processing circuit, by a factor of 10. Moreover, we used time multiplexing techniques to share the resources of the circuit (e.g., a single BBFB module processes 32 pixels). The current version of the firmware is under validation in a Xilinx Virtex 5 FPGA; the final version will be developed in a space-qualified digital ASIC. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, the operation of the detection and readout chains requires properly defining more than 17 500 parameters (about 5 parameters per pixel), so it is mandatory to work out an automatic procedure to set these optimal values. We defined a fast algorithm which characterizes the phase correction to be applied by the BBFB firmware and the pixel resonance frequencies. We also defined a technique to set the AC-carrier initial phases in such a way that the amplitude of their sum is minimized (for a better use of the DAC dynamic range).

  17. Static analysis of class invariants in Java programs

    NASA Astrophysics Data System (ADS)

    Bonilla-Quintero, Lidia Dionisia

    2011-12-01

    This paper presents a technique for the automatic inference of class invariants from Java bytecode. Class invariants are very important for both compiler optimization and as an aid to programmers in their efforts to reduce the number of software defects. We present the original DC-invariant analysis from Adam Webber, talk about its shortcomings and suggest several different ways to improve it. To apply the DC-invariant analysis to identify DC-invariant assertions, all that one needs is a monotonic method analysis function and a suitable assertion domain. The DC-invariant algorithm is very general; however, the method analysis can be highly tuned to the problem in hand. For example, one could choose shape analysis as the method analysis function and use the DC-invariant analysis to simply extend it to an analysis that would yield class-wide invariants describing the shapes of linked data structures. We have a prototype implementation: a system we refer to as "the analyzer" that infers DC-invariant unary and binary relations and provides them to the user in a human readable format. The analyzer uses those relations to identify unnecessary array bounds checks in Java programs and perform null-reference analysis. It uses Adam Webber's relational constraint technique for the class-invariant binary relations. Early results with the analyzer were very imprecise in the presence of "dirty-called" methods. A dirty-called method is one that is called, either directly or transitively, from any constructor of the class, or from any method of the class at a point at which a disciplined field has been altered. This result was unexpected and forced an extensive search for improved techniques. An important contribution of this paper is the suggestion of several ways to improve the results by changing the way dirty-called methods are handled. The new techniques expand the set of class invariants that can be inferred over Webber's original results. The technique that produces better results uses in-line analysis. Final results are promising: we can infer sound class invariants for full-scale, not just toy applications.

  18. Double emulsion solvent evaporation techniques used for drug encapsulation.

    PubMed

    Iqbal, Muhammad; Zafar, Nadiah; Fessi, Hatem; Elaissari, Abdelhamid

    2015-12-30

    Double emulsions are complex systems, also called "emulsions of emulsions", in which the droplets of the dispersed phase contain one or more types of smaller dispersed droplets themselves. Double emulsions have the potential for encapsulation of both hydrophobic as well as hydrophilic drugs, cosmetics, foods and other high value products. Techniques based on double emulsions are commonly used for the encapsulation of hydrophilic molecules, which suffer from low encapsulation efficiency because of rapid drug partitioning into the external aqueous phase when using single emulsions. The main issue when using double emulsions is their production in a well-controlled manner, with homogeneous droplet size by optimizing different process variables. In this review special attention has been paid to the application of double emulsion techniques for the encapsulation of various hydrophilic and hydrophobic anticancer drugs, anti-inflammatory drugs, antibiotic drugs, proteins and amino acids and their applications in theranostics. Moreover, the optimized ratio of the different phases and other process parameters of double emulsions are discussed. Finally, the results published regarding various types of solvents, stabilizers and polymers used for the encapsulation of several active substances via double emulsion processes are reported. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Hybrid switched time-optimal control of underactuated spacecraft

    NASA Astrophysics Data System (ADS)

    Olivares, Alberto; Staffetti, Ernesto

    2018-04-01

    This paper studies the time-optimal control problem for an underactuated rigid spacecraft equipped with both reaction wheels and gas jet thrusters that generate control torques about two of the principal axes of the spacecraft. Since a spacecraft equipped with two reaction wheels is not controllable, whereas a spacecraft equipped with two gas jet thrusters is controllable, this mixed actuation ensures controllability in the case in which one of the control axes is unactuated. A novel control logic is proposed for this hybrid actuation in which the reaction wheels are the main actuators and the gas jet thrusters act only after saturation or anticipating future saturation of the reaction wheels. The presence of both reaction wheels and gas jet thrusters gives rise to two operating modes for each actuated axis and therefore the spacecraft can be regarded as a switched dynamical system. The time-optimal control problem for this system is reformulated using the so-called embedding technique and the resulting problem is a classical optimal control problem. The main advantages of this technique are that integer or binary variables do not have to be introduced to model switching decisions between modes and that assumptions about the number of switches are not necessary. It is shown in this paper that this general method for the solution of optimal control problems for switched dynamical systems can efficiently deal with time-optimal control of an underactuated rigid spacecraft in which bound constraints on the torque of the actuators and on the angular momentum of the reaction wheels are taken into account.

  20. A novel method to accelerate orthodontic tooth movement

    PubMed Central

    Buyuk, S. Kutalmış; Yavuz, Mustafa C.; Genc, Esra; Sunar, Oguzhan

    2018-01-01

    This clinical case report presents fixed orthodontic treatment of a patient with moderately crowded teeth. It was performed with a new technique called ‘discision’. Discision method that was described for the first time by the present authors yielded predictable outcomes, and orthodontic treatment was completed in a short period of time. The total duration of orthodontic treatment was 4 months. Class I molar and canine relationships were established at the end of the treatment. Moreover, crowding in the mandible and maxilla was corrected, and optimal overjet and overbite were established. No scar tissue was observed in any gingival region on which discision was performed. The discision technique was developed as a minimally invasive alternative method to piezocision technique, and the authors suggest that this new method yields good outcomes in achieving rapid tooth movement. PMID:29436571

  1. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
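    A minimal sketch of the segmentation scheme described above, using the set union as a (hypothetical) measure function, the summed symmetric difference as the segment difference, and plain memoized recursion in place of the paper's more efficient algorithms:

```python
from functools import lru_cache

def optimal_segmentation(item_sets, k):
    """Split a sequence of item sets into k contiguous segments minimizing
    the total segment difference. Returns (cost, list of segment end indices)."""
    n = len(item_sets)

    def segment_difference(i, j):            # segment covers time points i..j-1
        rep = set().union(*item_sets[i:j])   # measure function: union
        return sum(len(rep ^ s) for s in item_sets[i:j])

    @lru_cache(maxsize=None)
    def best(i, segments_left):
        """Minimal cost of segmenting item_sets[i:] into segments_left parts."""
        if segments_left == 1:
            return segment_difference(i, n), [n]
        best_cost, best_cuts = float("inf"), None
        for j in range(i + 1, n - segments_left + 2):
            cost = segment_difference(i, j)
            rest_cost, rest_cuts = best(j, segments_left - 1)
            if cost + rest_cost < best_cost:
                best_cost, best_cuts = cost + rest_cost, [j] + rest_cuts
        return best_cost, best_cuts

    return best(0, k)

# Example: a software version history as sets of changed modules.
history = [{"io"}, {"io", "net"}, {"net"}, {"ui"}, {"ui", "db"}, {"db"}]
print(optimal_segmentation(history, 3))      # total cost and segment boundaries
```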

  2. An optimized resistor pattern for temperature gradient control in microfluidics

    NASA Astrophysics Data System (ADS)

    Selva, Bertrand; Marchalot, Julien; Jullien, Marie-Caroline

    2009-06-01

    In this paper, we demonstrate the possibility of generating high temperature gradients with a linear temperature profile when heating is provided in situ. Thanks to improved optimization algorithms, the shape of the resistors, which constitute the heating source, is optimized by applying the genetic algorithm NSGA-II (acronym for the non-dominated sorting genetic algorithm) (Deb et al 2002 IEEE Trans. Evol. Comput. 6 2). Experimental validation of the linear temperature profile within the cavity is carried out using a thermally sensitive fluorophore called Rhodamine B (Ross et al 2001 Anal. Chem. 73 4117-23, Erickson et al 2003 Lab Chip 3 141-9). The high level of agreement obtained between experimental and numerical results validates the accuracy of this method for generating highly controlled temperature profiles. In the field of actuation, such a device is of potential interest since it allows bubbles or droplets to be moved and controlled by means of thermocapillary effects (Baroud et al 2007 Phys. Rev. E 75 046302). Digital microfluidics is a critical area in the field of microfluidics (Dreyfus et al 2003 Phys. Rev. Lett. 90 14) as well as in so-called lab-on-a-chip technology. Through an example, the large application potential of such a technique is demonstrated: handling a single bubble driven along a cavity using simple and tunable embedded resistors.

  3. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.

  4. Transtracheal oxygen and positive airway pressure: A salvage technique in overlap syndrome.

    PubMed

    Biscardi, Frank Hugo; Rubio, Edmundo Raul

    2014-01-01

    The coexistence of sleep apnea-hypopnea syndrome (SAHS) with chronic obstructive pulmonary disease (COPD) occurs commonly. This so-called overlap syndrome leads to more profound hypoxemia, hypercapnic respiratory failure, and pulmonary hypertension than each of these conditions independently. Not infrequently, these patients show profound hypoxemia, despite optimal continuous positive airway pressure (CPAP) therapy for their SAHS. We report a case where CPAP therapy with additional in-line oxygen supplementation failed to accomplish adequate oxygenation. Adding transtracheal oxygen therapy (TTOT) to CPAP therapy provided better results. We review the literature on transtracheal oxygen therapy and how this technique may play a significant role in these complicated patients with overlap syndrome, obviating the need for more invasive procedures, such as tracheostomy.

  5. Early driver fatigue detection from electroencephalography signals using artificial neural networks.

    PubMed

    King, L M; Nguyen, H T; Lal, S K L

    2006-01-01

    This paper describes a driver fatigue detection system using an artificial neural network (ANN). Using electroencephalogram (EEG) data sampled from 20 professional truck drivers and 35 non-professional drivers, the time-domain data are processed into alpha, beta, delta and theta bands and then presented to the neural network to detect the onset of driver fatigue. The neural network uses a training optimization technique called the magnified gradient function (MGF). This technique reduces the time required for training by modifying the standard back-propagation (SBP) algorithm. The MGF is shown to classify professional driver fatigue with 81.49% accuracy (80.53% sensitivity, 82.44% specificity) and non-professional driver fatigue with 83.06% accuracy (84.04% sensitivity and 82.08% specificity).
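
    As a hedged illustration of the pre-processing step described above, the sketch below estimates delta, theta, alpha and beta band powers from one EEG channel using Welch's method; the sampling rate, band edges and window length are assumptions, not values taken from the paper, and the resulting features would then be presented to the neural network.

        # Hedged sketch: converting a raw EEG channel into delta/theta/alpha/beta
        # band powers, the kind of features fed to a fatigue-detection ANN.
        # Sampling rate and band edges are assumptions, not values from the paper.
        import numpy as np
        from scipy.signal import welch

        BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
                 "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

        def band_powers(eeg, fs=256.0):
            """Return the average power in each EEG band (Welch periodogram)."""
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
            feats = {}
            for name, (lo, hi) in BANDS.items():
                mask = (freqs >= lo) & (freqs < hi)
                feats[name] = np.trapz(psd[mask], freqs[mask])
            return feats

        # Synthetic example: 30 s of noise plus a 10 Hz (alpha-band) component.
        fs = 256.0
        t = np.arange(0, 30, 1 / fs)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        print(band_powers(eeg, fs))   # the alpha-band power should dominate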

  6. Optimal service using Matlab - simulink controlled Queuing system at call centers

    NASA Astrophysics Data System (ADS)

    Balaji, N.; Siva, E. P.; Chandrasekaran, A. D.; Tamilazhagan, V.

    2018-04-01

    This paper presents graphical, integrated-model-based academic research on telephone call centres. It introduces an important feature of real queue systems: impatient customers and abandonments. The modern call centre is a complex socio-technical system, and queuing theory has become a suitable tool in the telecom industry for providing better online services. The Matlab-Simulink multi-queue structured models presented here provide better solutions for complex situations at call centres, and service performance measures are analyzed at the optimal level through the Simulink queuing model.

  7. Inventions on baker's yeast storage and activation at the bakery plant.

    PubMed

    Gélinas, Pierre

    2010-01-01

    Baker's yeast is the gas-forming ingredient in bakery products. Methods have been invented to properly handle baker's yeast and optimize its activity at the bakery plant. Over the years, incentives for inventions on yeast storage and activation have greatly changed depending on trends in the baking industry. For example, retailers' devices for cutting bulk pressed yeast and techniques for activating dry yeast have now lost their importance. A review of patents for invention indicates that activation of baker's yeast has been a very important issue for bakers, for example, with baking ingredients called yeast foods. In recent years, and especially for highly automated bakeries, interest has moved to equipment and processes for optimized storage of liquid cream yeast to thoroughly control dough fermentation and bread quality.

  8. Optimal design of composite hip implants using NASA technology

    NASA Technical Reports Server (NTRS)

    Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.

    1993-01-01

    Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The original NASA in-house codes were developed for the optimization of aerospace structures. The adapted code, which was called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters which substantially reduced stress concentrations in the bone adjacent to the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.

  9. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and in applications' fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is typically handled by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
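
    To make the idea concrete, the sketch below assembles the ordinary kriging system for a single query point and multiplies the covariance by a compactly supported taper so that the system becomes sparse; the exponential covariance, the taper, the data and the direct sparse solve (the paper uses iterative solvers) are illustrative assumptions, not the authors' implementation.

        # Hedged sketch of ordinary kriging with covariance tapering for one
        # query point.  Covariance model, taper and data are illustrative.
        import numpy as np
        from scipy.sparse import csr_matrix, bmat
        from scipy.sparse.linalg import spsolve

        def exp_cov(h, sill=1.0, rng=10.0):
            return sill * np.exp(-h / rng)

        def taper(h, radius=15.0):
            """Compactly supported (spherical) taper: zero beyond 'radius'."""
            t = np.clip(h / radius, 0.0, 1.0)
            return (1.0 - 1.5 * t + 0.5 * t**3) * (h < radius)

        gen = np.random.default_rng(0)
        pts = gen.uniform(0, 100, size=(500, 2))        # scattered sample locations
        vals = np.sin(pts[:, 0] / 20.0) + 0.1 * gen.standard_normal(500)
        query = np.array([50.0, 50.0])

        # Tapered covariance among data points (built densely here for clarity).
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        C = csr_matrix(exp_cov(d) * taper(d))

        # Ordinary kriging system:  [C 1; 1^T 0] [w; mu] = [c0; 1]
        ones = csr_matrix(np.ones((len(pts), 1)))
        A = bmat([[C, ones], [ones.T, None]], format="csr")
        h0 = np.linalg.norm(pts - query, axis=1)
        rhs = np.append(exp_cov(h0) * taper(h0), 1.0)

        sol = spsolve(A, rhs)
        weights = sol[:-1]                              # kriging weights (sum to 1)
        print("estimate:", weights @ vals, " weight sum:", weights.sum())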

  10. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
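
    For readers unfamiliar with the method, the sketch below is a minimal, generic PSO loop minimizing a stand-in objective; in the paper the objective would compare SAG model output against the observational constraints, which is not reproduced here, and the inertia and acceleration coefficients are illustrative choices.

        # Minimal, generic particle swarm optimization (PSO) sketch.  The
        # objective is a stand-in for the model-versus-observations likelihood.
        import numpy as np

        def objective(x):                      # placeholder "negative likelihood"
            return np.sum((x - 0.3) ** 2, axis=-1)

        def pso(dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1, 1, (n_particles, dim))   # positions (parameters)
            v = np.zeros_like(x)                         # velocities
            pbest, pbest_f = x.copy(), objective(x)      # personal bests
            g = pbest[np.argmin(pbest_f)].copy()         # global best
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                f = objective(x)
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, objective(g)

        best, best_f = pso()
        print(best, best_f)   # converges toward 0.3 in every dimension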

  11. Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale

    1997-01-01

    The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
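
    As a hedged illustration of the underlying fully stressed/fully utilized idea (not of the MFUD modification itself, which the paper develops within the Integrated Force Method), the sketch below resizes member areas so that each member is governed by the larger of its stress ratio and a common displacement ratio; the stand-in analysis and allowables are illustrative. Scaling every member by the same displacement ratio is precisely the behavior that tends to overdesign, which is what MFUD sets out to alleviate.

        # Hedged sketch of classical fully stressed / fully utilized resizing
        # (not the MFUD modification).  The "analysis" is a stand-in function.
        import numpy as np

        def analyze(areas):
            """Placeholder analysis returning member stresses and one controlled
            displacement; a real loop would call a finite element (or IFM) solver."""
            stresses = 1000.0 / areas              # illustrative load / area
            displacement = np.sum(5.0 / areas)     # illustrative flexibility sum
            return stresses, displacement

        def fully_utilized_design(areas, sigma_allow=250.0, disp_allow=0.12, iters=30):
            for _ in range(iters):
                stresses, disp = analyze(areas)
                ratio = np.maximum(np.abs(stresses) / sigma_allow, disp / disp_allow)
                areas = areas * ratio              # each member sized by its governing ratio
            return areas

        print(fully_utilized_design(np.array([1.0, 2.0, 4.0])))
        # -> approximately [72.9, 145.8, 291.7]; the displacement limit governs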

  12. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

    The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraint effects, designers adopt deployable configurations on the spacecraft, which result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, there is a lack of integration of design optimization systems into operational optimization and the utility maximization of spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that the amount of photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes the design and operation of the EPS, Attitude Determination and Control System (ADCS), and communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue, for the ADCS operations, controllers based on Model Predictive Control that are effective for constraint handling were developed and implemented. All the suggested design and operation methodologies are applied to the mission "CADRE", a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness and capability of the methodology to enhance CADRE's capabilities, and its ability to be applied to a variety of missions.

  13. A novel clinical decision support system using improved adaptive genetic algorithm for the assessment of fetal well-being.

    PubMed

    Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin

    2015-01-01

    A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. Also, this search algorithm utilizes three different fitness functions (two single-objective fitness functions and a multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. Also, the classification results are compared with those of other Feature Reduction techniques to substantiate its exhaustive search towards the global optimum. In addition, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
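
    As an illustration of the sigma-scaling idea mentioned above (a generic GA technique; the exact form used in IAGA may differ), the sketch below rescales raw fitness values by their distance from the population mean in units of the standard deviation, which moderates selection pressure and helps avoid premature convergence.

        # Hedged sketch of sigma scaling for GA selection; the exact form used
        # in IAGA may differ.  Scaled fitness = 1 + (f - mean)/(2*std), floored at 0.
        import numpy as np

        def sigma_scale(fitness, floor=0.0):
            f = np.asarray(fitness, dtype=float)
            std = f.std()
            if std == 0.0:                      # identical individuals: uniform selection
                return np.ones_like(f)
            return np.maximum(1.0 + (f - f.mean()) / (2.0 * std), floor)

        raw = [1.0, 1.1, 1.2, 9.0, 1.05]        # a super-fit outlier no longer dominates
        print(sigma_scale(raw))                 # roulette selection after scaling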

  14. Safety Guided Design of Crew Return Vehicle in Concept Design Phase Using STAMP/STPA

    NASA Astrophysics Data System (ADS)

    Nakao, H.; Katahira, M.; Miyamoto, Y.; Leveson, N.

    2012-01-01

    In the concept development and design phase of a new space system, such as a Crew Vehicle, designers tend to focus on how to implement new technology. Designers also consider the difficulty of using the new technology and trade off several system design candidates. They then choose an optimal design from the candidates. Safety should be a key aspect driving optimal concept design. However, in past concept design activities, safety analysis such as FTA has not been used to drive the design, because such analysis techniques focus on component failure and component failure cannot be considered in the concept design phase. The solution to these problems is to apply a new hazard analysis technique called STAMP/STPA. STAMP/STPA defines safety as a control problem rather than a failure problem and identifies hazardous scenarios and their causes. Defining control flow is essential in the concept design phase; therefore STAMP/STPA could be a useful tool to assess the safety of system candidates and to form part of the rationale for choosing a design as the baseline of the system. In this paper, we explain our case study of safety-guided concept design using STPA, the new hazard analysis technique, and a model-based specification technique on Crew Return Vehicle design, and we evaluate the benefits of using STAMP/STPA in the concept development phase.

  15. Optimization of the Divergent method for genotyping single nucleotide variations using SYBR Green-based single-tube real-time PCR.

    PubMed

    Gentilini, Fabio; Turba, Maria E

    2014-01-01

    A novel technique, called Divergent, for single-tube real-time PCR genotyping of point mutations without the use of fluorescently labeled probes has recently been reported. This novel PCR technique utilizes a set of four primers and a particular denaturation temperature for simultaneously amplifying two different amplicons which extend in opposite directions from the point mutation. The two amplicons can readily be detected using melt curve analysis downstream of a closed-tube real-time PCR. In the present study, some critical aspects of the original method were specifically addressed to further implement the technique for genotyping the DNM1 c.G767T mutation responsible for exercise-induced collapse in Labrador retriever dogs. The improved Divergent assay was easily set up using a standard two-step real-time PCR protocol. The melting temperature difference between the mutated and the wild-type amplicons was approximately 5°C, which could be promptly detected by all the thermal cyclers. The upgraded assay yielded accurate results with 157 pg of genomic DNA per reaction. This optimized technique represents a flexible and inexpensive alternative to the minor groove binder fluorescently labeled method and to high-resolution melt analysis for high-throughput, robust and cheap genotyping of single nucleotide variations. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
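
    The sketch below shows, in generic form, two ingredients named above: the weighted Tchebycheff aggregation that turns one weight vector into a single-objective subproblem, and a simulated-annealing acceptance test for a candidate solution. EMOSA's population handling and adaptive weight-vector update are omitted, and all numbers are illustrative.

        # Generic sketch of a weighted Tchebycheff subproblem and a simulated-
        # annealing acceptance rule; EMOSA's adaptive weight update is omitted.
        import numpy as np

        def tchebycheff(f, weights, z_star):
            """Scalarize objective vector f for one subproblem (minimization)."""
            return np.max(weights * np.abs(np.asarray(f) - np.asarray(z_star)))

        def sa_accept(curr_val, cand_val, temperature, rng):
            """Always accept improvements; accept uphill moves with Boltzmann probability."""
            if cand_val <= curr_val:
                return True
            return rng.random() < np.exp(-(cand_val - curr_val) / temperature)

        rng = np.random.default_rng(0)
        weights = np.array([0.4, 0.6])          # one search direction (weight vector)
        z_star = np.array([0.0, 0.0])           # ideal point
        current = tchebycheff([2.0, 3.0], weights, z_star)
        candidate = tchebycheff([2.5, 2.8], weights, z_star)
        print(current, candidate, sa_accept(current, candidate, 0.5, rng))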

  17. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498

  18. The aggregated unfitted finite element method for elliptic problems

    NASA Astrophysics Data System (ADS)

    Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.

    2018-07-01

    Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large-scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.

  19. Stabilization of Particle Discrimination Efficiencies for Neutron Spectrum Unfolding With Organic Scintillators

    NASA Astrophysics Data System (ADS)

    Lawrence, Chris C.; Polack, J. K.; Febbraro, Michael; Kolata, J. J.; Flaska, Marek; Pozzi, S. A.; Becchetti, F. D.

    2017-02-01

    The literature discussing pulse-shape discrimination (PSD) in organic scintillators dates back several decades. However, little has been written about PSD techniques that are optimized for neutron spectrum unfolding. Variation in n-γ misclassification rates and in γ/n ratio of incident fields can distort the neutron pulse-height response of scintillators and these distortions can in turn cause large errors in unfolded spectra. New applications in arms-control verification call for detection of lower-energy neutrons, for which PSD is particularly problematic. In this article, we propose techniques for removing distortions on pulse-height response that result from the merging of PSD distributions in the low-pulse-height region. These techniques take advantage of the repeatable shapes of PSD distributions that are governed by the counting statistics of scintillation-photon populations. We validate the proposed techniques using accelerator-based time-of-flight measurements and then demonstrate them by unfolding the Watt spectrum from measurement with a 252Cf neutron source.

  20. Computer Based Porosity Design by Multi Phase Topology Optimization

    NASA Astrophysics Data System (ADS)

    Burblies, Andreas; Busse, Matthias

    2008-02-01

    A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of optimization is to minimize the component's elastic energy. Conventional topology optimization methods which simulate adaptive bone mineralization have the disadvantage that there is a continuous change of mass caused by growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Computer-based porosity design structures can be manufactured by new Rapid Prototyping technologies. Fraunhofer IFAM has successfully applied 3D-Printing and Selective Laser Sintering methods in order to produce very stiff lightweight components with graded porosities calculated by MPTO.

  1. CORSS: Cylinder Optimization of Rings, Skin, and Stringers

    NASA Technical Reports Server (NTRS)

    Finckenor, J.; Rogers, P.; Otte, N.

    1994-01-01

    Launch vehicle designs typically make extensive use of cylindrical skin-stringer construction. Structural analysis methods are well developed for preliminary design of this type of construction. This report describes an automated, iterative method to obtain a minimum-weight preliminary design. Structural optimization has been researched extensively, and various programs have been written for this purpose. Their complexity and ease of use depend on their generality, the failure modes considered, the methodology used, and the rigor of the analysis performed. This computer program employs closed-form solutions from a variety of well-known structural analysis references and joins them with a commercially available numerical optimizer called the 'Design Optimization Tool' (DOT). Any ring- and stringer-stiffened shell structure of isotropic materials that has beam-type loading can be analyzed. Plasticity effects are not included. It performs a more limited analysis than programs such as PANDA, but it provides an easy and useful preliminary design tool for a large class of structures. This report briefly describes the optimization theory, outlines the development and use of the program, and describes the analysis techniques that are used. Examples of program input and output, as well as the listing of the analysis routines, are included.

  2. IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1994-01-01

    IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components, such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
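
    The fragment below is a hedged sketch of the kind of integer exploratory move described above: starting from a rounded feasible point, each variable is perturbed by plus or minus one (a unit neighborhood) and a move is kept only if it stays feasible and improves the objective. The objective, constraints and starting point are illustrative and are not taken from IESIP.

        # Hedged sketch of a Hooke-and-Jeeves-style exploratory move restricted
        # to integers and constraints, in the spirit of IESIP (not its code).
        import numpy as np

        def objective(x):                     # maximize 3*x0 + 2*x1 (illustrative)
            return 3 * x[0] + 2 * x[1]

        def feasible(x):                      # illustrative linear constraints
            return (x[0] >= 0 and x[1] >= 0
                    and 2 * x[0] + x[1] <= 10 and x[0] + 3 * x[1] <= 15)

        def exploratory_search(x0, max_iter=100):
            x = np.array(x0, dtype=int)
            for _ in range(max_iter):
                improved = False
                for i in range(len(x)):
                    for step in (+1, -1):     # unit neighborhood of variable i
                        cand = x.copy()
                        cand[i] += step
                        if feasible(cand) and objective(cand) > objective(x):
                            x, improved = cand, True
                            break
                if not improved:              # no unit move helps: integer local optimum
                    break
            return x, objective(x)

        # Start from a rounded continuous (e.g. simplex) solution, here chosen by hand.
        print(exploratory_search([3, 3]))     # -> (array([3, 4]), 17)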

  3. Differential prioritization between relevance and redundancy in correlation-based feature selection techniques for multiclass gene expression data.

    PubMed

    Ooi, Chia Huey; Chetty, Madhu; Teng, Shyh Wei

    2006-06-23

    Due to the large number of genes in a typical microarray dataset, feature selection looks set to play an important role in reducing noise and computational cost in gene expression-based tissue classification while improving accuracy at the same time. Surprisingly, this does not appear to be the case for all multiclass microarray datasets. The reason is that many feature selection techniques applied on microarray datasets are either rank-based and hence do not take into account correlations between genes, or are wrapper-based, which require high computational cost, and often yield difficult-to-reproduce results. In studies where correlations between genes are considered, attempts to establish the merit of the proposed techniques are hampered by evaluation procedures which are less than meticulous, resulting in overly optimistic estimates of accuracy. We present two realistically evaluated correlation-based feature selection techniques which incorporate, in addition to the two existing criteria involved in forming a predictor set (relevance and redundancy), a third criterion called the degree of differential prioritization (DDP). DDP functions as a parameter to strike the balance between relevance and redundancy, providing our techniques with the novel ability to differentially prioritize the optimization of relevance against redundancy (and vice versa). This ability proves useful in producing optimal classification accuracy while using reasonably small predictor set sizes for nine well-known multiclass microarray datasets. For multiclass microarray datasets, especially the GCM and NCI60 datasets, DDP enables our filter-based techniques to produce accuracies better than those reported in previous studies which employed similarly realistic evaluation procedures.
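
    One plausible way to express the DDP idea in code (the stand-in relevance and redundancy measures below, and the exact functional form, are assumptions and may differ from the paper) is to score a candidate predictor set by a relevance term raised to the power alpha and an antiredundancy term raised to the power 1 - alpha, so that alpha sets the priority between the two criteria.

        # Hedged sketch of a DDP-style predictor-set score: relevance and
        # antiredundancy are combined with an exponent that sets their priority.
        # The measures and functional form are stand-ins, not the paper's exact ones.
        import numpy as np

        def ddp_score(relevance, corr, subset, alpha=0.5):
            """Score predictor set 'subset'; alpha near 1 favours relevance,
            alpha near 0 favours low redundancy."""
            idx = np.asarray(subset)
            V = relevance[idx].mean()                       # relevance term
            R = np.abs(corr[np.ix_(idx, idx)])
            k = len(idx)
            U = 1.0 - (R.sum() - k) / (k * k - k + 1e-12)   # antiredundancy term
            return (V ** alpha) * (U ** (1.0 - alpha))

        gen = np.random.default_rng(0)
        relevance = gen.random(6)                           # e.g. per-gene F-scores
        corr = np.corrcoef(gen.standard_normal((6, 50)))    # gene-gene correlations
        print(ddp_score(relevance, corr, subset=[0, 2, 5], alpha=0.7))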

  4. FusionArc optimization: a hybrid volumetric modulated arc therapy (VMAT) and intensity modulated radiation therapy (IMRT) planning strategy.

    PubMed

    Matuszak, Martha M; Steers, Jennifer M; Long, Troy; McShan, Daniel L; Fraass, Benedick A; Romeijn, H Edwin; Ten Haken, Randall K

    2013-07-01

    To introduce a hybrid volumetric modulated arc therapy/intensity modulated radiation therapy (VMAT/IMRT) optimization strategy called FusionArc that combines the delivery efficiency of single-arc VMAT with the potentially desirable intensity modulation possible with IMRT. A beamlet-based inverse planning system was enhanced to combine the advantages of VMAT and IMRT into one comprehensive technique. In the hybrid strategy, baseline single-arc VMAT plans are optimized and then the current cost function gradients with respect to the beamlets are used to define a metric for predicting which beam angles would benefit from further intensity modulation. Beams with the highest metric values (called the gradient factor) are converted from VMAT apertures to IMRT fluence, and the optimization proceeds with the mixed variable set until convergence or until additional beams are selected for conversion. One phantom and two clinical cases were used to validate the gradient factor and characterize the FusionArc strategy. Comparisons were made between standard IMRT, single-arc VMAT, and FusionArc plans with one to five IMRT∕hybrid beams. The gradient factor was found to be highly predictive of the VMAT angles that would benefit plan quality the most from beam modulation. Over the three cases studied, a FusionArc plan with three converted beams achieved superior dosimetric quality, with reductions in final cost ranging from 26.4% to 48.1% compared to single-arc VMAT. Additionally, the three-beam FusionArc plans required 22.4%-43.7% fewer MU∕Gy than a seven-beam IMRT plan. While the FusionArc plans with five converted beams offer larger reductions in final cost (32.9%-55.2% compared to single-arc VMAT), their decrease in MU∕Gy relative to IMRT was noticeably smaller, at 12.2%-18.5%. A hybrid VMAT∕IMRT strategy was implemented to find a high-quality compromise between gantry-angle and intensity-based degrees of freedom. This optimization method will allow patients to be simultaneously planned for dosimetric quality and delivery efficiency without switching between delivery techniques. Example phantom and clinical cases suggest that the conversion of only three VMAT segments to modulated beams may result in a good combination of quality and efficiency.

  5. Experimental Investigation and Optimization of TIG Welding Parameters on Aluminum 6061 Alloy Using Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Rishi; Mevada, N. Ramesh; Rathore, Santosh; Agarwal, Nitin; Rajput, Vinod; Sinh Barad, AjayPal

    2017-08-01

    To improve the welding quality of aluminum (Al) plate, a TIG welding system was prepared in which the welding current, shielding gas flow rate, and current polarity can be controlled during the welding process. In the present work, an attempt has been made to study the effect of welding current, current polarity, and shielding gas flow rate on the tensile strength of the weld joint. Based on the number of parameters and their levels, the Response Surface Methodology technique was selected as the design of experiments. To understand the influence of the input parameters on the ultimate tensile strength of the weldment, an ANOVA analysis was carried out. The TIG welding process is also described and optimized using a new nature-inspired metaheuristic called the firefly algorithm, developed by Dr. Xin-She Yang at Cambridge University in 2007. A general formulation of the firefly algorithm is presented, together with an analytical mathematical model for optimizing the TIG welding process through a single equivalent objective function.
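
    The sketch below is a generic firefly algorithm minimizing a stand-in objective; in the study the objective would be built from the response-surface model of ultimate tensile strength as a function of welding current, gas flow rate and polarity, which is not reproduced here, and the parameter values (beta0, gamma, alpha) are illustrative.

        # Generic firefly algorithm sketch (minimization).  The objective is a
        # stand-in for the RSM model of ultimate tensile strength.
        import numpy as np

        def objective(x):                              # placeholder to minimize
            return np.sum((x - 2.0) ** 2)

        def firefly(dim=3, n=25, iters=200, beta0=1.0, gamma=0.01, alpha=0.2, seed=3):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-5, 5, (n, dim))           # firefly positions
            F = np.array([objective(x) for x in X])    # lower objective = brighter
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if F[j] < F[i]:                # j is brighter: i moves toward j
                            r2 = np.sum((X[i] - X[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                            F[i] = objective(X[i])
                alpha *= 0.98                          # gradually damp the random walk
            best = np.argmin(F)
            return X[best], F[best]

        print(firefly())   # best position found; it should lie near (2, 2, 2)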

  6. An inequality for detecting financial fraud, derived from the Markowitz Optimal Portfolio Theory

    NASA Astrophysics Data System (ADS)

    Bard, Gregory V.

    2016-12-01

    The Markowitz Optimal Portfolio Theory, published in 1952, is well known and was often taught because it blends Lagrange Multipliers, matrices, statistics, and mathematical finance. However, the theory faded from prominence in American investing as Business departments at US universities shifted from techniques based on mathematics, finance, and statistics to focus instead on leadership, public speaking, interpersonal skills, advertising, etc. The author proposes a new application of Markowitz's Theory: the detection of a fairly broad category of financial fraud (called "Ponzi schemes" in American newspapers) by looking at a particular inequality derived from the Markowitz Optimal Portfolio Theory, relating volatility and expected rate of return. For example, one recent Ponzi scheme was that of Bernard Madoff, uncovered in December 2008, which comprised fraud totaling 64,800,000,000 US dollars [23]. The objective is to compare investments with the "efficient frontier" as predicted by Markowitz's theory. Violations of the inequality should be impossible in theory; therefore, in practice, violations might indicate fraud.
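
    Although the exact inequality is not reproduced in this record, one commonly taught consequence of the Markowitz framework (stated here as an illustrative assumption, not necessarily the precise form derived in the paper) bounds a portfolio's expected excess return by the market's excess return scaled by relative volatility:

        \mu_P - r_f \;\le\; \frac{\sigma_P}{\sigma_M}\,\bigl(\mu_M - r_f\bigr)

    where \mu_P and \sigma_P are the fund's reported expected return and volatility, \mu_M and \sigma_M those of the market portfolio, and r_f the risk-free rate. A fund whose reported returns sit persistently above this bound for its stated volatility, as Madoff's did, warrants suspicion.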

  7. User's manual for the BNW-I optimization code for dry-cooled power plants. [AMCIRC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.

    1977-01-01

    This appendix provides a listing, called Program AMCIRC, of the BNW-1 optimization code for determining, for a particular size power plant, the optimum dry cooling tower design using ammonia flow in the heat exchanger tubes. The optimum design is determined by repeating the design of the cooling system over a range of design conditions in order to find the cooling system with the smallest incremental cost. This is accomplished by varying five parameters of the plant and cooling system over ranges of values. These parameters are varied systematically according to techniques that perform pattern and gradient searches. The dry cooling system optimized by program AMCIRC is composed of a condenser/reboiler (condensation of steam and boiling of ammonia), piping system (transports ammonia vapor out and ammonia liquid from the dry cooling towers), and circular tower system (vertical one-pass heat exchangers situated in circular configurations with cocurrent ammonia flow in the tubes of the heat exchanger). (LCL)

  8. Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm

    PubMed Central

    Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed

    2008-01-01

    Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
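
    As a hedged illustration of the calibration step (the sinusoidal pin-trace model below is a simplified stand-in, not the actual VRX geometric model), the rotating-pin fit can be set up with SciPy's Nelder-Mead ("Amoeba") simplex minimizer:

        # Hedged sketch of fitting a parametric pin-trace curve to sinogram data
        # with the Nelder-Mead ("Amoeba") simplex method.  The sinusoidal model
        # is a simplified stand-in for the actual VRX geometric model.
        import numpy as np
        from scipy.optimize import minimize

        def pin_trace(angles, params):
            """Detector position of the pin versus rotation angle (toy model)."""
            amplitude, phase, offset = params
            return amplitude * np.sin(angles + phase) + offset

        def fit_error(params, angles, measured):
            return np.sum((pin_trace(angles, params) - measured) ** 2)

        # Synthetic "measured" sinogram of a rotating pin.
        angles = np.linspace(0, 2 * np.pi, 360)
        true_params = (80.0, 0.3, 256.0)
        measured = pin_trace(angles, true_params) + np.random.normal(0, 0.5, angles.size)

        result = minimize(fit_error, x0=[60.0, 0.0, 240.0], args=(angles, measured),
                          method="Nelder-Mead")
        print(result.x)   # recovered (amplitude, phase, offset) close to true_params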

  9. Determination of calibration parameters of a VRX CT system using an "Amoeba" algorithm.

    PubMed

    Jordan, Lawrence M; Dibianca, Frank A; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M Waleed

    2004-01-01

    Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown.

  10. Time-Varying Delay Estimation Applied to the Surface Electromyography Signals Using the Parametric Approach

    NASA Astrophysics Data System (ADS)

    Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier

    Muscle Fiber Conduction Velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. In order to take into account the non-stationarity of the data during dynamic contraction (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays (TVD), and that the data are non-stationary (changing power spectral density, PSD). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated using a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and a stochastic optimization technique called simulated annealing (SA). The performance of the two techniques is also compared. We also derive two appropriate Cramer-Rao lower bounds (CRLB), one for the estimated TVD model parameters and one for the TVD waveforms. Monte-Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.

  11. Selection of actuator locations for static shape control of large space structures by heuristic integer programing

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Adelman, H. M.

    1984-01-01

    Orbiting spacecraft such as large space antennas have to maintain a highly accurate shape to operate satisfactorily. Such structures require active and passive controls to maintain an accurate shape under a variety of disturbances. Methods for the optimum placement of control actuators for correcting static deformations are described. In particular, attention is focused on the case where control locations have to be selected from a large set of available sites, so that integer programing methods are called for. The effectiveness of three heuristic techniques for obtaining a near-optimal site selection is compared. In addition, efficient reanalysis techniques for the rapid assessment of control effectiveness are presented. Two examples are used to demonstrate the methods: a simple beam structure and a 55-m space-truss parabolic antenna.

  12. Digital correlation detector for low-cost Omega navigation

    NASA Technical Reports Server (NTRS)

    Chamberlin, K. A.

    1976-01-01

    Techniques to lower the cost of using the Omega global navigation network with phase-locked loops (PLL) were developed. The technique that was accepted as being "optimal" is called the memory-aided phase-locked loop (MAPLL) since it allows operation on all eight Omega time slots with one PLL through the implementation of a random access memory. The receiver front-end and the signals that it transmits to the PLL were first described. A brief statistical analysis of these signals was then made to allow a rough comparison between the front-end presented in this work and a commercially available front-end to be made. The hardware and theory of application of the MAPLL were described, ending with an analysis of data taken with the MAPLL. Some conclusions and recommendations were also given.

  13. Live-cell imaging of budding yeast telomerase RNA and TERRA.

    PubMed

    Laprade, Hadrien; Lalonde, Maxime; Guérit, David; Chartrand, Pascal

    2017-02-01

    In most eukaryotes, the ribonucleoprotein complex telomerase is responsible for maintaining telomere length. In recent years, single-cell microscopy techniques such as fluorescent in situ hybridization and live-cell imaging have been developed to image the RNA subunit of the telomerase holoenzyme. These techniques are now becoming important tools for the study of telomerase biogenesis, its association with telomeres and its regulation. Here, we present detailed protocols for live-cell imaging of the Saccharomyces cerevisiae telomerase RNA subunit, called TLC1, and also of the non-coding telomeric repeat-containing RNA TERRA. We describe the approach used for genomic integration of MS2 stem-loops in these transcripts, and provide information for optimal live-cell imaging of these non-coding RNAs. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.

    PubMed

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.

  15. Predicting Flood in Perlis Using Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Nadia Sabri, Syaidatul; Saian, Rizauddin

    2017-06-01

    Flood forecasting is widely studied in order to reduce the effects of flood such as loss of property, loss of life and contamination of water supply. Usually flood occurs due to continuous heavy rainfall. This study used a variant of the Ant Colony Optimization (ACO) algorithm named Ant-Miner to develop a classification prediction model to predict flood. However, since Ant-Miner only accepts discrete data and rainfall data form a time series, a pre-processing step is needed to discretize the rainfall data first. This study used a technique called Symbolic Aggregate Approximation (SAX) to convert the rainfall time series data into discrete data. In addition, the Simple K-Means algorithm was used to cluster the data produced by SAX. The findings show that the predictive accuracy of the classification prediction model is more than 80%.
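
    As a hedged illustration of the SAX step (the alphabet size, segment count and data below are illustrative; the study's settings are not given here), a rainfall series can be discretized by z-normalizing it, averaging over equal-width segments (piecewise aggregate approximation), and mapping each segment mean to a symbol through Gaussian breakpoints:

        # Hedged sketch of Symbolic Aggregate Approximation (SAX) for a rainfall
        # series.  Alphabet size, segment count and data are illustrative choices.
        import numpy as np

        BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])   # equiprobable regions, alphabet size 4
        ALPHABET = "abcd"

        def sax(series, n_segments=8):
            x = np.asarray(series, dtype=float)
            x = (x - x.mean()) / (x.std() + 1e-12)       # z-normalize
            segments = np.array_split(x, n_segments)     # piecewise aggregate approximation
            paa = np.array([seg.mean() for seg in segments])
            symbols = np.digitize(paa, BREAKPOINTS)      # map segment means to symbols
            return "".join(ALPHABET[s] for s in symbols)

        daily_rainfall = [0, 0, 2, 14, 30, 41, 12, 3, 0, 0, 1, 0, 5, 22, 37, 18]
        print(sax(daily_rainfall))                       # an 8-symbol word over {a, b, c, d}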

  16. Direct aperture optimization: a turnkey solution for step-and-shoot IMRT.

    PubMed

    Shepard, D M; Earl, M A; Li, X A; Naqvi, S; Yu, C

    2002-06-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.

  17. Inverse-optimized 3D conformal planning: Minimizing complexity while achieving equivalence with beamlet IMRT in multiple clinical sites

    PubMed Central

    Fraass, Benedick A.; Steers, Jennifer M.; Matuszak, Martha M.; McShan, Daniel L.

    2012-01-01

    Purpose: Inverse planned intensity modulated radiation therapy (IMRT) has helped many centers implement highly conformal treatment planning with beamlet-based techniques. The many comparisons between IMRT and 3D conformal (3DCRT) plans, however, have been limited because most 3DCRT plans are forward-planned while IMRT plans utilize inverse planning, meaning both optimization and delivery techniques are different. This work avoids that problem by comparing 3D plans generated with a unique inverse planning method for 3DCRT called inverse-optimized 3D (IO-3D) conformal planning. Since IO-3D and the beamlet IMRT to which it is compared use the same optimization techniques, cost functions, and plan evaluation tools, direct comparisons between IMRT and simple, optimized IO-3D plans are possible. Though IO-3D has some similarity to direct aperture optimization (DAO), since it directly optimizes the apertures used, IO-3D is specifically designed for 3DCRT fields (i.e., 1–2 apertures per beam) rather than starting with IMRT-like modulation and then optimizing aperture shapes. The two algorithms are very different in design, implementation, and use. The goals of this work include using IO-3D to evaluate how close simple but optimized IO-3D plans come to nonconstrained beamlet IMRT, showing that optimization, rather than modulation, may be the most important aspect of IMRT (for some sites). Methods: The IO-3D dose calculation and optimization functionality is integrated in the in-house 3D planning/optimization system. New features include random point dose calculation distributions, costlet and cost function capabilities, fast dose volume histogram (DVH) and plan evaluation tools, optimization search strategies designed for IO-3D, and an improved, reimplemented edge/octree calculation algorithm. The IO-3D optimization, in distinction to DAO, is designed to optimize 3D conformal plans (one to two segments per beam) and optimizes MLC segment shapes and weights with various user-controllable search strategies which optimize plans without beamlet or pencil beam approximations. IO-3D allows comparisons of beamlet, multisegment, and conformal plans optimized using the same cost functions, dose points, and plan evaluation metrics, so quantitative comparisons are straightforward. Here, comparisons of IO-3D and beamlet IMRT techniques are presented for breast, brain, liver, and lung plans. Results: IO-3D achieves high quality results comparable to beamlet IMRT, for many situations. Though the IO-3D plans have many fewer degrees of freedom for the optimization, this work finds that IO-3D plans with only one to two segments per beam are dosimetrically equivalent (or nearly so) to the beamlet IMRT plans, for several sites. IO-3D also reduces plan complexity significantly. Here, monitor units per fraction (MU/Fx) for IO-3D plans were 22%–68% less than those for the 1 cm × 1 cm beamlet IMRT plans and 72%–84% less than those for the 0.5 cm × 0.5 cm beamlet IMRT plans. Conclusions: The unique IO-3D algorithm illustrates that inverse planning can achieve high quality 3D conformal plans equivalent (or nearly so) to unconstrained beamlet IMRT plans, for many sites. IO-3D thus provides the potential to optimize flat or few-segment 3DCRT plans, creating less complex optimized plans which are efficient and simple to deliver. The less complex IO-3D plans have operational advantages for scenarios including adaptive replanning, cases with interfraction and intrafraction motion, and pediatric patients. PMID:22755717

  18. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.

  19. Support vector machine multiuser receiver for DS-CDMA signals in multipath channels.

    PubMed

    Chen, S; Samingan, A K; Hanzo, L

    2001-01-01

    The problem of constructing an adaptive multiuser detector (MUD) is considered for direct sequence code division multiple access (DS-CDMA) signals transmitted through multipath channels. The emerging learning technique, called support vector machines (SVM), is proposed as a method of obtaining a nonlinear MUD from a relatively small training data block. Computer simulation is used to study this SVM MUD, and the results show that it can closely match the performance of the optimal Bayesian one-shot detector. Comparisons with an adaptive radial basis function (RBF) MUD trained by an unsupervised clustering algorithm are discussed.
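
    A minimal sketch of the idea, using scikit-learn's RBF-kernel SVM on a toy synthetic two-user channel (the spreading codes, channel model, noise level and block sizes are illustrative choices, not those of the paper):

        # Hedged sketch: an RBF-kernel SVM trained to recover user 1's bit from
        # the chip samples of a toy synchronous two-user CDMA channel.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        codes = np.array([[+1, +1, -1, -1],                 # user 1 spreading code
                          [+1, -1, +1, -1]], dtype=float)   # user 2 spreading code

        def make_block(n_symbols, noise_sigma=0.4):
            bits = rng.choice([-1.0, 1.0], size=(n_symbols, 2))   # both users' bits
            received = bits @ codes + noise_sigma * rng.standard_normal((n_symbols, 4))
            return received, bits[:, 0]                     # detect user 1 only

        X_train, y_train = make_block(300)                  # small training block
        X_test, y_test = make_block(2000)

        mud = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
        print("bit error rate:", np.mean(mud.predict(X_test) != y_test))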

  20. Hierarchical Poly Tree Configurations for the Solution of Dynamically Refined Finite Element Models

    NASA Technical Reports Server (NTRS)

    Gute, G. D.; Padovan, J.

    1993-01-01

    This paper demonstrates how a multilevel substructuring technique, called the Hierarchical Poly Tree (HPT), can be used to integrate a localized mesh refinement into the original finite element model more efficiently. The optimal HPT configurations for solving isoparametrically square h-, p-, and hp-extensions on single and multiprocessor computers are derived. In addition, the reduced number of stiffness matrix elements that must be stored when employing this type of solution strategy is quantified. Moreover, the HPT inherently provides localized 'error-trapping' and a logical, efficient means with which to isolate physically anomalous and analytically singular behavior.

  1. Processing the Bouguer anomaly map of Biga and the surrounding area by the cellular neural network: application to the southwestern Marmara region

    NASA Astrophysics Data System (ADS)

    Aydogan, D.

    2007-04-01

    An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template. CNN can also be considered to be a nonlinear convolution of these matrices. This template describes the strength of the nearest neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used to optimize the cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was compared with classical derivative techniques by applying the cross-correlation method (CC) to the same anomaly map, as this latter approach can detect some features that are difficult to identify on the Bouguer anomaly maps. This approach was then applied to the Bouguer anomaly map of Biga and its surrounding area, in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that the geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. This approach provides quantitative solutions based on just a few assumptions, which makes the method more powerful than the classical methods.
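
    The following sketch shows one conventional way to iterate a cellular neural network with a 3x3 cloning template (A, B, I), treated as the nonlinear convolution described above; the template values and the toy two-zone "anomaly map" are assumptions for illustration, not the RPLA-optimized templates of the study.

      # Hedged sketch: discretized CNN state equation dx/dt = -x + A*y + B*u + I
      # iterated with an illustrative edge-extraction-style template.
      import numpy as np
      from scipy.signal import convolve2d

      def cnn_run(u, A, B, I, steps=50, dt=0.1):
          x = u.copy()                                      # initial state = input image
          for _ in range(steps):
              y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0)) # piecewise-linear output
              x = x + dt * (-x
                            + convolve2d(y, A, mode="same", boundary="symm")
                            + convolve2d(u, B, mode="same", boundary="symm")
                            + I)
          return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

      A = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
      B = np.array([[-1.0, -1.0, -1.0], [-1.0, 8.0, -1.0], [-1.0, -1.0, -1.0]])
      I = -0.5
      anomaly = np.zeros((64, 64)); anomaly[:, 32:] = 1.0   # two toy "geologic zones"
      edges = cnn_run(2.0 * anomaly - 1.0, A, B, I)         # boundary shows up as high output
      print("output range:", edges.min(), edges.max())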

  2. An inexpensive active optical remote sensing instrument for assessing aerosol distributions.

    PubMed

    Barnes, John E; Sharma, Nimmi C P

    2012-02-01

    Air quality studies on a broad variety of topics, from health impacts to source/sink analyses, require information on the distributions of atmospheric aerosols over both altitude and time. An inexpensive, simple-to-implement, ground-based optical remote sensing technique has been developed to assess aerosol distributions. The technique, called CLidar (Charge Coupled Device Camera Light Detection and Ranging), provides aerosol altitude profiles over time. In the CLidar technique a relatively low-power laser transmits light vertically into the atmosphere. The transmitted laser light scatters off of air molecules, clouds, and aerosols. The entire beam from ground to zenith is imaged using a CCD camera and wide-angle (100 degree) optics which are a few hundred meters from the laser. The CLidar technique is optimized for low altitude (boundary layer and lower troposphere) measurements where most aerosols are found and where many other profiling techniques face difficulties. Currently the technique is limited to nighttime measurements. Using the CLidar technique, aerosols may be mapped over both altitude and time. The instrumentation required is portable and can easily be moved to locations of interest (e.g. downwind from factories or power plants, near highways). This paper describes the CLidar technique, its implementation, and data analysis, and offers specifics for users wishing to apply the technique for aerosol profiles.

  3. Dynamic Multiple-Threshold Call Admission Control Based on Optimized Genetic Algorithm in Wireless/Mobile Networks

    NASA Astrophysics Data System (ADS)

    Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin

    Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class service in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which provides different priorities between different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To reduce the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components, such as encoding, population initialization, the fitness function, and mutation, are all optimized in terms of the traits of the CAC problem. The simulation demonstrates that the proposed CAC scheme outperforms similar schemes, confirming that the optimization is effective. Finally, the simulation shows the efficiency of OGA.

  4. OTG-snpcaller: An Optimized Pipeline Based on TMAP and GATK for SNP Calling from Ion Torrent Data

    PubMed Central

    Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y. Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun

    2014-01-01

    Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technology’s Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences. PMID:24824529

  5. OTG-snpcaller: an optimized pipeline based on TMAP and GATK for SNP calling from ion torrent data.

    PubMed

    Zhu, Pengyuan; He, Lingyu; Li, Yaqiao; Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun

    2014-01-01

    Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technology's Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences.

  6. Optimization Based Data Mining Approach for Forecasting Real-Time Energy Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omitaomu, Olufemi A; Li, Xueping; Zhou, Shengchao

    The worldwide concern over environmental degradation, increasing pressure on electric utility companies to meet peak energy demand, and the requirement to avoid purchasing power from the real-time energy market are motivating the utility companies to explore new approaches for forecasting energy demand. Until now, most approaches for forecasting energy demand rely on monthly electrical consumption data. The emergence of smart meter data is changing the data space for electric utility companies, and creating opportunities for utility companies to collect and analyze energy consumption data at a much finer temporal resolution of at least 15-minute intervals. While the data granularity provided by smart meters is important, there are still other challenges in forecasting energy demand; these challenges include lack of information about appliance usage and occupant behavior. Consequently, in this paper, we develop an optimization-based data mining approach for forecasting real-time energy demand using smart meter data. The objective of our approach is to develop a robust estimation of energy demand without access to these other building and behavior data. Specifically, the forecasting problem is formulated as a quadratic programming problem and solved using the so-called support vector machine (SVM) technique in an online setting. The parameters of the SVM technique are optimized using a simulated annealing approach. The proposed approach is applied to hourly smart meter data for several residential customers over several days.
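
    A minimal sketch of the overall recipe, support vector regression whose hyper-parameters are tuned by a simple simulated annealing loop, is given below; the synthetic hourly load series, the parameter ranges, and the cooling schedule are assumptions for illustration and not the paper's formulation.

      # Hedged sketch: SVM regression forecaster with hyper-parameters tuned
      # by a small simulated annealing loop over (C, gamma) in log-space.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      hours = np.arange(24 * 30, dtype=float)              # a month of hourly samples
      load = 5.0 + 2.0 * np.sin(2 * np.pi * hours / 24) + 0.3 * rng.standard_normal(hours.size)
      X = np.column_stack([np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24)])
      X_tr, X_va, y_tr, y_va = train_test_split(X, load, test_size=0.25, random_state=0)

      def validation_error(params):
          C, gamma = params
          model = SVR(C=C, gamma=gamma).fit(X_tr, y_tr)
          return np.mean((model.predict(X_va) - y_va) ** 2)

      current = np.array([0.0, 0.0])                       # log10(C), log10(gamma)
      current_err = validation_error(10.0 ** current)
      best, best_err, temp = current.copy(), current_err, 1.0
      for step in range(100):
          candidate = current + rng.normal(scale=0.3, size=2)
          err = validation_error(10.0 ** candidate)
          if err < current_err or rng.random() < np.exp((current_err - err) / temp):
              current, current_err = candidate, err        # accept move
              if err < best_err:
                  best, best_err = candidate.copy(), err
          temp *= 0.97                                     # cooling schedule
      print("best (C, gamma):", 10.0 ** best, "validation MSE:", best_err)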

  7. JIGSAW: Joint Inhomogeneity estimation via Global Segment Assembly for Water-fat separation.

    PubMed

    Lu, Wenmiao; Lu, Yi

    2011-07-01

    Water-fat separation in magnetic resonance imaging (MRI) is of great clinical importance, and the key to uniform water-fat separation lies in field map estimation. This work deals with three-point field map estimation, in which water and fat are modelled as two single-peak spectral lines, and field inhomogeneities shift the spectrum by an unknown amount. Due to the simplified spectrum modelling, there exists inherent ambiguity in forming field maps from multiple locally feasible field map values at each pixel. To resolve such ambiguity, spatial smoothness of field maps has been incorporated as a constraint of an optimization problem. However, there are two issues: the optimization problem is computationally intractable and even when it is solved exactly, it does not always separate water and fat images. Hence, robust field map estimation remains challenging in many clinically important imaging scenarios. This paper proposes a novel field map estimation technique called JIGSAW. It extends a loopy belief propagation (BP) algorithm to obtain an approximate solution to the optimization problem. The solution produces locally smooth segments and avoids error propagation associated with greedy methods. The locally smooth segments are then assembled into a globally consistent field map by exploiting the periodicity of the feasible field map values. In vivo results demonstrate that JIGSAW outperforms existing techniques and produces correct water-fat separation in challenging imaging scenarios.

  8. Space Reclamation for Uncoordinated Checkpointing in Message-Passing Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min

    1993-01-01

    Checkpointing and rollback recovery are techniques that can provide efficient recovery from transient process failures. In a message-passing system, the rollback of a message sender may cause the rollback of the corresponding receiver, and the system needs to roll back to a consistent set of checkpoints called the recovery line. If the processes are allowed to take uncoordinated checkpoints, the above rollback propagation may result in the domino effect, which prevents recovery line progression. Traditionally, only obsolete checkpoints before the global recovery line can be discarded, and the necessary and sufficient condition for identifying all garbage checkpoints has remained an open problem. A necessary and sufficient condition for achieving optimal garbage collection is derived and it is proved that the number of useful checkpoints is bounded by N(N+1)/2, where N is the number of processes. The approach is based on the maximum-sized antichain model of consistent global checkpoints and the technique of recovery line transformation and decomposition. It is also shown that, for systems requiring message logging to record in-transit messages, the same approach can be used to achieve optimal message log reclamation. As a final topic, a unifying framework is described by considering checkpoint coordination and exploiting piecewise determinism as mechanisms for bounding rollback propagation, and the applicability of the optimal garbage collection algorithm to domino-free recovery protocols is demonstrated.

  9. Challenges of CAC in Heterogeneous Wireless Cognitive Networks

    NASA Astrophysics Data System (ADS)

    Wang, Jiazheng; Fu, Xiuhua

    Call admission control (CAC) is known as an effective functionality for ensuring the QoS of wireless networks. The vision of next generation wireless networks has led to the development of new call admission control (CAC) algorithms specifically designed for heterogeneous wireless cognitive networks. However, there will be a number of challenges created by the dynamic spectrum access and scheduling techniques associated with cognitive systems. In this paper, for the first time, we recommend that CAC policies should distinguish between primary users and secondary users. A classification of the different types of CAC policies in cognitive network contexts is proposed. Although there has been some research under the umbrella of joint CAC and cross-layer optimization for wireless networks, the advent of cognitive networks adds additional problems. We present conceptual models for joint CAC and cross-layer optimization, respectively. Also, the benefit of cognition can only be realized fully if application requirements and traffic flow contexts are determined or inferred in order to know what modes of operation and spectrum bands to use at each point in time. A process model of cognition involving per-flow-based CAC is presented. Because there may be a number of parameters on different levels affecting a CAC decision, and the conditions for accepting or rejecting a call must be computed quickly and frequently, simplicity and practicability are particularly important for designing a feasible CAC algorithm. In summary, a more thorough understanding of CAC in heterogeneous wireless cognitive networks may help one to design better CAC algorithms.

  10. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.

  12. SLC: The End Game

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raimondi, Pantaleo

    The design of the Stanford Linear Collider (SLC) called for a beam intensity far beyond what was practically achievable. This was due to intrinsic limitations in many subsystems and to a lack of understanding of the new physics of linear colliders. Real progress in improving the SLC performance came from precision, non-invasive diagnostics to measure and monitor the beams and from new techniques to control the emittance dilution and optimize the beams. A major contribution to the success of the last 1997-98 SLC run came from several innovative ideas for improving the performance of the Final Focus (FF). This paper describes some of the problems encountered and techniques used to overcome them. Building on the SLC experience, we will also present a new approach to the FF design for future high energy linear colliders.

  13. Three Dimensional Reconstruction Workflows for Lost Cultural Heritage Monuments Exploiting Public Domain and Professional Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Wahbeh, W.; Nebiker, S.

    2017-08-01

    In our paper, we document experiments and results of image-based 3D reconstructions of famous heritage monuments which were recently damaged or completely destroyed by the so-called Islamic state in Syria and Iraq. The specific focus of our research is on the combined use of professional photogrammetric imagery and of publicly available imagery from the web for optimal 3D reconstruction of those monuments. The investigated photogrammetric reconstruction techniques include automated bundle adjustment and dense multi-view 3D reconstruction using public domain and professional imagery on the one hand, and interactive polygonal modelling based on projected panoramas on the other. Our investigations show that the combination of these two image-based modelling techniques delivers better results in terms of model completeness, level of detail and appearance.

  14. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  15. Design Optimization of Systems Governed by Partial Differential Equations. Phase 1

    DTIC Science & Technology

    1989-03-01

    DIFFERENTIAL EQUATIONS" SUBMITTED TO: AIR FORCE OFFICE OF SCIENTIFIC RESEARCH AFOSR/NM ATTN: Major James Crowley BUILDING 410, ROOM 209 BOLLING AFB, DC 20332...of his algorithms called DELIGHT. We consider this work to be of signal importance for the future of all engineering design optimization. Prof...to be set up in a subroutine, which would be called by the optimization code. We then intended to pursue a slow and orderly progression of the problem

  16. Innovative model-based flow rate optimization for vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    König, S.; Suriyah, M. R.; Leibfried, T.

    2016-11-01

    In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operation points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The obtained results show that the efficiency is increased by up to 1.2 percentage points; in addition, discharge capacity is also increased by up to 1.0 kWh, or 5.4%. Detailed loss analysis is carried out for the cycles with maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of losses caused by concentration over-potential, pumping and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, to decrease stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.
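
    The conventional baseline mentioned above can be written down directly: Faraday's first law gives the minimum electrolyte flow that supplies the reacting species, and the applied flow is that value scaled by a constant flow factor. The sketch below assumes illustrative stack and electrolyte figures, not the paper's 6-kW system.

      # Hedged sketch: the "flow factor" baseline, i.e. Faraday's first law
      # scaled by a constant multiplier (all numbers are illustrative).
      F = 96485.0          # Faraday constant, C/mol

      def stoichiometric_flow(current_a, n_cells, conc_mol_per_l, soc, z=1):
          """Minimum electrolyte flow (L/s) that just supplies the reacting species.

          During discharge the reactant concentration available in the tank is
          roughly conc * soc (and conc * (1 - soc) during charge), so the required
          flow grows as the usable concentration shrinks.
          """
          usable_conc = conc_mol_per_l * soc            # mol/L of reactant left
          return current_a * n_cells / (z * F * usable_conc)

      flow_factor = 7.0                                  # assumed constant multiplier
      q = flow_factor * stoichiometric_flow(current_a=100.0, n_cells=40,
                                            conc_mol_per_l=1.6, soc=0.3)
      print(f"applied flow rate: {q * 1000:.1f} mL/s")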

  17. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, which is a leading cause of cost model brittleness or instability.

  18. Parallel Evolutionary Optimization for Neuromorphic Network Training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D; Disney, Adam; Singh, Susheela

    One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the U.S.'s largest open science supercomputer, and BOB, a Beowulf-style cluster of Raspberry Pis. We also focus on how to improve the EO by evaluating commonality in higher performing neural networks, and present the result of a study that evaluates the EO performed by Titan.

  19. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.

  20. Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation [Invited].

    PubMed

    Yoon, Ki-Hyuk; Kang, Min-Koo; Lee, Hwasun; Kim, Sung-Kyu

    2018-01-01

    We study optical technologies for viewer-tracked autostereoscopic 3D display (VTA3D), which provides improved 3D image quality and extended viewing range. In particular, we utilize a technique, the so-called dynamic fusion of viewing zone (DFVZ), for each 3D optical line to realize image quality equivalent to that achievable at the optimal viewing distance, even when a viewer is moving in the depth direction. In addition, we examine quantitative properties of viewing zones provided by the VTA3D system that adopted DFVZ, revealing that the optimal viewing zone can be formed at the viewer position. Last, we show that the comfort zone is extended due to DFVZ. This is demonstrated by a viewer's subjective evaluation of the 3D display system that employs both multiview autostereoscopic 3D display and DFVZ.

  1. Detection and Length Estimation of Linear Scratch on Solid Surfaces Using an Angle Constrained Ant Colony Technique

    NASA Astrophysics Data System (ADS)

    Pal, Siddharth; Basak, Aniruddha; Das, Swagatam

    In many manufacturing areas the detection of surface defects is one of the most important processes in quality control. Currently, in order to detect small scratches on solid surfaces, most industries working on material manufacturing rely primarily on visual inspection. In this article we propose a hybrid computational intelligence technique to automatically detect a linear scratch on a solid surface and estimate its length (in pixel units) simultaneously. The approach is based on a swarm intelligence algorithm called Ant Colony Optimization (ACO) and image preprocessing with Wiener and Sobel filters as well as the Canny edge detector. The ACO algorithm is mostly used to compensate for the broken parts of the scratch. Our experimental results confirm that the proposed technique can be used for detecting scratches in noisy and degraded images, even when it is very difficult for conventional image processing to distinguish the scratch area from its background.
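
    The preprocessing front end named above (Wiener denoising, Sobel gradients, Canny edge detection) can be sketched as follows on a synthetic scratch image; the filter settings are assumptions, and the angle-constrained ACO gap-bridging stage itself is only indicated in a comment.

      # Hedged sketch: Wiener + Sobel + Canny preprocessing on a synthetic
      # scratch image; the ACO stage that bridges broken segments is omitted.
      import numpy as np
      from scipy.signal import wiener
      from skimage.filters import sobel
      from skimage.feature import canny

      rng = np.random.default_rng(3)
      img = 0.1 * rng.standard_normal((128, 128))        # noisy solid surface
      rr = np.arange(20, 100)
      img[rr, rr] += 1.0                                 # a faint diagonal scratch
      img[55:65, 55:65] -= 1.0                           # simulate a broken segment

      denoised = wiener(img, mysize=5)                   # suppress sensor noise
      gradients = sobel(denoised)                        # highlight intensity edges
      edges = canny(denoised, sigma=1.5)                 # binary edge map with gaps

      # The ACO stage would now lay pheromone along collinear edge fragments,
      # under an angle constraint, to bridge the missing part of the scratch
      # and report its length in pixels.
      print("edge pixels found:", int(edges.sum()))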

  2. Learning directed acyclic graphs from large-scale genomics data.

    PubMed

    Nikolay, Fabio; Pesavento, Marius; Kritikos, George; Typas, Nassos

    2017-09-20

    In this paper, we consider the problem of learning the genetic interaction map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions from noisy double-knockout (DK) data. Based on a set of well-established biological interaction models, we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that matches the DK measurements best. Furthermore, we extend the GENIE program by incorporating genetic interaction profile (GI-profile) data to further enhance the detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically significant results for real measurement data. Finally, we show via numeric simulations that the GENIE program and the GI-profile data extended GENIE (GI-GENIE) program clearly outperform the conventional techniques and present real data results for our proposed sequential scalability technique.

  3. Efficient Ada multitasking on a RISC register window architecture

    NASA Technical Reports Server (NTRS)

    Kearns, J. P.; Quammen, D.

    1987-01-01

    This work addresses the problem of reducing context switch overhead on a processor which supports a large register file - a register file much like that which is part of the Berkeley RISC processors and several other emerging architectures (which are not necessarily reduced instruction set machines in the purest sense). Such a reduction in overhead is particularly desirable in a real-time embedded application, in which task-to-task context switch overhead may result in failure to meet crucial deadlines. A storage management technique by which a context switch may be implemented as cheaply as a procedure call is presented. The essence of this technique is the avoidance of the save/restore of registers on the context switch. This is achieved through analysis of the static source text of an Ada tasking program. Information gained during that analysis directs the optimized storage management strategy for that program at run time. A formal verification of the technique in terms of an operational control model and an evaluation of the technique's performance via simulations driven by synthetic Ada program traces are presented.

  4. Multi-objective Optimization of Energy Systems (Optimisation multi-objectif des systemes energetiques)

    NASA Astrophysics Data System (ADS)

    Dipama, Jean

    The increasing demand for energy and the environmental concerns related to greenhouse gas emissions are leading more and more private and public utilities to turn to nuclear energy as an alternative for the future. Nuclear power plants are therefore expected to undergo a large expansion in the coming years, and improved technologies will be put in place to support the development of these plants. This thesis considers the optimization of the thermodynamic cycle of the secondary loop of the Gentilly-2 nuclear power plant in terms of output power and thermal efficiency. Investigations are carried out to determine the optimal operating conditions of steam power cycles through the judicious combination of steam extractions at the different stages of the turbines. Whether for superheating or regeneration, we are confronted in all cases with an optimization problem involving two conflicting objectives, as increasing the efficiency implies a decrease in mechanical work and vice versa. Solving this kind of problem does not lead to a unique solution, but to a set of solutions that are tradeoffs between the conflicting objectives. To search for all of these solutions, called Pareto optimal solutions, an appropriate optimization algorithm is required. Before starting the optimization of the secondary loop, we developed a thermodynamic model of the secondary loop which includes models for the main thermal components (e.g., turbine, moisture separator-superheater, condenser, feedwater heater and deaerator). This model is used to calculate the thermodynamic state of the steam and water at the different points of the installation. The thermodynamic model has been developed with Matlab and validated by comparing its predictions with the operating data provided by the engineers of the power plant. The optimizer, developed in VBA (Visual Basic for Applications), uses an optimization algorithm based on the principle of genetic algorithms, a stochastic optimization method which is very robust and widely used to solve problems that are usually difficult to handle by traditional methods. Genetic algorithms (GAs) have been used in previous research and proved to be efficient in optimizing heat exchanger networks (HENs) (Dipama et al., 2008), in which HENs were synthesized to recover the maximum heat in an industrial process. The optimization problem formulated in that context consisted of a single objective, namely the maximization of energy recovery. The optimization algorithm developed in this thesis extends the ability of GAs by taking into account several objectives simultaneously. This algorithm introduces an innovation in the method of finding optimal solutions, by using a technique which consists of partitioning the solution space into parallel grids called "watching corridors". These corridors make it possible to specify areas (the observation corridors) in which the most promising feasible solutions are found and used to guide the search towards optimal solutions. A measure of the progress of the search is incorporated into the optimization algorithm to make it self-adaptive through the use of appropriate genetic operators at each stage of the optimization process. The proposed method allows fast convergence and ensures a diversity of solutions. Moreover, this method gives the algorithm the ability to overcome difficulties associated with optimization problems having complex Pareto front landscapes (e.g., discontinuity, disjunction).
The multi-objective optimization algorithm was first validated using numerical test problems from the literature as well as energy system optimization problems. Finally, the proposed optimization algorithm was applied to the optimization of the secondary loop of the Gentilly-2 nuclear power plant, and a set of solutions was found that allows the plant to operate under optimal conditions. (Abstract shortened by UMI.)
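
    At the core of any such multi-objective GA is a non-dominated (Pareto) filter; the sketch below applies one to an invented power-versus-efficiency trade-off. It illustrates only the dominance test, not the thesis's "watching corridor" partitioning or its genetic operators.

      # Hedged sketch: Pareto (non-dominated) filtering for a toy
      # maximize-power / maximize-efficiency trade-off.
      import numpy as np

      def pareto_front(points):
          """Boolean mask of non-dominated points (both objectives maximized)."""
          n = points.shape[0]
          keep = np.ones(n, dtype=bool)
          for i in range(n):
              dominators = (np.all(points >= points[i], axis=1)
                            & np.any(points > points[i], axis=1))
              keep[i] = not dominators.any()
          return keep

      rng = np.random.default_rng(4)
      power = rng.uniform(600.0, 700.0, size=200)                   # MW (toy values)
      efficiency = 0.36 - 0.0002 * (power - 600.0) + 0.002 * rng.standard_normal(200)
      candidates = np.column_stack([power, efficiency])
      front = candidates[pareto_front(candidates)]
      print(f"{front.shape[0]} Pareto-optimal trade-offs out of 200 candidates")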

  5. Microwave-based medical diagnosis using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Modiri, Arezoo

    This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level of complexity and randomness inherent in the selection of electromagnetic benchmark problems, a tendency to resort to oversimplification in order to arrive at reasonable solutions has been taken in the literature when utilizing analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.
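
    As an illustration of the benchmark-first strategy described above, the sketch below applies the canonical particle swarm update to the Rastrigin function, whose global optimum is known; the inertia and acceleration coefficients are common textbook values, not the modified PSO of the dissertation.

      # Hedged sketch: canonical PSO on a benchmark with a known optimum.
      import numpy as np

      def rastrigin(x):
          return 10.0 * x.shape[-1] + np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x), axis=-1)

      rng = np.random.default_rng(5)
      dim, n_particles, n_iter = 5, 30, 300
      x = rng.uniform(-5.12, 5.12, size=(n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), rastrigin(x)
      gbest = pbest[np.argmin(pbest_f)].copy()

      w, c1, c2 = 0.72, 1.49, 1.49                       # inertia and acceleration terms
      for _ in range(n_iter):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, -5.12, 5.12)
          f = rastrigin(x)
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          gbest = pbest[np.argmin(pbest_f)].copy()

      print("best value found (global optimum is 0):", pbest_f.min())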

  6. Optimal Control and Smoothing Techniques for Computing Minimum Fuel Orbital Transfers and Rendezvous

    NASA Astrophysics Data System (ADS)

    Epenoy, R.; Bertrand, R.

    We investigate in this paper the computation of minimum fuel orbital transfers and rendezvous. Each problem is seen as an optimal control problem and is solved by means of shooting methods [1]. This approach corresponds to the use of Pontryagin's Maximum Principle (PMP) [2-4] and leads to the solution of a Two Point Boundary Value Problem (TPBVP). It is well known that the latter is very difficult to solve when the performance index is fuel consumption, because in this case the optimal control law has a particular discontinuous structure called "bang-bang". We will show how to modify the performance index by a term depending on a small parameter in order to yield regular controls. Then, a continuation method on this parameter will lead us to the solution of the original problem. Convergence theorems will be given. Finally, numerical examples will illustrate the interest of our method. We will consider two particular problems: the GTO (Geostationary Transfer Orbit) to GEO (Geostationary Equatorial Orbit) transfer and the LEO (Low Earth Orbit) rendezvous.
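
    As a minimal illustration of the smoothing idea (not the authors' exact perturbation), the fuel cost can be augmented by a small regularizing term and the resulting sequence of regular problems continued toward the original one:

      J_\varepsilon(u) \;=\; \int_{t_0}^{t_f} \Big( \lVert u(t) \rVert \;+\; \varepsilon\,\lVert u(t) \rVert^2 \Big)\, dt,
      \qquad \varepsilon_0 > \varepsilon_1 > \cdots \rightarrow 0,

    so that each J_{\varepsilon_k} yields a smooth optimal control, and the solution for \varepsilon_k initializes the shooting method for the next, smaller \varepsilon_{k+1}.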

  7. Numerical Solution of the Electron Heat Transport Equation and Physics-Constrained Modeling of the Thermal Conductivity via Sequential Quadratic Programming Optimization in Nuclear Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Paloma, Cynthia S.

    The plasma electron temperature (Te) plays a critical role in a tokamak nuclear fusion reactor since temperatures on the order of 10^8 K are required to achieve fusion conditions. Many plasma properties in a tokamak nuclear fusion reactor are modeled by partial differential equations (PDEs) because they depend not only on time but also on space. In particular, the dynamics of the electron temperature is governed by a PDE referred to as the Electron Heat Transport Equation (EHTE). In this work, a numerical method is developed to solve the EHTE based on a custom finite-difference technique. The solution of the EHTE is compared to temperature profiles obtained by using TRANSP, a sophisticated plasma transport code, for specific discharges from the DIII-D tokamak, located at the DIII-D National Fusion Facility in San Diego, CA. The thermal conductivity (also called thermal diffusivity) of the electrons (Xe) is a plasma parameter that plays a critical role in the EHTE since it indicates how the electron temperature diffusion varies across the minor effective radius of the tokamak. TRANSP approximates Xe through a curve-fitting technique to match experimentally measured electron temperature profiles. While complex physics-based models have been proposed for Xe, there is a lack of a simple mathematical model for the thermal diffusivity that could be used for control design. In this work, a model for Xe is proposed based on a scaling law involving key plasma variables such as the electron temperature (Te), the electron density (ne), and the safety factor (q). An optimization algorithm is developed based on the Sequential Quadratic Programming (SQP) technique to optimize the scaling factors appearing in the proposed model so that the predicted electron temperature and magnetic flux profiles match predefined target profiles in the best possible way. A simulation study summarizing the outcomes of the optimization procedure is presented to illustrate the potential of the proposed modeling method.
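
    The scaling-law fit described above can be sketched with SciPy's SLSQP routine (a sequential quadratic programming implementation); the synthetic profile samples and the assumed form chi_e = c0*Te^a*ne^b*q^c stand in for the actual DIII-D data and target-profile matching.

      # Hedged sketch: fit power-law exponents of a diffusivity model with SLSQP.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)
      Te = rng.uniform(0.5, 3.0, size=60)          # keV (toy profile samples)
      ne = rng.uniform(2.0, 8.0, size=60)          # 10^19 m^-3
      q = rng.uniform(1.0, 4.0, size=60)           # safety factor
      true = np.array([1.2, 1.5, -0.5, 0.8])       # c0, a, b, c used to fabricate "data"
      chi_meas = true[0] * Te**true[1] * ne**true[2] * q**true[3]
      chi_meas *= 1.0 + 0.05 * rng.standard_normal(chi_meas.size)

      def misfit(p):
          c0, a, b, c = p
          chi_model = c0 * Te**a * ne**b * q**c
          return np.sum((chi_model - chi_meas) ** 2)

      result = minimize(misfit, x0=[1.0, 1.0, 0.0, 0.0], method="SLSQP",
                        bounds=[(1e-3, 10.0), (-3.0, 3.0), (-3.0, 3.0), (-3.0, 3.0)])
      print("fitted (c0, a, b, c):", np.round(result.x, 2))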

  8. A Signal-Detection Analysis of Fast-and-Frugal Trees

    ERIC Educational Resources Information Center

    Luan, Shenghua; Schooler, Lael J.; Gigerenzer, Gerd

    2011-01-01

    Models of decision making are distinguished by those that aim for an optimal solution in a world that is precisely specified by a set of assumptions (a so-called "small world") and those that aim for a simple but satisfactory solution in an uncertain world where the assumptions of optimization models may not be met (a so-called "large world"). Few…

  9. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that could directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values. Lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple and easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. To model the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique. This novel optimization technique gives accurate results in addition to being the fastest of the compared techniques.
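
    A minimal Extreme Learning Machine regressor, the kind of model hybridized with PSO above, is sketched below: a random hidden layer followed by least-squares output weights. The cutting-parameter data set is synthetic and the roughness model is an assumption made only for illustration.

      # Hedged sketch: a bare-bones Extreme Learning Machine regressor.
      import numpy as np

      class ELM:
          def __init__(self, n_hidden=40, seed=0):
              self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

          def fit(self, X, y):
              n_features = X.shape[1]
              self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random input weights
              self.b = self.rng.normal(size=self.n_hidden)                # random biases
              H = np.tanh(X @ self.W + self.b)                            # hidden-layer outputs
              self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)           # analytic output weights
              return self

          def predict(self, X):
              return np.tanh(X @ self.W + self.b) @ self.beta

      rng = np.random.default_rng(7)
      X = rng.uniform([50, 0.05, 0.5], [200, 0.4, 2.5], size=(300, 3))    # speed, feed, depth
      roughness = 0.8 + 0.004 * X[:, 1] * X[:, 0] + 0.3 * X[:, 2] + 0.05 * rng.standard_normal(300)
      model = ELM(n_hidden=40).fit(X[:200], roughness[:200])
      rmse = np.sqrt(np.mean((model.predict(X[200:]) - roughness[200:]) ** 2))
      print(f"ELM test RMSE: {rmse:.3f}")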

  10. Flight Mechanics Project

    NASA Technical Reports Server (NTRS)

    Steck, Daniel

    2009-01-01

    This report documents the generation of an outbound Earth-to-Moon transfer preliminary database consisting of four cases calculated twice a day for a 19-year period. The database was desired as the first step in enabling NASA to rapidly generate Earth-to-Moon trajectories for the Constellation Program using the Mission Assessment Post Processor. The completed database was created by running a flight trajectory and optimization program, called Copernicus, in batch mode with the use of newly created Matlab functions. The database is accurate and has high data resolution. The techniques and scripts developed to generate the trajectory information will also be directly used in generating a comprehensive database.

  11. Damage Detection Using Holography and Interferometry

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2003-01-01

    This paper reviews classical approaches to damage detection using laser holography and interferometry. The paper then details the modern uses of electronic holography and neural-net-processed characteristic patterns to detect structural damage. The design of the neural networks and the preparation of the training sets are discussed. The use of a technique to optimize the training sets, called folding, is explained. Then a training procedure is detailed that uses the holography-measured vibration modes of the undamaged structures to impart damage-detection sensitivity to the neural networks. The inspections of an optical strain gauge mounting plate and an International Space Station cold plate are presented as examples.

  12. On the Optimization of Aerospace Plane Ascent Trajectory

    NASA Astrophysics Data System (ADS)

    Al-Garni, Ahmed; Kassem, Ayman Hamdy

    A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectories (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis is done on the hybrid technique to make comparisons with both basic genetic algorithms and particle swarm optimization techniques with respect to convergence and execution time. Genetic algorithm optimization showed better execution time performance, while particle swarm optimization showed better convergence performance. The hybrid optimization technique, benefiting from both techniques, showed superior and robust performance, offering a compromise between convergence behavior and execution time.

  13. Visual analytics of anomaly detection in large data streams

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.; Sharma, Ratnesh K.; Mehta, Abhay

    2009-01-01

    Most data streams usually are multi-dimensional, high-speed, and contain massive volumes of continuous information. They are seen in daily applications, such as telephone calls, retail sales, data center performance, and oil production operations. Many analysts want insight into the behavior of this data. They want to catch the exceptions in flight to reveal the causes of the anomalies and to take immediate action. To guide the user in finding the anomalies in the large data stream quickly, we derive a new automated neighborhood threshold marking technique, called AnomalyMarker. This technique is built on cell-based data streams and user-defined thresholds. We extend the scope of the data points around the threshold to include the surrounding areas. The idea is to define a focus area (marked area) which enables users to (1) visually group the interesting data points related to the anomalies (i.e., problems that occur persistently or occasionally) for observing their behavior; (2) discover the factors related to the anomaly by visualizing the correlations between the problem attribute with the attributes of the nearby data items from the entire multi-dimensional data stream. Mining results are quickly presented in graphical representations (i.e., tooltip) for the user to zoom into the problem regions. Different algorithms are introduced which try to optimize the size and extent of the anomaly markers. We have successfully applied this technique to detect data stream anomalies in large real-world enterprise server performance and data center energy management.
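
    The neighborhood-around-a-threshold idea can be sketched in a few lines: points that exceed a user-defined threshold and points within a margin of it are both marked, then grouped into contiguous regions. The series, threshold, and margin below are invented; this is not the AnomalyMarker implementation.

      # Hedged sketch: mark a focus area around a user-defined threshold in a
      # streaming series and group the marked samples into contiguous regions.
      import numpy as np

      rng = np.random.default_rng(8)
      cpu_load = 60 + 10 * np.sin(np.linspace(0, 12, 600)) + 3 * rng.standard_normal(600)
      cpu_load[400:420] += 25                            # an injected anomaly burst

      threshold, margin = 85.0, 5.0                      # user-defined threshold and focus band
      exceed = cpu_load > threshold                      # hard threshold violations
      marked = np.abs(cpu_load - threshold) <= margin    # surrounding "focus area" points

      # Group contiguous marked/exceeding samples so each anomaly region can be
      # shown (e.g., as a tooltip) together with nearby correlated attributes.
      flags = (exceed | marked).astype(int)
      edges_of_regions = np.flatnonzero(np.diff(np.concatenate(([0], flags, [0]))))
      print("anomaly regions (start, end):",
            list(zip(edges_of_regions[::2], edges_of_regions[1::2])))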

  14. An introduction to the COLIN optimization interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William Eugene

    2003-03-01

    We describe COLIN, a Common Optimization Library INterface for C++. COLIN provides C++ template classes that define a generic interface for both optimization problems and optimization solvers. COLIN is specifically designed to facilitate the development of hybrid optimizers, for which one optimizer calls another to solve an optimization subproblem. We illustrate the capabilities of COLIN with an example of a memetic genetic programming solver.

  15. In Search of Search Engine Marketing Strategy Amongst SME's in Ireland

    NASA Astrophysics Data System (ADS)

    Barry, Chris; Charleton, Debbie

    Researchers have identified the Web as a searcher's first port of call for locating information. Search Engine Marketing (SEM) strategies have been noted as a key consideration when developing, maintaining and managing Websites. A study presented here of the SEM practices of Irish small to medium enterprises (SMEs) reveals that they plan to spend more resources on SEM in the future. Most firms utilize an informal SEM strategy, where Website optimization is perceived as most effective in attracting traffic. Respondents cite the use of ‘keywords in title and description tags’ as the most used SEM technique, followed by the use of ‘keywords throughout the whole Website’; ‘Pay for Placement’ was the most widely used Paid Search technique. In concurrence with the literature, measuring SEM performance remains a significant challenge, with many firms unsure whether they measure it effectively. An encouraging finding is that Irish SMEs adopt a positive ethical posture when undertaking SEM.

  16. Review of Research into the Concept of the Microblowing Technique for Turbulent Skin Friction Reduction

    NASA Technical Reports Server (NTRS)

    2004-01-01

    A new technology for reducing turbulent skin friction, called the Microblowing Technique (MBT), is presented. Results from proof-of-concept experiments show that this technology could potentially reduce turbulent skin friction by more than 50% relative to a solid flat plate for subsonic and supersonic flow conditions. The primary purpose of this review paper is to provide readers with information on the turbulent skin friction reduction obtained in many experiments using the MBT. Although the MBT carries a penalty associated with supplying the microblowing air, some combinations of the MBT with suction boundary layer control methods are an attractive alternative for a real application. Several computational simulations to understand the flow physics of the MBT are also included. More experiments and computational fluid dynamics (CFD) computations are needed to understand the unsteady flow nature of the MBT and to optimize this new technology.

  17. Time-frequency and advanced frequency estimation techniques for the investigation of bat echolocation calls.

    PubMed

    Kopsinis, Yannis; Aboutanios, Elias; Waters, Dean A; McLaughlin, Steve

    2010-02-01

    In this paper, techniques for time-frequency analysis and investigation of bat echolocation calls are studied. Particularly, enhanced resolution techniques are developed and/or used in this specific context for the first time. When compared to traditional time-frequency representation methods, the proposed techniques are more capable of showing previously unseen features in the structure of bat echolocation calls. It should be emphasized that although the study is focused on bat echolocation recordings, the results are more general and applicable to many other types of signal.

  18. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
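
    A cost-sensitive SVM in the spirit of the 2C-SVM can be approximated with per-class weights, as sketched below; the data set, kernel settings, and weight sweep are assumptions, and a Neyman-Pearson style design would additionally require careful (e.g., smoothed) error estimation, as the paper argues.

      # Hedged sketch: sweep an asymmetric class weight and report the
      # resulting false-alarm and detection rates of an RBF SVM.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=600, n_features=5, weights=[0.8, 0.2], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

      # A Neyman-Pearson style design would pick the weight whose estimated
      # false-alarm rate stays below a target while detection is maximized.
      for w_pos in (1.0, 2.0, 5.0, 10.0):
          clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: w_pos}).fit(X_tr, y_tr)
          pred = clf.predict(X_te)
          false_alarm = np.mean(pred[y_te == 0] == 1)
          detection = np.mean(pred[y_te == 1] == 1)
          print(f"w_pos={w_pos:4.1f}  P_FA={false_alarm:.3f}  P_D={detection:.3f}")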

  19. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO is inspired by the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performance of the CSO based designed FIR filters has proven to be superior to that obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that the CSO is the best optimizer among the relevant techniques, not only in convergence speed but also in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
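
    The fitness evaluation at the heart of any such swarm-based FIR design, i.e. the deviation of a candidate coefficient set's magnitude response from an ideal response, can be sketched as below; the low-pass specification and the reference design are illustrative, and the CSO search itself is omitted.

      # Hedged sketch: fitness of candidate FIR coefficients against an ideal
      # brick-wall low-pass magnitude response.
      import numpy as np
      from scipy.signal import freqz, firwin

      def lowpass_fitness(coeffs, cutoff=0.3, n_points=512):
          """Sum of squared errors between |H(w)| and an ideal low-pass response."""
          w, h = freqz(coeffs, worN=n_points)             # frequency response on [0, pi]
          ideal = (w / np.pi <= cutoff).astype(float)     # 1 in the passband, 0 in the stopband
          return np.sum((np.abs(h) - ideal) ** 2)

      candidate = firwin(21, 0.3)                         # a reference 20th-order design
      perturbed = candidate + 0.01 * np.random.default_rng(9).standard_normal(candidate.size)
      print("fitness (reference):", lowpass_fitness(candidate))
      print("fitness (perturbed):", lowpass_fitness(perturbed))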

  20. Evaluation of hybrid inverse planning and optimization (HIPO) algorithm for optimization in real-time, high-dose-rate (HDR) brachytherapy for prostate.

    PubMed

    Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley

    2013-07-08

    The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system called Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is the manual manipulation of isodose lines slice by slice, and the quality of the plan heavily depends on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, called simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, called dose-volume histogram-based optimization (DVHO), optimizes the three-dimensional dose distribution quickly by moving straight downhill once it is in the advantageous region of the search space given by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using the paired Student's t-test at the 5% significance level. HIPO can provide treatment plans with target coverage comparable to that of GRO with a reduction in dose to the critical structures.

  1. A probabilistic multi-criteria decision making technique for conceptual and preliminary aerospace systems design

    NASA Astrophysics Data System (ADS)

    Bandte, Oliver

    It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
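
    A minimal sketch of the empirical-distribution idea described above: the Probability of Success is estimated by counting how often Monte Carlo samples of the criteria jointly fall inside the customer's target ranges. The criterion names, distributions and targets below are hypothetical placeholders.

    ```python
    # Hedged sketch of an empirical Probability of Success (POS): count how many
    # Monte Carlo design evaluations jointly satisfy every criterion target.
    # Criterion names, distributions and thresholds are hypothetical.
    import numpy as np

    def empirical_pos(samples, targets):
        """samples: dict criterion -> array of Monte Carlo outcomes;
        targets: dict criterion -> (lower, upper) bounds, +/-inf if one-sided."""
        n = len(next(iter(samples.values())))
        ok = np.ones(n, dtype=bool)
        for name, (lo, hi) in targets.items():
            ok &= (samples[name] >= lo) & (samples[name] <= hi)
        return ok.mean()              # fraction of samples meeting ALL goals jointly

    rng = np.random.default_rng(1)
    n = 10_000
    samples = {"range_nm": rng.normal(5500, 300, n),      # hypothetical criteria
               "cost_M": rng.normal(120, 15, n)}
    targets = {"range_nm": (5000, np.inf), "cost_M": (-np.inf, 130)}
    print(empirical_pos(samples, targets))
    ```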

  2. How unrealistic optimism is maintained in the face of reality.

    PubMed

    Sharot, Tali; Korn, Christoph W; Dolan, Raymond J

    2011-10-09

    Unrealistic optimism is a pervasive human trait that influences domains ranging from personal relationships to politics and finance. How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. We examined this question and found a marked asymmetry in belief updating. Participants updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Distinct regions of the prefrontal cortex tracked estimation errors when those called for positive update, both in individuals who scored high and low on trait optimism. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for negative update in right inferior prefrontal gyrus. These findings indicate that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.

  3. Dynamic optimization of open-loop input signals for ramp-up current profiles in tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Ren, Zhigang; Xu, Chao; Lin, Qun; Loxton, Ryan; Teo, Kok Lay

    2016-03-01

    Establishing a good current spatial profile in tokamak fusion reactors is crucial to effective steady-state operation. The evolution of the current spatial profile is related to the evolution of the poloidal magnetic flux, which can be modeled in the normalized cylindrical coordinates using a parabolic partial differential equation (PDE) called the magnetic diffusion equation. In this paper, we consider the dynamic optimization problem of attaining the best possible current spatial profile during the ramp-up phase of the tokamak. We first use the Galerkin method to obtain a finite-dimensional ordinary differential equation (ODE) model based on the original magnetic diffusion PDE. Then, we combine the control parameterization method with a novel time-scaling transformation to obtain an approximate optimal parameter selection problem, which can be solved using gradient-based optimization techniques such as sequential quadratic programming (SQP). This control parameterization approach involves approximating the tokamak input signals by piecewise-linear functions whose slopes and break-points are decision variables to be optimized. We show that the gradient of the objective function with respect to the decision variables can be computed by solving an auxiliary dynamic system governing the state sensitivity matrix. Finally, we conclude the paper with simulation results for an example problem based on experimental data from the DIII-D tokamak in San Diego, California.
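
    A toy illustration of the control parameterization step described above: the input is approximated by a piecewise-linear function of a few node values and the resulting objective is handed to an SQP solver. The scalar ODE and weights are stand-ins, not the Galerkin-reduced magnetic diffusion model, and the break-point (time-scaling) optimization is omitted.

    ```python
    # Hedged control-parameterization sketch: piecewise-linear input defined by a
    # few node values, a stand-in first-order plant (NOT the reduced magnetic
    # diffusion model), and SQP (scipy's SLSQP) over the node values.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    T, n_nodes = 1.0, 6
    t_nodes = np.linspace(0.0, T, n_nodes)            # fixed break-points here

    def u_of_t(t, params):                            # piecewise-linear input signal
        return np.interp(t, t_nodes, params)

    def objective(params, x_target=1.0):
        rhs = lambda t, x: -x + u_of_t(t, params)     # stand-in plant dynamics
        sol = solve_ivp(rhs, (0.0, T), [0.0], rtol=1e-8)
        x_T = sol.y[0, -1]
        effort = np.trapz(u_of_t(sol.t, params) ** 2, sol.t)
        return (x_T - x_target) ** 2 + 1e-3 * effort  # tracking error + input penalty

    res = minimize(objective, x0=np.zeros(n_nodes), method="SLSQP",
                   bounds=[(0.0, 5.0)] * n_nodes)
    print(res.x, res.fun)
    ```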

  4. Materials Development in Three Italian CALL Projects: Seeking an Optimal Mix between In-Class and Out-of-Class Learning

    ERIC Educational Resources Information Center

    Levy, Mike; Kennedy, Claire

    2010-01-01

    This paper considers the design and development of CALL materials with the aim of achieving an optimal mix between in-class and out-of-class learning in the context of teaching Italian at an Australian university. The authors discuss three projects in relation to the following themes: (a) conceptions of the in-class/out-of-class relationship, (b)…

  5. Evolutionary optimality applied to Drosophila experiments: hypothesis of constrained reproductive efficiency.

    PubMed

    Novoseltsev, V N; Arking, R; Novoseltseva, J A; Yashin, A I

    2002-06-01

    The general purpose of the paper is to test evolutionary optimality theories with experimental data on reproduction, energy consumption, and longevity in a particular Drosophila genotype. We describe the resource allocation in Drosophila females in terms of the oxygen consumption rates devoted to reproduction and to maintenance. The maximum ratio of the component spent on reproduction to the total rate of oxygen consumption, which can be realized by the female reproductive machinery, is called metabolic reproductive efficiency (MRE). We regard MRE as an evolutionary constraint. We demonstrate that MRE may be evaluated for a particular Drosophila phenotype given the fecundity pattern, the age-related pattern of oxygen consumption rate, and the longevity. We use a homeostatic model of aging to simulate a life history of a representative female fly, which describes the control strain in the long-term experiments with the Wayne State Drosophila genotype. We evaluate the theoretically optimal trade-offs in this genotype. Then we apply the Van Noordwijk-de Jong resource acquisition and allocation model, Kirkwood's disposable soma theory, and the Partridge-Barton optimality approach to test whether the experimentally observed trade-offs may be regarded as close to the theoretically optimal ones. We demonstrate that the two approaches by Partridge-Barton and Kirkwood allow a positive answer to the question, whereas the Van Noordwijk-de Jong approach may be used to illustrate the optimality. We discuss the prospects of applying the proposed technique to various Drosophila experiments, in particular those including manipulations affecting fecundity.

  6. StreamSqueeze: a dynamic stream visualization for monitoring of event data

    NASA Astrophysics Data System (ADS)

    Mansmann, Florian; Krstajic, Milos; Fischer, Fabian; Bertini, Enrico

    2012-01-01

    While in clear-cut situations automated analytical solutions for data streams are already in place, only a few visual approaches have been proposed in the literature for exploratory analysis tasks on dynamic information. However, due to the competitive or security-related advantages that real-time information gives in domains such as finance, business or networking, we are convinced that there is a need for exploratory visualization tools for data streams. Under the conditions that new events have higher relevance and that smooth transitions enable traceability of items, we propose a novel dynamic stream visualization called StreamSqueeze. In this technique the degree of interest of recent items is expressed through an increase in size, and thus recent events can be shown in more detail. The technique has two main benefits: First, the layout algorithm arranges items in several lists of various sizes and optimizes the positions within each list so that the transition of an item from one list to the other triggers the least visual change. Second, the animation scheme ensures that for 50 percent of the time an item has a static screen position where reading is most effective, and then continuously shrinks and moves to its next static position in the subsequent list. To demonstrate the capability of our technique, we apply it to large and high-frequency news and syslog streams and show how it maintains optimal stability of the layout under the conditions given above.

  7. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
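
    The minimum weighted-norm solutions mentioned above have a closed form; the sketch below computes it for a generic linear source-receptor model y = A s with a weight matrix W. The weights are a generic illustration; the renormalization condition that fixes a specific W in the paper is not reproduced.

    ```python
    # Minimum weighted-norm inverse sketch for a linear model y = A @ s:
    # s_hat = W^{-1} A^T (A W^{-1} A^T)^{-1} y minimizes s^T W s subject to A s = y.
    # W below is a generic positive-definite weight matrix; the paper's
    # renormalization condition, which determines a specific W, is not reproduced.
    import numpy as np

    def min_weighted_norm_solution(A, y, W):
        Winv_At = np.linalg.solve(W, A.T)             # W^{-1} A^T
        gram = A @ Winv_At                            # A W^{-1} A^T  (m x m)
        return Winv_At @ np.linalg.solve(gram, y)

    rng = np.random.default_rng(0)
    m, n = 5, 50                                      # few detectors, many source cells
    A = rng.random((m, n))                            # sensitivity (adjoint) matrix
    s_true = np.zeros(n); s_true[17] = 2.0            # single point source
    y = A @ s_true                                    # noise-free measurements
    W = np.diag(rng.uniform(0.5, 2.0, n))             # illustrative weights
    print(min_weighted_norm_solution(A, y, W).round(3))
    ```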

  8. Defect Engineering in SrI2:Eu2+ Single Crystal Scintillators

    DOE PAGES

    Wu, Yuntao; Boatner, Lynn A.; Lindsey, Adam C.; ...

    2015-06-23

    Eu2+-activated strontium iodide is an excellent single crystal scintillator used for gamma-ray detection and significant effort is currently focused on the development of large-scale crystal growth techniques. A new approach of molten-salt pumping or so-called melt aging was recently applied to optimize the crystal quality and scintillation performance. Nevertheless, a detailed understanding of the underlying mechanism of this technique is still lacking. The main purpose of this paper is to conduct an in-depth study of the interplay between microstructure, trap centers and scintillation efficiency after melt aging treatment. Three SrI2:2 mol% Eu2+ single crystals with 16 mm diameter were grown using the Bridgman method under identical growth conditions with the exception of the melt aging time (e.g. 0, 24 and 72 hours). Using energy-dispersive X-ray spectroscopy, it is found that the matrix composition of the finished crystal after melt aging treatment approaches the stoichiometric composition. The mechanism responsible for the formation of secondary phase inclusions in melt-aged SrI2:Eu2+ is discussed. Simultaneous improvement in light yield, energy resolution, scintillation decay-time and afterglow is achieved in melt-aged SrI2:Eu2+. The correlation between performance improvement and defect structure is addressed. The results of this paper lead to a better understanding of the effects of defect engineering in control and optimization of metal halide scintillators using the melt aging technique.

  9. [Preparation of scopolamine hydrobromide nanoparticles-in-microsphere system].

    PubMed

    Lü, Wei-ling; Hu, Jin-hong; Zhu, Quan-gang; Li, Feng-qian

    2010-07-01

    This study aimed to prepare a scopolamine hydrobromide nanoparticles-in-microsphere system (SH-NiMS) and evaluate its in vitro drug release characteristics. SH nanoparticles were prepared by the ionic crosslinking method with tripolyphosphate (TPP) as the crosslinker and chitosan as the carrier. An orthogonal design was used to optimize the formulation of the SH nanoparticles, taking encapsulation efficiency and drug loading as the evaluation parameters. With HPMC as the carrier, the spray-drying parameters were adjusted and the SH nanoparticles were spray-dried into HPMC-encapsulated microspheres, forming the nanoparticles-in-microsphere system (NiMS). The appearance of the SH-NiMS was observed by SEM, the structure was examined by FT-IR, and the in vitro release characteristics were evaluated. The optimized formulation of the SH nanoparticles was TPP/CS 1:3 (w/w), HPMC 0.3%, SH 0.2%. The peristaltic feed speed of the spray-drying process was adjusted to 15%, and the inlet temperature was 110 degrees C. The encapsulation product yield, drug loading and particle size of the SH-NiMS were 94.2%, 20.4%, and 1256.5 nm, respectively. The appearance and structure of the SH-NiMS were good. The preparation method of SH-NiMS is stable and reliable, providing a new way to develop new dosage forms.

  10. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  11. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
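
    One of the reordering methods listed above, spectral bisection, can be illustrated compactly: the ranks' communication graph is split using the sign of the Fiedler vector of its Laplacian. The toy communication matrix below stands in for the mpiP-derived data; the node-layout matching and the mpiAproxy evaluation workflow are not reproduced.

    ```python
    # Spectral bisection sketch: split MPI ranks into two groups using the sign of
    # the Fiedler vector (eigenvector of the second-smallest Laplacian eigenvalue)
    # of a communication-volume matrix. The matrix below is a toy stand-in.
    import numpy as np

    def spectral_bisection(comm):
        comm = 0.5 * (comm + comm.T)                  # symmetrize communication volume
        L = np.diag(comm.sum(axis=1)) - comm          # graph Laplacian
        vals, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
        fiedler = vecs[:, 1]
        return np.where(fiedler >= 0)[0], np.where(fiedler < 0)[0]

    rng = np.random.default_rng(3)
    n = 8                                             # toy number of MPI ranks
    comm = rng.integers(0, 10, (n, n)).astype(float)
    np.fill_diagonal(comm, 0.0)
    group_a, group_b = spectral_bisection(comm)
    print(group_a, group_b)   # ranks to be mapped onto the two halves of the node set
    ```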

  12. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  13. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
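
    The expected-cost figure of merit described above can be illustrated with a simple restart policy: each solver call has a fixed cost and returns a random objective value, and one stops as soon as the value beats a threshold. The outcome distribution and cost per call below are placeholders, not D-Wave or HFS data.

    ```python
    # Hedged sketch of the expected-cost idea: for a "stop once the returned value
    # is <= threshold" policy, the expected total cost is c_call / p plus the mean
    # accepted objective value, where p is the per-call success probability. The
    # outcome distribution and the cost per call are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.gamma(shape=3.0, scale=10.0, size=5000)   # stand-in solver outcomes
    c_call = 2.0                                            # assumed cost per solver call

    def expected_cost(threshold, samples, c_call):
        hit = samples <= threshold
        p = hit.mean()
        return np.inf if p == 0 else c_call / p + samples[hit].mean()

    thresholds = np.quantile(samples, np.linspace(0.01, 1.0, 100))
    costs = [expected_cost(t, samples, c_call) for t in thresholds]
    best_threshold = thresholds[int(np.argmin(costs))]
    print(best_threshold, min(costs))                       # optimal stopping threshold
    ```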

  14. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
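
    The replacement move described above is simple enough to sketch; below it is shown in a tau-EO style on a toy spin-glass energy, where the variable with the worst local fitness is most likely to be replaced by a random new state. The problem, tau value and step count are illustrative, not those of the report.

    ```python
    # Extremal optimization sketch (tau-EO style) on a toy +/-1 spin model: rank the
    # variables by local fitness, pick one with probability ~ rank^(-tau) biased
    # toward the worst, and give it a random new state. No temperature schedule is
    # needed; "avalanches" of replacements explore many local optima.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60
    J = rng.standard_normal((n, n)); J = 0.5 * (J + J.T); np.fill_diagonal(J, 0.0)

    def local_fitness(s):                 # per-spin contribution (higher is better)
        return s * (J @ s)

    def tau_eo(steps=20_000, tau=1.4):
        s = rng.choice([-1, 1], n)
        best, best_e = s.copy(), -0.5 * s @ J @ s
        p = np.arange(1, n + 1, dtype=float) ** -tau
        p /= p.sum()                      # rank 1 (worst fitness) is the most likely pick
        for _ in range(steps):
            order = np.argsort(local_fitness(s))      # worst fitness first
            k = order[rng.choice(n, p=p)]
            s[k] = rng.choice([-1, 1])                # unconditional random replacement
            e = -0.5 * s @ J @ s
            if e < best_e:
                best, best_e = s.copy(), e
        return best, best_e

    print(tau_eo()[1])                    # best energy found
    ```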

  15. Multichannel-Hadamard calibration of high-order adaptive optics systems.

    PubMed

    Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai

    2014-06-02

    We present a novel technique for calibrating the interaction matrix of high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are first divided into a series of channels according to their coupling relationship, and then the voltage-oriented Hadamard method is applied to these channels. Taking a 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel division is discussed and tested by numerical simulation. The proposed method is also compared experimentally with the voltage-oriented Hadamard-only method and the multichannel-only method. Results show that the multichannel-Hadamard method can produce a significant improvement in interaction matrix measurement.
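
    The Hadamard part of the method can be sketched compactly: instead of poking actuators one at a time, the deformable mirror is driven with columns of a Hadamard matrix and the interaction matrix is recovered by multiplying the measured responses by the scaled transpose. The sizes and noise level below are illustrative, and the multichannel grouping of coupled actuators is not reproduced.

    ```python
    # Hedged Hadamard-calibration sketch: drive the DM with Hadamard patterns
    # P = v * H, record sensor responses R = M_true @ P, and recover the interaction
    # matrix as M = R @ H.T / (n * v), using H @ H.T = n * I. The paper's
    # multichannel grouping of coupled actuators is not reproduced here.
    import numpy as np
    from scipy.linalg import hadamard

    n_act, n_meas, v = 64, 128, 0.2                   # actuators, sensor signals, poke amplitude
    H = hadamard(n_act).astype(float)                 # n_act must be a power of two here
    rng = np.random.default_rng(0)
    M_true = rng.standard_normal((n_meas, n_act))     # unknown interaction matrix

    R = M_true @ (v * H)                              # one measurement per Hadamard pattern
    R += 0.05 * rng.standard_normal(R.shape)          # additive sensor noise
    M_est = R @ H.T / (n_act * v)

    print(np.abs(M_est - M_true).max())               # reconstruction error
    ```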

  16. Evolutionary learning processes as the foundation for behaviour change.

    PubMed

    Crutzen, Rik; Peters, Gjalt-Jorn Ygram

    2018-03-01

    We argue that the active ingredients of behaviour change interventions, often called behaviour change methods (BCMs) or techniques (BCTs), can usefully be placed on a dimension of psychological aggregation. We introduce evolutionary learning processes (ELPs) as fundamental building blocks that are on a lower level of psychological aggregation than BCMs/BCTs. A better understanding of ELPs is useful to select the appropriate BCMs/BCTs to target determinants of behaviour, or vice versa, to identify potential determinants targeted by a given BCM/BCT, and to optimally translate them into practical applications. Using these insights during intervention development may increase the likelihood of developing effective interventions - both in terms of behaviour change as well as maintenance of behaviour change.

  17. Performance Analysis and Design Synthesis (PADS) computer program. Volume 2: Program description, part 2

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The QL module of the Performance Analysis and Design Synthesis (PADS) computer program is described. Execution of this module is initiated when and if subroutine PADSI calls subroutine GROPE. Subroutine GROPE controls the high level logical flow of the QL module. The purpose of the module is to determine a trajectory that satisfies the necessary variational conditions for optimal performance. The module achieves this by solving a nonlinear multi-point boundary value problem. The numerical method employed is described. It is an iterative technique that converges quadratically when it does converge. The three basic steps of the module are: (1) initialization, (2) iteration, and (3) culmination. For Volume 1 see N73-13199.

  18. Experimental Semiautonomous Vehicle

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.; Mishkin, Andrew H.; Litwin, Todd E.; Matthies, Larry H.; Cooper, Brian K.; Nguyen, Tam T.; Gat, Erann; Gennery, Donald B.; Firby, Robert J.; Miller, David P.

    1993-01-01

    Semiautonomous rover vehicle serves as testbed for evaluation of navigation and obstacle-avoidance techniques. Designed to traverse variety of terrains. Concepts developed applicable to robots for service in dangerous environments as well as to robots for exploration of remote planets. Called Robby, vehicle 4 m long and 2 m wide, with six 1-m-diameter wheels. Mass of 1,200 kg and surmounts obstacles as large as 1 1/2 m. Optimized for development of machine-vision-based strategies and equipped with complement of vision and direction sensors and image-processing computers. Front and rear cabs steer and roll with respect to centerline of vehicle. Vehicle also pivots about central axle, so wheels comply with almost any terrain.

  19. Sound quality recognition using optimal wavelet-packet transform and artificial neural network methods

    NASA Astrophysics Data System (ADS)

    Xing, Y. F.; Wang, Y. S.; Shi, L.; Guo, H.; Chen, H.

    2016-01-01

    According to human perceptual characteristics, a method combining the optimal wavelet-packet transform and an artificial neural network, the so-called OWPT-ANN model, is presented for psychoacoustic recognition. Comparisons of time-frequency analysis methods are performed, and an OWPT with 21 critical bands is designed for feature extraction of a sound, as is a three-layer back-propagation ANN for sound quality (SQ) recognition. Focusing on loudness and sharpness, the OWPT-ANN model is applied to vehicle noises under different working conditions. Experimental verifications show that the OWPT can effectively transform a sound into a time-varying energy pattern similar to that in the human auditory system. The errors of loudness and sharpness of vehicle noise from the OWPT-ANN are all less than 5%, which suggests good accuracy of the OWPT-ANN model in SQ recognition. The proposed methodology may be regarded as a promising technique for signal processing in human-hearing-related fields in engineering.

  20. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically scaled model the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.

  1. Label consistent K-SVD: learning a discriminative dictionary for recognition.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2013-11-01

    A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. The incremental dictionary learning algorithm is presented for the situation of limited memory resources. It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
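
    For readers unfamiliar with the formulation, the unified objective referred to above is commonly written as below (notation follows the LC-KSVD literature; the weights and sparsity level are tuning parameters):

    ```latex
    % Unified LC-KSVD objective as commonly stated: reconstruction error plus the
    % "discriminative sparse-code error" plus the classification error, with a
    % sparsity constraint on each code x_i. Alpha, beta and T are tuning parameters.
    \min_{D, A, W, X} \; \|Y - DX\|_F^2
        + \alpha \,\|Q - AX\|_F^2
        + \beta  \,\|H - WX\|_F^2
    \quad \text{s.t.} \quad \|x_i\|_0 \le T \;\; \forall i,
    ```

    where Y holds the training signals, D the dictionary, X the sparse codes, Q the discriminative target codes encoding label consistency, A a linear transform, H the class-label matrix, and W the linear classifier learned jointly with D.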

  2. On the hydrophilicity of electrodes for capacitive energy extraction

    NASA Astrophysics Data System (ADS)

    Lian, Cheng; Kong, Xian; Liu, Honglai; Wu, Jianzhong

    2016-11-01

    The so-called Capmix technique for energy extraction is based on the cyclic expansion of electrical double layers to harvest dissipative energy arising from the salinity difference between freshwater and seawater. Its optimal performance requires a careful selection of the electrical potentials for the charging and discharging processes, which must be matched with the pore characteristics of the electrode materials. While a number of recent studies have examined the effects of the electrode pore size and geometry on the capacitive energy extraction processes, there is little knowledge on how the surface properties of the electrodes affect the thermodynamic efficiency. In this work, we investigate the Capmix processes using the classical density functional theory for a realistic model of electrolyte solutions. The theoretical predictions allow us to identify optimal operation parameters for capacitive energy extraction with porous electrodes of different surface hydrophobicity. In agreement with recent experiments, we find that the thermodynamic efficiency can be much improved by using most hydrophilic electrodes.

  3. Coded excitation with spectrum inversion (CEXSI) for ultrasound array imaging.

    PubMed

    Wang, Yao; Metzger, Kurt; Stephens, Douglas N; Williams, Gregory; Brownlie, Scott; O'Donnell, Matthew

    2003-07-01

    In this paper, a scheme called coded excitation with spectrum inversion (CEXSI) is presented. An established optimal binary code whose spectrum has no nulls and possesses the least variation is encoded as a burst for transmission. Using this optimal code, the decoding filter can be derived directly from its inverse spectrum. Various transmission techniques can be used to improve energy coupling within the system pass-band. We demonstrate its potential to achieve excellent decoding with very low (< 80 dB) side-lobes. For a 2.6 µs code, an array element with a center frequency of 10 MHz and fractional bandwidth of 38%, range side-lobes of about 40 dB have been achieved experimentally with little compromise in range resolution. The signal-to-noise ratio (SNR) improvement also has been characterized at about 14 dB. Along with simulations and experimental data, we present a formulation of the scheme, according to which CEXSI can be extended to improve SNR in sparse array imaging in general.
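
    The spectrum-inversion idea can be illustrated in a few lines: because the chosen binary code has no spectral nulls, the decoding filter is simply the (regularized) inverse of the code spectrum. The 13-bit Barker code and the toy echo below are stand-ins for the paper's optimal code and measured data.

    ```python
    # Hedged spectrum-inversion decoding sketch: decode by dividing the echo
    # spectrum by the code spectrum (with mild regularization). A 13-bit Barker
    # code stands in for the paper's optimal binary code.
    import numpy as np

    code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
    N = 256                                           # FFT length >= echo length

    def cexsi_decode(echo, code, n=N, eps=1e-3):
        C = np.fft.rfft(code, n)
        E = np.fft.rfft(echo, n)
        return np.fft.irfft(E * np.conj(C) / (np.abs(C) ** 2 + eps), n)

    # Toy echo: two reflectors convolved with the transmitted code, plus noise.
    rng = np.random.default_rng(0)
    reflect = np.zeros(N); reflect[[60, 90]] = [1.0, 0.6]
    echo = np.convolve(reflect, code)[:N] + 0.01 * rng.standard_normal(N)
    decoded = cexsi_decode(echo, code)
    print(np.sort(np.argsort(decoded)[-2:]))          # should be close to [60, 90]
    ```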

  4. On the hydrophilicity of electrodes for capacitive energy extraction

    DOE PAGES

    Lian, Cheng; East China Univ. of Science and Technology, Shanghai; Kong, Xian; ...

    2016-09-14

    The so-called Capmix technique for energy extraction is based on the cyclic expansion of electrical double layers to harvest dissipative energy arising from the salinity difference between freshwater and seawater. Its optimal performance requires a careful selection of the electrical potentials for the charging and discharging processes, which must be matched with the pore characteristics of the electrode materials. While a number of recent studies have examined the effects of the electrode pore size and geometry on the capacitive energy extraction processes, there is little knowledge on how the surface properties of the electrodes affect the thermodynamic efficiency. In this paper, we investigate the Capmix processes using the classical density functional theory for a realistic model of electrolyte solutions. The theoretical predictions allow us to identify optimal operation parameters for capacitive energy extraction with porous electrodes of different surface hydrophobicity. Finally, in agreement with recent experiments, we find that the thermodynamic efficiency can be much improved by using most hydrophilic electrodes.

  5. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and the increase in calculation time caused by the growing size of continuous optimization problems remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables approaching their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be regarded as an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatching in electric power supply scheduling) are also described as a practical industrial application.

  6. Fundamentals and techniques of nonimaging optics research

    NASA Astrophysics Data System (ADS)

    Winston, R.; Ogallagher, J.

    1987-07-01

    Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation and its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPC's) and their variants and the so-called Flow-Line or trumpet concentrator derived from the geometric vector flux formalism developed under this program. Applications of these and other such ideal or near-ideal devices permit increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. Present efforts can be classed into two main areas: (1) classical geometrical nonimaging optics, and (2) logical extensions of nonimaging concepts to the physical optics domain.

  7. Fundamentals and techniques of nonimaging optics research at the University of Chicago

    NASA Astrophysics Data System (ADS)

    Winston, R.; Ogallagher, J.

    1986-11-01

    Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation and its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPC's) and their variants and the so-called Flow-Line concentrator derived from the geometric vector flux formalism developed under this program. Applications of these and other such ideal or near-ideal devices permit increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. In the most recent phase, our efforts can be classed into two main areas: (a) "classical" geometrical nonimaging optics, and (b) logical extensions of nonimaging concepts to the physical optics domain.

  8. Development and Testing of Control Laws for the Active Aeroelastic Wing Program

    NASA Technical Reports Server (NTRS)

    Dibley, Ryan P.; Allen, Michael J.; Clarke, Robert; Gera, Joseph; Hodgkinson, John

    2005-01-01

    The Active Aeroelastic Wing research program was a joint program between the U.S. Air Force Research Laboratory and NASA established to investigate the characteristics of an aeroelastic wing and the technique of using wing twist for roll control. The flight test program employed the use of an F/A-18 aircraft modified by reducing the wing torsional stiffness and adding a custom research flight control system. The research flight control system was optimized to maximize roll rate using only wing surfaces to twist the wing while simultaneously maintaining design load limits, stability margins, and handling qualities. NASA Dryden Flight Research Center developed control laws using the software design tool called CONDUIT, which employs a multi-objective function optimization to tune selected control system design parameters. Modifications were made to the Active Aeroelastic Wing implementation in this new software design tool to incorporate the NASA Dryden Flight Research Center nonlinear F/A-18 simulation for time history analysis. This paper describes the design process, including how the control law requirements were incorporated into constraints for the optimization of this specific software design tool. Predicted performance is also compared to results from flight.

  9. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training

    PubMed Central

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R.

    2015-01-01

    Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines. PMID:26417378

  10. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training.

    PubMed

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R; Lorenzetti, Silvio

    2015-01-01

    Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines.

  11. Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises

    PubMed Central

    Grama, Ion; Liu, Quansheng

    2017-01-01

    In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called the Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise. PMID:28692667
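
    For readers unfamiliar with the ROAD detector that ROADGI refines, the sketch below computes the classic ROAD value of each pixel (the sum of the m smallest absolute differences to its 8 neighbours); large values flag likely impulse pixels. The Gaussian-aware ROADGI refinement and the optimal-weights filtering step of the paper are not reproduced.

    ```python
    # Classic ROAD statistic sketch: for each pixel, take the absolute differences
    # to its 8 neighbours and sum the m smallest ones. The ROADGI refinement and
    # the Optimal Weights Mixed Filter itself are not reproduced here.
    import numpy as np

    def road(img, m=4):
        base = img.astype(float)
        padded = np.pad(base, 1, mode="reflect")
        diffs = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = padded[1 + dy:padded.shape[0] - 1 + dy,
                                 1 + dx:padded.shape[1] - 1 + dx]
                diffs.append(np.abs(base - shifted))
        diffs = np.sort(np.stack(diffs, axis=0), axis=0)   # 8 x H x W, ascending
        return diffs[:m].sum(axis=0)

    rng = np.random.default_rng(0)
    clean = rng.normal(128, 5, (64, 64))
    noisy = clean.copy()
    noisy[rng.random(clean.shape) < 0.05] = 255            # salt impulses
    print((road(noisy) > 100).mean())                      # rough fraction of flagged pixels
    ```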

  12. Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.

    PubMed

    Jin, Qiyu; Grama, Ion; Liu, Quansheng

    2017-01-01

    In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called the Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise.

  13. Harnessing color vision for visual oximetry in central cyanosis.

    PubMed

    Changizi, Mark; Rio, Kevin

    2010-01-01

    Central cyanosis refers to a bluish discoloration of the skin, lips, tongue, nails, and mucous membranes, and is due to poor arterial oxygenation. Although skin color is one of its characteristic properties, it has long been realized that by the time skin color signs become visible, oxygen saturation is dangerously low. Here we investigate the visibility of cyanosis in light of recent discoveries on what color vision evolved for in primates. We elucidate why low arterial oxygenation is visible at all, why it is perceived as blue, and why it can be so difficult to perceive. With a better understanding of the relationship between color vision and blood physiology, we suggest two simple techniques for greatly enhancing the clinician's ability to detect cyanosis and other clinical color changes. The first is called "skin-tone adaptation", wherein sheets, gowns, walls and other materials near a patient have a color close to that of the patient's skin, thereby optimizing a color-normal viewer's ability to sense skin color modulations. The second technique is called "biosensor color tabs", wherein adhesive tabs with a color matching the patient's skin tone are placed in several spots on the skin, and subsequent skin color changes have the effect of making the initially-invisible tabs change color, their hue and saturation indicating the direction and magnitude of the skin color shift.

  14. Chopped random-basis quantum optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone

    2011-08-15

    In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
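
    The core CRAB recipe can be illustrated on a toy problem: the control is expanded in a handful of randomly detuned Fourier harmonics and the few expansion coefficients are optimized with a gradient-free direct search. The two-level state-transfer problem below is a stand-in for the many-body (t-DMRG) dynamics treated in the paper.

    ```python
    # Hedged CRAB sketch: truncated, randomly detuned Fourier expansion of the
    # control, optimized by Nelder-Mead on a toy two-level state-transfer problem
    # (a stand-in for the many-body dynamics of the paper).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.linalg import expm

    T, n_steps, n_harm = 1.0, 200, 4
    t = np.linspace(0.0, T, n_steps)
    rng = np.random.default_rng(0)
    detune = 1.0 + 0.2 * rng.standard_normal(n_harm)       # randomized frequencies

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    psi0 = np.array([1, 0], dtype=complex)                  # start in |0>
    target = np.array([0, 1], dtype=complex)                # want |1>

    def pulse(c):                                           # chopped random basis expansion
        u = np.zeros_like(t)
        for k in range(n_harm):
            w_k = 2 * np.pi * (k + 1) * detune[k] / T
            u += c[2 * k] * np.sin(w_k * t) + c[2 * k + 1] * np.cos(w_k * t)
        return u

    def infidelity(c):
        psi, dt = psi0.copy(), t[1] - t[0]
        for uk in pulse(c):                                 # piecewise-constant evolution
            psi = expm(-1j * dt * (sz + uk * sx)) @ psi
        return 1.0 - np.abs(np.vdot(target, psi)) ** 2

    res = minimize(infidelity, x0=0.1 * rng.standard_normal(2 * n_harm),
                   method="Nelder-Mead", options={"maxiter": 2000, "xatol": 1e-4})
    print(res.fun)                                          # final infidelity
    ```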

  15. Strategies for Fermentation Medium Optimization: An In-Depth Review

    PubMed Central

    Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.

    2017-01-01

    Optimization of the production medium is required to maximize metabolite yield. This can be achieved using a wide range of techniques, from the classical “one-factor-at-a-time” approach to modern statistical and mathematical techniques such as artificial neural networks (ANN) and genetic algorithms (GA). Every technique comes with its own advantages and disadvantages, and despite their drawbacks some techniques are still applied to obtain the best results. Using various optimization techniques in combination can also provide desirable results. In this article an attempt has been made to review the media optimization techniques currently applied during fermentation processes for metabolite production. A comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been done, and a logical basis for the selection and design of fermentation media is given in the present review. Overall, this review provides a rationale for selecting a suitable optimization technique for media design during the fermentation process of metabolite production. PMID:28111566

  16. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.

  17. Shape optimization of solid-air porous phononic crystal slabs with widest full 3D bandgap for in-plane acoustic waves

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Luca; Bahr, Bichoy; Daniel, Luca; Weinstein, Dana; Ardito, Raffaele

    2017-09-01

    The use of Phononic Crystals (PnCs) as smart materials in structures and microstructures is growing due to their tunable dynamical properties and to the wide range of possible applications. PnCs are periodic structures that exhibit elastic wave scattering for a certain band of frequencies (called bandgap), depending on the geometric and material properties of the fundamental unit cell of the crystal. PnCs slabs can be represented by plane-extruded structures composed of a single material with periodic perforations. Such a configuration is very interesting, especially in Micro Electro-Mechanical Systems industry, due to the easy fabrication procedure. A lot of topologies can be found in the literature for PnCs with square-symmetric unit cell that exhibit complete 2D bandgaps; however, due to the application demand, it is desirable to find the best topologies in order to guarantee full bandgaps referred to in-plane wave propagation in the complete 3D structure. In this work, by means of a novel and fast implementation of the Bidirectional Evolutionary Structural Optimization technique, shape optimization is conducted on the hole shape obtaining several topologies, also with non-square-symmetric unit cell, endowed with complete 3D full bandgaps for in-plane waves. Model order reduction technique is adopted to reduce the computational time in the wave dispersion analysis. The 3D features of the PnC unit cell endowed with the widest full bandgap are then completely analyzed, paying attention to engineering design issues.

  18. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design

    NASA Astrophysics Data System (ADS)

    Tiwari, Shivendra N.; Padhi, Radhakant

    2018-01-01

    Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Then, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised single network adaptive critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including comparison studies with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.

  19. Analytical dual-energy microtomography: A new method for obtaining three-dimensional mineral phase images and its application to Hayabusa samples

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, A.; Nakano, T.; Uesugi, K.; Uesugi, M.; Takeuchi, A.; Suzuki, Y.; Noguchi, R.; Matsumoto, T.; Matsuno, J.; Nagano, T.; Imai, Y.; Nakamura, T.; Ogami, T.; Noguchi, T.; Abe, M.; Yada, T.; Fujimura, A.

    2013-09-01

    We developed a novel technique called "analytical dual-energy microtomography" that uses the linear attenuation coefficients (LACs) of minerals at two different X-ray energies to nondestructively obtain three-dimensional (3D) images of mineral distribution in materials such as rock specimens. The two energies are above and below the absorption edge energy of an abundant element, which we call the "index element". The chemical compositions of minerals forming solid solution series can also be measured. The optimal size of a sample is of the order of the inverse of the LAC values at the X-ray energies used. We used synchrotron-based microtomography with an effective spatial resolution of >200 nm to apply this method to small particles (30-180 μm) collected from the surface of asteroid 25143 Itokawa by the Hayabusa mission of the Japan Aerospace Exploration Agency (JAXA). A 3D distribution of the minerals was successively obtained by imaging the samples at X-ray energies of 7 and 8 keV, using Fe as the index element (the K-absorption edge of Fe is 7.11 keV). The optimal sample size in this case is of the order of 50 μm. The chemical compositions of the minerals, including the Fe/Mg ratios of ferromagnesian minerals and the Na/Ca ratios of plagioclase, were measured. This new method is potentially applicable to other small samples such as cosmic dust, lunar regolith, cometary dust (recovered by the Stardust mission of the National Aeronautics and Space Administration [NASA]), and samples from extraterrestrial bodies (those from future sample return missions such as the JAXA Hayabusa2 mission and the NASA OSIRIS-REx mission), although limitations exist for unequilibrated samples. Further, this technique is generally suited for studying materials in multicomponent systems with multiple phases across several research fields.
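
    For illustration only (this is not the authors' reconstruction pipeline), the sketch below shows the phase-identification idea behind the method: each voxel's pair of measured LACs, taken below and above the index element's absorption edge, is matched against reference LAC pairs for candidate minerals. The mineral names and LAC values are placeholders.

```python
import numpy as np

# Hypothetical reference linear attenuation coefficients (cm^-1) at the two
# energies bracketing the Fe K-edge; values are placeholders, not measured data.
reference_lacs = {
    "olivine":     (60.0, 180.0),   # (LAC below edge, LAC above edge)
    "pyroxene":    (50.0, 150.0),
    "plagioclase": (30.0, 45.0),
}

def classify_voxel(lac_below, lac_above):
    """Assign a voxel to the mineral whose reference LAC pair is nearest
    in the 2D (LAC below edge, LAC above edge) space."""
    best, best_dist = None, float("inf")
    for mineral, (ref_lo, ref_hi) in reference_lacs.items():
        dist = (lac_below - ref_lo) ** 2 + (lac_above - ref_hi) ** 2
        if dist < best_dist:
            best, best_dist = mineral, dist
    return best

# Example: classify a tiny synthetic volume voxel by voxel.
vol_below = np.array([[[58.0, 31.0]]])
vol_above = np.array([[[176.0, 44.0]]])
phase_map = np.vectorize(classify_voxel)(vol_below, vol_above)
print(phase_map)   # [[['olivine' 'plagioclase']]]
```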

  20. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities, including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed methods are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  1. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches-Optimization by Linear Decomposition and Collaborative Optimization-are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  2. Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.

    PubMed

    Wang, Haizhou; Song, Mingzhou

    2011-12-01

    The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
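
    A minimal sketch of the underlying idea, exact 1D k-means by dynamic programming over sorted data, is shown below. It is an O(kn²) illustration in Python, not the Ckmeans.1d.dp implementation itself, which uses a faster formulation.

```python
import numpy as np

def optimal_kmeans_1d(x, k):
    """Exact 1D k-means by dynamic programming on sorted data."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Prefix sums give O(1) within-cluster sum-of-squares for x[i..j].
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def ssq(i, j):                      # cost of one cluster covering x[i..j]
        m = j - i + 1
        seg_sum = s1[j + 1] - s1[i]
        return (s2[j + 1] - s2[i]) - seg_sum * seg_sum / m

    D = np.full((k + 1, n + 1), np.inf)   # D[c][j]: best cost of first j points in c clusters
    B = np.zeros((k + 1, n + 1), dtype=int)
    D[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):     # last cluster covers x[i..j-1]
                cand = D[c - 1][i] + ssq(i, j - 1)
                if cand < D[c][j]:
                    D[c][j], B[c][j] = cand, i
    # Backtrack the cluster boundaries.
    bounds, j = [], n
    for c in range(k, 0, -1):
        i = B[c][j]
        bounds.append((i, j - 1))
        j = i
    return D[k][n], list(reversed(bounds))

cost, clusters = optimal_kmeans_1d([1.0, 1.1, 1.2, 5.0, 5.1, 9.0], k=3)
print(cost, clusters)   # ~0.025, clusters over sorted indices: [(0, 2), (3, 4), (5, 5)]
```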

  3. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  4. Summary of Optimization Techniques That Can Be Applied to Suspension System Design

    DOT National Transportation Integrated Search

    1973-03-01

    Summaries are presented of the analytic techniques available for three levitated vehicle suspension optimization problems: optimization of passive elements for fixed configuration; optimization of a free passive configuration; optimization of a free ...

  5. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, the wild bootstrap was proposed, which can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like the wild bootstrap, the residual bootstrap is applicable to a single-acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performance of these bootstrap approaches was compared in terms of the bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that the residual bootstrap has smaller biases and overall errors, enabling estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
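
    To illustrate the model-based (residual) resampling idea in a generic setting, the sketch below applies a residual bootstrap to an ordinary linear regression; it ignores the non-constant variance and weighted least squares fit that the DTI application requires, and it is not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_bootstrap(X, y, n_boot=1000):
    """Generic residual bootstrap for a linear model y = X b + e:
    refit the model on fitted values plus resampled residuals to estimate
    the standard errors of the coefficients."""
    b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ b_hat
    resid = y - fitted
    resid = resid - resid.mean()          # center so resampled errors have zero mean
    boot_estimates = np.empty((n_boot, X.shape[1]))
    for i in range(n_boot):
        y_star = fitted + rng.choice(resid, size=len(y), replace=True)
        boot_estimates[i], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return b_hat, boot_estimates.std(axis=0)   # point estimate, bootstrap SEs

# Toy example: noisy line.
x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + rng.normal(scale=0.2, size=x.size)
b, se = residual_bootstrap(X, y)
print(b, se)
```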

  6. Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-09-01

    Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.

  7. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.

  8. Adaptive infinite impulse response system identification using modified-interior search algorithm with Lèvy flight.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva

    2017-03-01

    In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lèvy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied to address nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single-parameter tuning, and does not require derivative information because it uses a stochastic random search based on the concepts of Lèvy flight. A proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, the mean square error (MSE), computation time and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmark IIR systems using same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples, and the simulation results are compared. The obtained results confirm the efficiency of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
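
    The sketch below only illustrates a Lèvy-flight perturbation (generated with Mantegna's algorithm) inside a plain random search for IIR coefficients; it is not the M-ISA update itself, and the plant, step size, and iteration budget are arbitrary choices.

```python
import numpy as np
from math import gamma, sin, pi
from scipy.signal import lfilter

rng = np.random.default_rng(1)

def levy_step(size, beta=1.5):
    """Lèvy-flight step drawn via Mantegna's algorithm."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# Unknown IIR plant (to be identified) and excitation/response data.
b_true, a_true = [0.3, -0.4], [1.0, -0.5]
x = rng.normal(size=2000)
d = lfilter(b_true, a_true, x)

def mse(params):
    b, a = params[:2], np.concatenate(([1.0], params[2:]))
    return np.mean((d - lfilter(b, a, x)) ** 2)

# Plain Lèvy-flight random search (illustrative stand-in for the M-ISA update).
best = rng.normal(scale=0.1, size=3)           # [b0, b1, a1]
best_cost = mse(best)
for _ in range(2000):
    cand = best + 0.01 * levy_step(3)
    c = mse(cand)
    if c < best_cost:
        best, best_cost = cand, c
print(best, best_cost)
```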

  9. Quantum computing gates via optimal control

    NASA Astrophysics Data System (ADS)

    Atia, Yosi; Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2014-10-01

    We demonstrate the use of optimal control to design two entropy-manipulating quantum gates which are more complex than the corresponding, commonly used, gates, such as CNOT and Toffoli (CCNOT): A two-qubit gate called polarization exchange (PE) and a three-qubit gate called polarization compression (COMP) were designed using GRAPE, an optimal control algorithm. Both gates were designed for a three-spin system. Our design provided efficient and robust nuclear magnetic resonance (NMR) radio frequency (RF) pulses for 13C2-trichloroethylene (TCE), our chosen three-spin system. We then experimentally applied these two quantum gates onto TCE at the NMR lab. Such design of these gates and others could be relevant for near-future applications of quantum computing devices.

  10. Ligand-protein docking using a quantum stochastic tunneling optimization method.

    PubMed

    Mancera, Ricardo L; Källblad, Per; Todorov, Nikolay P

    2004-04-30

    A novel hybrid optimization method called quantum stochastic tunneling has been recently introduced. Here, we report its implementation within a new docking program called EasyDock and a validation with the CCDC/Astex data set of ligand-protein complexes using the PLP score to represent the ligand-protein potential energy surface and ScreenScore to score the ligand-protein binding energies. When taking the top energy-ranked ligand binding mode pose, we were able to predict the correct crystallographic ligand binding mode in up to 75% of the cases. By using this novel optimization method run times for typical docking simulations are significantly shortened. Copyright 2004 Wiley Periodicals, Inc. J Comput Chem 25: 858-864, 2004

  11. A cross-correlation search for intermediate-duration gravitational waves from GRB magnetars

    NASA Astrophysics Data System (ADS)

    Coyne, Robert

    2015-04-01

    Since the discovery of the afterglow in 1997, the progress made in our understanding of gamma-ray bursts (GRBs) has been spectacular. Yet a direct proof of GRB progenitors is still missing. In the last few years, evidence for a long-lived and sustained central engine in GRBs has mounted. This has called attention to the so-called millisecond-magnetar model, which proposes that a highly magnetized, rapidly-rotating neutron star may exist at the heart of some of these events. The advent of advanced gravitational wave detectors such as LIGO and Virgo may enable us to probe directly, for the first time, the nature of GRB progenitors and their byproducts. In this context, we describe a novel application of a generalized cross-correlation technique optimized for the detection of long-duration gravitational wave signals that may be associated with bar-like deformations of GRB magnetars. The detection of these signals would allow us to answer some of the most intriguing questions on the nature of GRB progenitors, and serve as a starting point for a new class of intermediate-duration gravitational wave searches.

  12. Fuzzy Mixed Assembly Line Sequencing and Scheduling Optimization Model Using Multiobjective Dynamic Fuzzy GA

    PubMed Central

    Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari

    2014-01-01

    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total makespan and the setup number simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy that are more representative of real-case data. An improved genetic algorithm called the fuzzy adaptive genetic algorithm (FAGA) is proposed in order to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised, in which a fuzzy expert experience controller (FEEC) is integrated with an automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate, compared with using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of the five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing on a multiobjective fuzzy mixed production assembly line sequencing optimization problem. The simulation results highlight that the proposed optimization algorithm is more efficient than the standard genetic algorithm on the mixed assembly line sequencing model. PMID:24982962

  13. Strategies and trajectories of coral reef fish larvae optimizing self-recruitment.

    PubMed

    Irisson, Jean-Olivier; LeVan, Anselme; De Lara, Michel; Planes, Serge

    2004-03-21

    Like many marine organisms, most coral reef fishes have a dispersive larval phase. The fate of this phase is of great concern for their ecology as it may determine population demography and connectivity. As direct study of the larval phase is difficult, we tackle the question of dispersion from an opposite point of view and study self-recruitment. In this paper, we propose a mathematical model of the pelagic phase, parameterized by a limited number of factors (currents, predator and prey distributions, energy budgets) and which focuses on the behavioral response of the larvae to these factors. We evaluate optimal behavioral strategies of the larvae (i.e. strategies that maximize the probability of return to the natal reef) and examine the trajectories of dispersal that they induce. Mathematically, larval behavior is described by a controlled Markov process. A strategy induces a sequence, indexed by time steps, of "decisions" (e.g. looking for food, swimming in a given direction). Biological, physical and topographic constraints are captured through the transition probabilities and the sets of possible decisions. Optimal strategies are found by means of the so-called stochastic dynamic programming equation. A computer program is developed and optimal decisions and trajectories are numerically derived. We conclude that this technique can be considered as a good tool to represent plausible larval behaviors and that it has great potential in terms of theoretical investigations and also for field applications.
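
    A toy sketch of the backward stochastic dynamic programming recursion for a controlled Markov process maximizing the probability of ending at a target ("reef") state is given below; the states, decisions, transition probabilities, and horizon are invented for illustration and are far simpler than the model in the paper.

```python
import numpy as np

# Toy controlled Markov chain: states 0..2, state 2 = "natal reef" (absorbing goal).
# P[a] is the transition matrix under decision a (e.g. a=0 "drift/feed", a=1 "swim home").
P = np.array([
    [[0.7, 0.2, 0.1],    # decision 0
     [0.3, 0.5, 0.2],
     [0.0, 0.0, 1.0]],
    [[0.4, 0.3, 0.3],    # decision 1 (costlier in reality; this toy ignores energy budgets)
     [0.1, 0.4, 0.5],
     [0.0, 0.0, 1.0]],
])
T = 10                   # number of time steps in the pelagic phase
n_states, n_actions = 3, 2

# V[t, s] = max probability of being at the reef at time T, starting from s at time t.
V = np.zeros((T + 1, n_states))
V[T] = np.array([0.0, 0.0, 1.0])       # terminal reward: 1 only at the reef
policy = np.zeros((T, n_states), dtype=int)

for t in range(T - 1, -1, -1):         # backward induction (dynamic programming)
    for s in range(n_states):
        values = [P[a, s] @ V[t + 1] for a in range(n_actions)]
        policy[t, s] = int(np.argmax(values))
        V[t, s] = max(values)

print(V[0])        # success probability from each starting state
print(policy[0])   # optimal first decision per state
```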

  14. Optimized energy harvesting from mechanical vibrations through piezoelectric actuators, based on a synchronized switching technique

    NASA Astrophysics Data System (ADS)

    Tsampas, P.; Roditis, G.; Papadimitriou, V.; Chatzakos, P.; Gan, Tat-Hean

    2013-05-01

    Increasing demand for mobile, autonomous devices has made energy harvesting a particular point of interest. Systems that can be powered by a few hundred microwatts could feature their own energy extraction module. Energy can be harvested from the environment close to the device. In particular, the conversion of ambient mechanical vibrations via piezoelectric transducers is one of the most investigated fields of energy harvesting. A technique for optimized energy harvesting using piezoelectric actuators, called "Synchronized Switching Harvesting", is explored. Compared to a typical full-bridge rectifier, the proposed harvesting technique can greatly improve harvesting efficiency, even in a significantly extended frequency window around the piezoelectric actuator's resonance. In this paper, the design concept, theoretical analysis, modeling, implementation and experimental results using CEDRAT's APA 400M-MD piezoelectric actuator are presented in detail. Moreover, we suggest design guidelines for optimum selection of the storage unit in direct relation to the characteristics of the random vibrations. From a practical standpoint, the harvesting unit is based on dedicated electronics that continuously sense the charge level of the actuator's piezoelectric element. When the charge is sensed to have reached a maximum, it is directed to flow rapidly into a storage unit. Special care is taken so that the electronics operate at low voltages, consuming a very small amount of the stored energy. The final prototype includes the harvesting circuit, implemented with miniaturized, low-cost and low-consumption electronics, and a storage unit consisting of a supercapacitor array, forming a truly self-powered system drawing energy from ambient random vibrations with a wide range of characteristics.

  15. Inclusion of tank configurations as a variable in the cost optimization of branched piped-water networks

    NASA Astrophysics Data System (ADS)

    Hooda, Nikhil; Damani, Om

    2017-06-01

    The classic problem of the capital cost optimization of branched piped networks consists of choosing pipe diameters for each pipe in the network from a discrete set of commercially available pipe diameters. Each pipe in the network can consist of multiple segments of differing diameters. Water networks also consist of intermediate tanks that act as buffers between incoming flow from the primary source and the outgoing flow to the demand nodes. The network from the primary source to the tanks is called the primary network, and the network from the tanks to the demand nodes is called the secondary network. During the design stage, the primary and secondary networks are optimized separately, with the tanks acting as demand nodes for the primary network. Typically the choice of tank locations, their elevations, and the set of demand nodes to be served by different tanks is manually made in an ad hoc fashion before any optimization is done. It is desirable therefore to include this tank configuration choice in the cost optimization process itself. In this work, we explain why the choice of tank configuration is important to the design of a network and describe an integer linear program model that integrates the tank configuration to the standard pipe diameter selection problem. In order to aid the designers of piped-water networks, the improved cost optimization formulation is incorporated into our existing network design system called JalTantra.
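
    As a rough, hypothetical illustration of the linear-programming core of such formulations (not the JalTantra model itself, which also carries binary variables for the tank configuration), the sketch below chooses segment lengths of commercially available diameters for a single link so that cost is minimized while the head loss stays within an assumed budget; all numbers are placeholders.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

# Toy single-link version: pick how many metres of each commercial diameter to
# use so that cost is minimized while head loss over the link stays within the
# available head. Costs and head-loss rates are invented placeholder values;
# the full model repeats this block per pipe and adds tank-configuration binaries.
diameters = {
    # name: (cost per metre, head loss per metre at the design flow)
    "d100": (350.0, 0.0120),
    "d150": (520.0, 0.0017),
    "d200": (760.0, 0.0004),
}
pipe_length = 1200.0      # metres
available_head = 6.0      # metres of head that may be lost on this link

prob = LpProblem("pipe_segment_selection", LpMinimize)
seg = {d: LpVariable(f"len_{d}", lowBound=0) for d in diameters}

prob += lpSum(diameters[d][0] * seg[d] for d in diameters)                # cost objective
prob += lpSum(seg[d] for d in diameters) == pipe_length                   # segments fill the pipe
prob += lpSum(diameters[d][1] * seg[d] for d in diameters) <= available_head

prob.solve()
print({d: seg[d].value() for d in diameters}, value(prob.objective))
```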

  16. Microemulsion-based lycopene extraction: Effect of surfactants, co-surfactants and pretreatments.

    PubMed

    Amiri-Rigi, Atefeh; Abbasi, Soleiman

    2016-04-15

    Lycopene is a potent antioxidant that has received extensive attention recently. Due to the challenges encountered with current methods of lycopene extraction using hazardous solvents, industry calls for a greener, safer and more efficient process. The main purpose of the present study was the application of the microemulsion technique to extract lycopene from tomato pomace. In this respect, the effect of eight different surfactants, four different co-surfactants, and ultrasound and enzyme pretreatments on lycopene extraction efficiency was examined. Experimental results revealed that the application of combined ultrasound and enzyme pretreatments, saponin as a natural surfactant, and glycerol as a co-surfactant, in the bicontinuous region of the microemulsion, constituted the optimal experimental conditions, resulting in a microemulsion containing 409.68±0.68 μg/g lycopene. The high lycopene concentration achieved indicates that the microemulsion technique, using a low-cost natural surfactant, could be promising for a simple and safe separation of lycopene from tomato pomace and possibly from tomato industrial wastes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. A 3D Model for Eddy Current Inspection in Aeronautics: Application to Riveted Structures

    NASA Astrophysics Data System (ADS)

    Paillard, S.; Pichenot, G.; Lambert, M.; Voillaume, H.; Dominguez, N.

    2007-03-01

    The eddy current technique is currently an operational tool for fastener inspection, which is an important issue for the maintenance of aircraft structures. The industry calls for faster, more sensitive and reliable NDT techniques for the detection and characterization of potential flaws near rivets. In order to reduce the development time and to optimize the design and the performance assessment of an inspection procedure, the CEA and EADS have started a collaborative work aimed at extending the modeling features of the CIVA non-destructive simulation platform to handle the configuration of a layered planar structure with a rivet and an embedded flaw nearby. Therefore, an approach based on the Volume Integral Method using the Green dyadic formalism, which greatly increases computational efficiency, has been developed. The first step, modeling the rivet without a flaw as a hole in a multi-stratified structure, has been reached and validated in several configurations with experimental data.

  18. Low-cost capacitor voltage inverter for outstanding performance in piezoelectric energy harvesting.

    PubMed

    Lallart, Mickaël; Garbuio, Lauric; Richard, Claude; Guyomar, Daniel

    2010-01-01

    The purpose of this paper is to propose a new scheme for piezoelectric energy harvesting optimization. The proposed enhancement relies on a new topology for inverting the voltage across a single capacitor with reduced losses. The increase of the inversion quality allows a much more effective energy harvesting process using the so-called synchronized switch harvesting on inductor (SSHI) nonlinear technique. It is shown that the proposed architecture, based on a 2-step inversion, increases the harvested power by a theoretical factor up to square root of 2 (i.e., 40% gain) compared with classical SSHI, allowing an increase of the harvested power by a factor greater than 1000% compared with the standard energy harvesting technique for realistic values of inversion components. The proposed circuit, using only 4 digital switches and an intermediate capacitor, is also ultra-low power, because the inversion circuit does not require any external energy and the command signals are very simple.

  19. Enabling technologies and green processes in cyclodextrin chemistry.

    PubMed

    Cravotto, Giancarlo; Caporaso, Marina; Jicsinszky, Laszlo; Martina, Katia

    2016-01-01

    The design of efficient synthetic green strategies for the selective modification of cyclodextrins (CDs) is still a challenging task. Outstanding results have been achieved in recent years by means of so-called enabling technologies, such as microwaves, ultrasound and ball mills, that have become irreplaceable tools in the synthesis of CD derivatives. Several examples of sonochemical selective modification of native α-, β- and γ-CDs have been reported including heterogeneous phase Pd- and Cu-catalysed hydrogenations and couplings. Microwave irradiation has emerged as the technique of choice for the production of highly substituted CD derivatives, CD grafted materials and polymers. Mechanochemical methods have successfully furnished greener, solvent-free syntheses and efficient complexation, while flow microreactors may well improve the repeatability and optimization of critical synthetic protocols.

  20. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.

    PubMed

    Yokoyama, Jun'ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has heavier tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well in the highly non-Gaussian case.
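
    As a rough illustration of how a heavier-tailed noise model changes the detection statistic (this is not the Edgeworth or Gaussian-mapping construction of the paper), the sketch below compares the Gaussian matched filter with the locally optimal statistic for independent Student-t noise, where the linear weighting is replaced by the t score function.

```python
import numpy as np

def gaussian_matched_filter(x, h):
    """Standard matched-filter statistic for iid unit-variance Gaussian noise."""
    return np.sum(h * x)

def locally_optimal_t(x, h, nu=4.0):
    """Locally optimal statistic for iid Student-t noise (unit scale): the linear
    weighting x is replaced by the score function g(x) = (nu + 1) x / (nu + x^2),
    which suppresses large outliers instead of weighting them linearly."""
    return np.sum(h * (nu + 1.0) * x / (nu + x ** 2))

rng = np.random.default_rng(2)
h = np.sin(2 * np.pi * 0.05 * np.arange(200))   # known signal template
noise = rng.standard_t(df=4, size=200)           # heavy-tailed noise realization
x = 0.5 * h + noise                              # data containing a weak signal
print(gaussian_matched_filter(x, h), locally_optimal_t(x, h))
```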

  1. Addressing Climate Change in Long-Term Water Planning Using Robust Decisionmaking

    NASA Astrophysics Data System (ADS)

    Groves, D. G.; Lempert, R.

    2008-12-01

    Addressing climate change in long-term natural resource planning is difficult because future management conditions are deeply uncertain and the range of possible adaptation options is so extensive. These conditions pose challenges to standard optimization decision-support techniques. This talk will describe a methodology called Robust Decisionmaking (RDM) that can complement more traditional analytic approaches by utilizing screening-level water management models to evaluate large numbers of strategies against a wide range of plausible future scenarios. The presentation will describe a recent application of the methodology to evaluate climate adaptation strategies for the Inland Empire Utilities Agency in Southern California. This project found that RDM can provide a useful way to address climate change uncertainty and to identify robust adaptation strategies.

  2. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  3. CO2 Capture Using Electric Fields: Low-Cost Electrochromic Film on Plastic for Net-Zero Energy Building

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-01

    Broad Funding Opportunity Announcement Project: Two faculty members at Lehigh University created a new technique called supercapacitive swing adsorption (SSA) that uses electrical charges to encourage materials to capture and release CO2. Current CO2 capture methods include expensive processes that involve changes in temperature or pressure. Lehigh University’s approach uses electric fields to improve the ability of inexpensive carbon sorbents to trap CO2. Because this process uses electric fields and not electric current, the overall energy consumption is projected to be much lower than conventional methods. Lehigh University is now optimizing the materials to maximize CO2 capture and minimize the energy needed for the process.

  4. A numerical study of different projection-based model reduction techniques applied to computational homogenisation

    NASA Astrophysics Data System (ADS)

    Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia

    2017-10-01

    Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied for the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein three methods for hyper-reduction, differing in how the nonlinearity is approximated and the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors- GNAT) is favoured to obtain an optimal projection and a robust reduced model.
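
    A minimal sketch of the snapshot-POD step that underlies the reduced-order models compared here (the hyper-reduction and GNAT projection are not shown, and the snapshot data are synthetic):

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Snapshot POD: SVD of the snapshot matrix, truncated so that the retained
    singular values capture the requested fraction of the snapshot 'energy'."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], s

# Toy snapshot matrix: columns are micro-scale solution vectors for different loadings.
rng = np.random.default_rng(3)
modes = rng.normal(size=(500, 3))                 # three "true" underlying modes
coeffs = rng.normal(size=(3, 40))
snapshots = modes @ coeffs + 1e-6 * rng.normal(size=(500, 40))
Phi, s = pod_basis(snapshots)
print(Phi.shape)                                  # (500, 3): reduced basis
# The Galerkin-reduced unknowns would then be sought in span(Phi): u ≈ Phi @ q.
```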

  5. Performance Assessment of Different Pulse Reconstruction Algorithms for the ATHENA X-Ray Integral Field Unit

    NASA Technical Reports Server (NTRS)

    Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; Den Haretog, Roland; de Plaa, Jelle

    2016-01-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter, on-board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performances, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high-count-rate capabilities.
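
    For orientation, the sketch below shows the standard optimal filtering idea in its simplest, white-noise form, where it reduces to a least-squares amplitude fit against a calibrated pulse template; the actual X-IFU processing uses the measured noise spectrum or covariance, and the template and numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy calibrated pulse template (arbitrary shape) and a noisy event record.
t = np.arange(1024)
template = np.exp(-t / 200.0) - np.exp(-t / 20.0)
template /= np.max(template)
true_amplitude = 3.7
record = true_amplitude * template + rng.normal(scale=0.05, size=t.size)

# Under a white-noise assumption, optimal filtering reduces to a least-squares
# amplitude fit against the template; a calibration factor would then convert
# the fitted amplitude to photon energy.
amplitude_hat = (template @ record) / (template @ template)
print(amplitude_hat)    # close to 3.7
```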

  6. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
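
    As a simplified illustration of the figure of merit (not the optimal stopping formulation of the paper), the sketch below estimates the expected total cost of a naive stopping rule (keep calling a randomized solver until a run beats a threshold), where each call incurs a fixed cost; the stand-in solver and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def solver_run():
    """Stand-in for one call to a randomized solver: returns an objective value.
    (In a real benchmark this would be one run of, e.g., simulated annealing.)"""
    return rng.normal(loc=1.0, scale=0.5)

def expected_cost(threshold, cost_per_call, n_trials=10000):
    """Monte Carlo estimate of the expected total cost of the rule
    'keep calling until a run achieves objective <= threshold', where the
    total cost is the best objective found plus cost_per_call per call."""
    totals = []
    for _ in range(n_trials):
        calls, best = 0, np.inf
        while best > threshold:
            best = min(best, solver_run())
            calls += 1
        totals.append(best + cost_per_call * calls)
    return float(np.mean(totals))

for thr in (0.8, 0.5, 0.2):
    print(thr, expected_cost(thr, cost_per_call=0.05))
```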

  7. Quantitative evaluation of manufacturability and performance for ILT produced mask shapes using a single-objective function

    NASA Astrophysics Data System (ADS)

    Choi, Heon; Wang, Wei-long; Kallingal, Chidam

    2015-03-01

    The continuous scaling of semiconductor devices is quickly outpacing the resolution improvements of lithographic exposure tools and processes. This one-sided progression has pushed optical lithography to its limits, resulting in the use of well-known techniques such as Sub-Resolution Assist Features (SRAFs), Source-Mask Optimization (SMO), and double-patterning, to name a few. These techniques, belonging to a larger category of Resolution Enhancement Techniques (RET), have extended the resolution capabilities of optical lithography at the cost of increasing mask complexity, and therefore cost. One such technique, called Inverse Lithography Technique (ILT), has attracted much attention for its ability to produce the best possible theoretical mask design. ILT treats the mask design process as an inverse problem, where the known transformation from mask to wafer is carried out backwards using a rigorous mathematical approach. One practical problem in the application of ILT is the resulting contour-like mask shapes that must be "Manhattanized" (composed of straight edges and 90-deg corners) in order to produce a manufacturable mask. This conversion process inherently degrades the mask quality as it is a departure from the "optimal mask" represented by the continuously curved shapes produced by ILT. However, simpler masks composed of longer straight edges reduce the mask cost, as they lower the shot count and save mask writing time during mask fabrication, resulting in a conflict between manufacturability and performance for ILT produced masks [1,2]. In this study, various commonly used metrics will be combined into an objective function to produce a single number to quantitatively measure a particular ILT solution's ability to balance mask manufacturability and RET performance. Several metrics that relate to mask manufacturing costs (e.g., mask vertex count, ILT computation runtime) are appropriately weighted against metrics that represent RET capability (e.g., process-variation band, edge-placement error) in order to reflect the desired practical balance. This well-defined scoring system allows direct comparison of several masks with varying degrees of complexity. Using this method, ILT masks produced with increasing mask constraints will be compared, and it will be demonstrated that using the smallest minimum width for mask shapes does not always produce the optimal solution.
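
    A sketch of the kind of weighted single-number score described above is shown below; the metric names, normalizations, and weights are placeholders, not the values used in the study.

```python
# Hypothetical metric values for one ILT mask solution; all names and numbers
# are placeholders used only to show the weighted-sum scoring idea.
metrics = {
    "vertex_count":            18500,    # mask-cost metrics (lower is better)
    "ilt_runtime_hours":       6.2,
    "pv_band_nm":              4.1,      # RET-performance metrics (lower is better)
    "edge_placement_error_nm": 1.3,
}

# Reference values normalize each metric to a comparable scale; weights reflect
# the desired manufacturability/performance balance.
reference = {"vertex_count": 20000, "ilt_runtime_hours": 8.0,
             "pv_band_nm": 5.0, "edge_placement_error_nm": 2.0}
weights = {"vertex_count": 0.25, "ilt_runtime_hours": 0.15,
           "pv_band_nm": 0.35, "edge_placement_error_nm": 0.25}

def mask_score(metrics):
    """Single objective: weighted sum of normalized metrics (lower is better),
    so masks of very different complexity can be compared with one number."""
    return sum(weights[k] * metrics[k] / reference[k] for k in metrics)

print(round(mask_score(metrics), 3))
```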

  8. Convergent evolution of mechanically optimal locomotion in aquatic invertebrates and vertebrates.

    PubMed

    Bale, Rahul; Neveln, Izaak D; Bhalla, Amneet Pal Singh; MacIver, Malcolm A; Patankar, Neelesh A

    2015-04-01

    Examples of animals evolving similar traits despite the absence of that trait in the last common ancestor, such as the wing and camera-type lens eye in vertebrates and invertebrates, are called cases of convergent evolution. Instances of convergent evolution of locomotory patterns that quantitatively agree with the mechanically optimal solution are very rare. Here, we show that, with respect to a very diverse group of aquatic animals, a mechanically optimal method of swimming with elongated fins has evolved independently at least eight times in both vertebrate and invertebrate swimmers across three different phyla. Specifically, if we take the length of an undulation along an animal's fin during swimming and divide it by the mean amplitude of undulations along the fin length, the result is consistently around twenty. We call this value the optimal specific wavelength (OSW). We show that the OSW maximizes the force generated by the body, which also maximizes swimming speed. We hypothesize a mechanical basis for this optimality and suggest reasons for its repeated emergence through evolution.
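
    The quantity itself is a simple ratio, for example (numbers are illustrative only):

```python
# Optimal specific wavelength (OSW): undulation wavelength along the fin
# divided by the mean undulation amplitude.
wavelength_cm = 4.0        # length of one undulation along the fin
mean_amplitude_cm = 0.2    # mean amplitude of the undulation
osw = wavelength_cm / mean_amplitude_cm
print(osw)                 # ~20, the value reported across the surveyed swimmers
```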

  9. Three Reading Comprehension Strategies: TELLS, Story Mapping, and QARs.

    ERIC Educational Resources Information Center

    Sorrell, Adrian L.

    1990-01-01

    Three reading comprehension strategies are presented to assist learning-disabled students: an advance organizer technique called "TELLS Fact or Fiction" used before reading a passage, a schema-based technique called "Story Mapping" used while reading, and a postreading method of categorizing questions called…

  10. Collective Responsibility, Academic Optimism, and Student Achievement in Taiwan Elementary Schools

    ERIC Educational Resources Information Center

    Wu, Hsin-Chieh

    2012-01-01

    Previous research indicates that collective efficacy, faculty trust in students and parents, and academic emphasis together formed a single latent school construct, called academic optimism. In the U.S., academic optimism has been proven to be a powerful construct that could effectively predict student achievement even after controlling for…

  11. An EGO-like optimization framework for sensor placement optimization in modal analysis

    NASA Astrophysics Data System (ADS)

    Morlier, Joseph; Basile, Aniello; Chiplunkar, Ankit; Charlotte, Miguel

    2018-07-01

    In aircraft design, ground/flight vibration tests are conducted to extract the aircraft's modal parameters (natural frequencies, damping ratios and mode shapes), also known as the modal basis. The main problem in aircraft modal identification is the large number of sensors needed, which increases operational time and costs. The goal of this paper is to minimize the number of sensors by optimizing their locations in order to reconstruct a truncated modal basis of N mode shapes with a high level of reconstruction accuracy. There are several methods for solving sensor placement optimization (SPO) problems, but here an original approach has been established, based on an iterative process for mode shape reconstruction through an adaptive Kriging metamodeling approach, so-called efficient global optimization (EGO)-SPO. The main idea in this publication is to solve an optimization problem where the sensor locations are the variables and the objective function is defined by maximizing the trace of the so-called AutoMAC criterion. The results on a 2D wing demonstrate a 30% reduction in the number of sensors using our EGO-SPO strategy.
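
    For reference, the sketch below evaluates the AutoMAC matrix of a truncated mode-shape basis restricted to a candidate sensor subset, a common ingredient of SPO objectives; the mode shapes and sensor locations are synthetic, and the exact EGO-SPO objective is only summarized in this abstract.

```python
import numpy as np

def automac(Phi):
    """AutoMAC matrix of a mode-shape matrix Phi (rows = sensor DOFs,
    columns = modes): MAC(i, j) = |phi_i^T phi_j|^2 / (|phi_i|^2 |phi_j|^2)."""
    G = Phi.T @ Phi
    norms = np.diag(G)
    return (G ** 2) / np.outer(norms, norms)

# Toy mode shapes on a fine grid and a candidate sensor subset.
x = np.linspace(0, 1, 200)
Phi_full = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(4)])
sensors = [20, 60, 110, 150, 180]          # candidate sensor locations (row indices)
mac = automac(Phi_full[sensors, :])
# Well-chosen sensors keep the off-diagonal terms small, i.e. the truncated
# modes remain distinguishable from the sparse measurements.
print(np.round(mac, 2))
```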

  12. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  13. Organizational Decision Making

    DTIC Science & Technology

    1975-08-01

    the lack of formal techniques typically used by large organizations, digress on the advantages of formal over informal ... optimization; for example, one might do a number of optimization calculations, each time using a different measure of effectiveness as the optimized ... final decision. The next level of computer application involves the use of computerized optimization techniques. Optimization

  14. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. Here, from simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  15. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE PAGES

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    2017-01-31

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. Here, from simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  16. Optimal Analyses for 3×n AB Games in the Worst Case

    NASA Astrophysics Data System (ADS)

    Huang, Li-Te; Lin, Shun-Shii

    The past decades have witnessed a growing interest in research on deductive games such as Mastermind and AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. However, traditional tree-search approaches are not appropriate for solving this problem since it can only solve instances with smaller n. For larger values of n, a systematic approach is necessary. Therefore, intensive analyses of playing 3×n AB games in the worst case optimally are conducted and a sophisticated method, called structural reduction, which aims at explaining the worst situation in this game is developed in the study. Furthermore, a worthwhile formula for calculating the optimal numbers of guesses required for arbitrary values of n is derived and proven to be final.

  17. Optimization of an exchange-correlation density functional for water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritz, Michelle; Fernández-Serra, Marivi; Institute for Advanced Computational Science, Stony Brook University, Stony Brook, New York 11794-3800

    2016-06-14

    We describe a method, that we call data projection onto parameter space (DPPS), to optimize an energy functional of the electron density, so that it reproduces a dataset of experimental magnitudes. Our scheme, based on Bayes theorem, constrains the optimized functional not to depart unphysically from existing ab initio functionals. The resulting functional maximizes the probability of being the “correct” parameterization of a given functional form, in the sense of Bayes theory. The application of DPPS to water sheds new light on why density functional theory has performed rather poorly for liquid water, on what improvements are needed, and on the intrinsic limitations of the generalized gradient approximation to electron exchange and correlation. Finally, we present tests of our water-optimized functional, that we call vdW-DF-w, showing that it performs very well for a variety of condensed water systems.
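
    As a loose illustration of the Bayesian idea described here (not the actual DPPS scheme or any exchange-correlation functional), the sketch below fits toy model parameters to synthetic data while a Gaussian prior anchors them to a reference parameterization, i.e., a maximum a posteriori estimate:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Toy "functional": a model with two parameters, evaluated on some inputs.
def model(params, x):
    a, b = params
    return a * np.exp(-b * x)

x = np.linspace(0.0, 3.0, 25)
p_reference = np.array([1.0, 0.8])    # stand-in for an existing reference parameterization
data = model([1.15, 0.9], x) + rng.normal(scale=0.02, size=x.size)   # "experimental" magnitudes

sigma_data = 0.02     # assumed data uncertainty
sigma_prior = 0.1     # how far the parameters may plausibly depart from the reference

def neg_log_posterior(params):
    # Gaussian likelihood of the data plus a Gaussian prior centred on p_reference:
    # minimizing this gives the maximum a posteriori (MAP) parameters.
    misfit = np.sum((data - model(params, x)) ** 2) / (2 * sigma_data ** 2)
    prior = np.sum((params - p_reference) ** 2) / (2 * sigma_prior ** 2)
    return misfit + prior

result = minimize(neg_log_posterior, x0=p_reference)
print(result.x)       # pulled toward the data, but anchored near p_reference
```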

  18. Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings

    NASA Astrophysics Data System (ADS)

    Mader, Charles Alexander

    A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.

  19. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
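
    A rough sketch of the pre-scaling plus monotone interpolation idea follows; SciPy does not provide a Steffen spline, so the monotonicity-preserving PCHIP interpolant is used here as a stand-in, and the sampled ELF values are invented.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical sampled energy-loss-function data (energies in eV): unevenly
# spaced and spanning several decades, as empirical optical data typically are.
energy = np.array([1.0, 2.0, 5.0, 20.0, 80.0, 300.0, 1500.0])
elf    = np.array([0.02, 0.15, 0.9, 2.5, 0.6, 0.08, 0.01])

# Interpolate in log-log space to even out the sample distribution, then map back.
log_interp = PchipInterpolator(np.log(energy), np.log(elf))

def elf_fit(e):
    return np.exp(log_interp(np.log(e)))

e_query = np.logspace(0, np.log10(1500.0), 8)
print(np.round(elf_fit(e_query), 3))
```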

  20. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can estimate particle size distributions (PSDs) better than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively resolve several key issues, including the choice of the weighting coefficients, the inversion range, and the optimal inversion method from two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
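
    As a hedged sketch of the nonnegative Tikhonov building block named in the algorithm (not the full WIRNNT-PT procedure, and not the paper's kernel or parameters), a toy single-angle inversion can be written as follows.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Toy linear model g = A f for a DLS-like kernel; sizes, kernel and noise are illustrative
    d = np.linspace(50, 1000, 80)                       # particle diameters [nm]
    tau = np.logspace(-6, -2, 60)                       # correlator lag times [s]
    A = np.exp(-np.outer(1e5 * tau, 1.0 / d))           # smooth decay kernel (60 x 80)

    f_true = np.exp(-0.5 * ((d - 400.0) / 60.0) ** 2)   # unimodal "true" PSD
    g = A @ f_true + 1e-3 * np.random.randn(tau.size)   # noisy measurement

    # Nonnegative Tikhonov step: min ||A f - g||^2 + lam*||f||^2  subject to f >= 0,
    # solved by stacking a scaled identity block and calling NNLS
    lam = 1e-2
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(d.size)])
    g_aug = np.concatenate([g, np.zeros(d.size)])
    f_est, _ = nnls(A_aug, g_aug)
    ```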

  1. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying a single segmentation fusion criterion that provides the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
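
    The decision-making step refers to the standard TOPSIS procedure, which can be sketched generically as follows; the candidate scores, weights and criterion directions below are hypothetical and are not taken from the paper.

    ```python
    import numpy as np

    def topsis(scores, weights, benefit):
        """Rank alternatives (rows) on criteria (columns) by closeness to the ideal solution."""
        norm = scores / np.linalg.norm(scores, axis=0)           # vector-normalize each criterion
        v = norm * weights                                       # weighted normalized matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # best value per criterion
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # worst value per criterion
        d_plus = np.linalg.norm(v - ideal, axis=1)
        d_minus = np.linalg.norm(v - anti, axis=1)
        return d_minus / (d_plus + d_minus)                      # closeness coefficient in [0, 1]

    # Hypothetical candidate solutions scored on two criteria:
    # column 0 = consistency error (lower is better), column 1 = F-measure (higher is better)
    scores = np.array([[0.12, 0.71],
                       [0.09, 0.66],
                       [0.15, 0.80]])
    closeness = topsis(scores, np.array([0.5, 0.5]), np.array([False, True]))
    best = int(np.argmax(closeness))
    ```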

  2. Energy weighting improves dose efficiency in clinical practice: implementation on a spectral photon-counting mammography system

    PubMed Central

    Berglund, Johan; Johansson, Henrik; Lundqvist, Mats; Cederström, Björn; Fredenberg, Erik

    2014-01-01

    Abstract. In x-ray imaging, contrast information content varies with photon energy. It is, therefore, possible to improve image quality by weighting photons according to energy. We have implemented and evaluated so-called energy weighting on a commercially available spectral photon-counting mammography system. The technique was evaluated using computer simulations, phantom experiments, and analysis of screening mammograms. The CNR benefit of energy weighting for a number of relevant target-background combinations measured by the three methods fell in the range of 2.2 to 5.2% when using optimal weight factors. This translates to a potential dose reduction at constant CNR in the range of 4.5 to 11%. We expect the choice of weight factor in practical implementations to be straightforward because (1) the CNR improvement was not very sensitive to weight, (2) the optimal weight was similar for all investigated target-background combinations, (3) aluminum/PMMA phantoms were found to represent clinically relevant tasks well, and (4) the optimal weight could be calculated directly from pixel values in phantom images. Reasonable agreement was found between the simulations and phantom measurements. Manual measurements on microcalcifications and automatic image analysis confirmed that the CNR improvement was detectable in energy-weighted screening mammograms. PMID:26158045
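
    For intuition only, the weighting of two energy bins can be written down with made-up bin statistics; the closed-form weight below assumes two independent bins and is not the calibration procedure used on the commercial system.

    ```python
    import numpy as np

    # Hypothetical per-bin statistics from a phantom image: contrast (target minus
    # background signal) and background variance for the low- and high-energy bins
    contrast = np.array([40.0, 25.0])      # [low bin, high bin]
    variance = np.array([900.0, 400.0])

    def cnr(w):
        """CNR of the weighted image w*low + high."""
        weights = np.array([w, 1.0])
        return weights @ contrast / np.sqrt(weights**2 @ variance)

    # Closed-form optimum for two independent bins: w* = (C_lo * V_hi) / (C_hi * V_lo)
    w_opt = contrast[0] * variance[1] / (contrast[1] * variance[0])
    gain = cnr(w_opt) / cnr(1.0) - 1.0     # relative CNR benefit over unweighted summation
    print(f"optimal weight ~ {w_opt:.2f}, CNR gain ~ {100 * gain:.1f}%")
    ```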

  3. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity in implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.

  4. Classification of holter registers by dynamic clustering using multi-dimensional particle swarm optimization.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Pulkkinen, Jenni; Gabbouj, Moncef

    2010-01-01

    In this paper, we address dynamic clustering in high-dimensional data or feature spaces as an optimization problem in which multi-dimensional particle swarm optimization (MD PSO) is used to find the true number of clusters, while fractional global best formation (FGBF) is applied to avoid local optima. Based on these techniques, we then present a novel and personalized long-term ECG classification system, which addresses the problem of labeling the beats within a long-term ECG signal, known as a Holter register, recorded from an individual patient. Due to the massive number of ECG beats in a Holter register, visual inspection is quite difficult and cumbersome, if not impossible. Therefore, the proposed system helps professionals to quickly and accurately diagnose any latent heart disease by examining only the representative beats (the so-called master key-beats), each of which represents a cluster of homogeneous (similar) beats. We tested the system on a benchmark database in which the beats of each Holter register have been manually labeled by cardiologists. The selection of the right master key-beats is the key factor for achieving a highly accurate classification, and the proposed systematic approach produced results consistent with the manual labels with 99.5% average accuracy, which demonstrates the efficiency of the system.

  5. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    DOE PAGES

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; ...

    2016-07-08

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  6. Borrowing yet another technique from manufacturing, investigators find that 'operational flexibility' can offer dividends to ED operations.

    PubMed

    2015-03-01

    Through the use of a sophisticated modeling technique, investigators at the University of Cincinnati have found that the creation of a so-called "flex track" that includes beds that can be assigned to either high-acuity or low-acuity patients has the potential to lower mean wait times for patients when it is added to the traditional fast-track and high-acuity areas of a 50-bed ED that sees 85,000 patients per year. Investigators used discrete-event simulation to model the patient flow and characteristics of the ED at the University of Cincinnati Medical Center, and to test various operational scenarios without disrupting real-world operations. The investigators concluded that patient wait times were lowest when three flex beds were appropriated from the 10-bed fast-track area of the ED. In light of the results, three flex rooms are being incorporated into a newly remodeled ED scheduled for completion later this spring. Investigators suggest the modeling technique could be useful to other EDs interested in optimizing their operational plans. Further, they suggest that ED administrators consider ways to introduce flexibility into departments that are now more rigidly divided between high- and low-acuity areas.

  7. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy.

    PubMed

    Tremsin, Anton S; Gao, Yan; Dial, Laura C; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  8. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with 100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  9. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  10. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    PubMed Central

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Abstract Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components. PMID:27877885

  11. Ontology-Driven Provenance Management in eScience: An Application in Parasite Research

    NASA Astrophysics Data System (ADS)

    Sahoo, Satya S.; Weatherly, D. Brent; Mutharaju, Raghava; Anantharam, Pramod; Sheth, Amit; Tarleton, Rick L.

    Provenance, from the French word "provenir", describes the lineage or history of a data entity. Provenance is critical information in scientific applications to verify experiment process, validate data quality and associate trust values with scientific results. Current industrial scale eScience projects require an end-to-end provenance management infrastructure. This infrastructure needs to be underpinned by formal semantics to enable analysis of large scale provenance information by software applications. Further, effective analysis of provenance information requires well-defined query mechanisms to support complex queries over large datasets. This paper introduces an ontology-driven provenance management infrastructure for biology experiment data, as part of the Semantic Problem Solving Environment (SPSE) for Trypanosoma cruzi (T.cruzi). This provenance infrastructure, called T.cruzi Provenance Management System (PMS), is underpinned by (a) a domain-specific provenance ontology called Parasite Experiment ontology, (b) specialized query operators for provenance analysis, and (c) a provenance query engine. The query engine uses a novel optimization technique based on materialized views called materialized provenance views (MPV) to scale with increasing data size and query complexity. This comprehensive ontology-driven provenance infrastructure not only allows effective tracking and management of ongoing experiments in the Tarleton Research Group at the Center for Tropical and Emerging Global Diseases (CTEGD), but also enables researchers to retrieve the complete provenance information of scientific results for publication in literature.

  12. The ground truth about metadata and community detection in networks.

    PubMed

    Peel, Leto; Larremore, Daniel B; Clauset, Aaron

    2017-05-01

    Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.

  13. Closed loop cavitation control - A step towards sonomechatronics.

    PubMed

    Saalbach, Kai-Alexander; Ohrdes, Hendrik; Twiefel, Jens

    2018-06-01

    In the field of sonochemistry, many processes are made possible by the generation of cavitation. This article addresses closed-loop control of ultrasound-assisted processes with the aim of controlling the intensity of cavitation-based sonochemical processes. This is the basis for a new research field which the authors call "sonomechatronics". In order to apply closed-loop control, a so-called self-sensing technique is applied, which uses the ultrasound transducer's electrical signals to gain information about cavitation activity. Experiments are conducted to determine whether this self-sensing technique is capable of determining the state and intensity of acoustic cavitation. A distinct frequency component in the transducer's current signal is found to be a good indicator for the onset and termination of transient cavitation. Measurements show that, depending on the boundary conditions, the onset and termination of transient cavitation occur at different thresholds, with the onset occurring at a higher value in most cases. This known hysteresis effect offers the additional possibility of achieving an energetic optimization by controlling cavitation generation. Using the cavitation indicator for the implementation of a double set point closed-loop control, the mean driving current was reduced by approximately 15% compared to the value needed to exceed the transient cavitation threshold. The results presented show great potential for the field of sonomechatronics. Nevertheless, further investigations are necessary in order to design application-specific sonomechatronic processes. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Counseling about turbuhaler technique: needs assessment and effective strategies for community pharmacists.

    PubMed

    Basheti, Iman A; Reddel, Helen K; Armour, Carol L; Bosnic-Anticevich, Sinthia Z

    2005-05-01

    Optimal effects of asthma medications are dependent on correct inhaler technique. In a telephone survey, 77/87 patients reported that their Turbuhaler technique had not been checked by a health care professional. In a subsequent pilot study, 26 patients were randomized to receive one of 3 Turbuhaler counseling techniques, administered in the community pharmacy. Turbuhaler technique was scored before and 2 weeks after counseling (optimal technique = score 9/9). At baseline, 0/26 patients had optimal technique. After 2 weeks, optimal technique was achieved by 0/7 patients receiving standard verbal counseling (A), 2/8 receiving verbal counseling augmented with emphasis on Turbuhaler position during priming (B), and 7/9 receiving augmented verbal counseling plus physical demonstration (C) (Fisher's exact test for A vs C, p = 0.006). Satisfactory technique (4 essential steps correct) also improved (A: 3/8 to 4/7; B: 2/9 to 5/8; and C: 1/9 to 9/9 patients) (A vs C, p = 0.1). Counseling in Turbuhaler use represents an important opportunity for community pharmacists to improve asthma management, but physical demonstration appears to be an important component to effective Turbuhaler training for educating patients toward optimal Turbuhaler technique.

  15. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.

  16. Designing of skull defect implants using C1 rational cubic Bezier and offset curves

    NASA Astrophysics Data System (ADS)

    Mohamed, Najihah; Majid, Ahmad Abd; Piah, Abd Rahni Mt; Rajion, Zainul Ahmad

    2015-05-01

    Some of the reasons to construct a skull implant include head trauma after an accident, an injury or an infection, tumor invasion, or cases where autogenous bone is not suitable for replacement after a decompressive craniectomy (DC). The main objective of our study is to develop a simple method to redesign missing parts of the skull. The procedure begins with segmentation, data approximation, and estimation of the outer wall by a C1 continuous curve. Its offset curve is used to generate the inner wall. A metaheuristic algorithm called harmony search (HS) is a derivative-free real-parameter optimization algorithm inspired by the musical improvisation process of searching for a perfect state of harmony. In this study, data approximation by a rational cubic Bézier function uses HS to optimize the positions of the middle points and the values of the weights. All the phases contribute significantly to making our proposed technique automated. Graphical examples of several postoperative skulls are displayed to show the effectiveness of our proposed method.
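
    A minimal harmony search loop, shown here on a toy objective rather than the Bézier curve-fitting error from the study, illustrates the memory-consideration, pitch-adjustment and random-selection steps; the parameter values are typical defaults, not the authors' settings.

    ```python
    import numpy as np

    def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
        """Minimize f over box bounds with a basic harmony search."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        dim = lo.size
        hm = rng.uniform(lo, hi, size=(hms, dim))           # harmony memory
        cost = np.apply_along_axis(f, 1, hm)
        for _ in range(iters):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                     # memory consideration
                    new[j] = hm[rng.integers(hms), j]
                    if rng.random() < par:                  # pitch adjustment
                        new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1.0, 1.0)
                else:                                       # random selection
                    new[j] = rng.uniform(lo[j], hi[j])
            new = np.clip(new, lo, hi)
            c = f(new)
            worst = int(np.argmax(cost))
            if c < cost[worst]:                             # replace the worst harmony
                hm[worst], cost[worst] = new, c
        best = int(np.argmin(cost))
        return hm[best], cost[best]

    # Toy 2-D objective standing in for the curve-fitting error of the implant design
    sphere = lambda x: float(np.sum(x**2))
    x_best, f_best = harmony_search(sphere, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
    ```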

  17. jFuzz: A Concolic Whitebox Fuzzer for Java

    NASA Technical Reports Server (NTRS)

    Jayaraman, Karthick; Harvison, David; Ganesh, Vijay; Kiezun, Adam

    2009-01-01

    We present jFuzz, an automatic testing tool for Java programs. jFuzz is a concolic whitebox fuzzer built on the NASA Java PathFinder, an explicit-state Java model checker and a framework for developing reliability and analysis tools for Java. Starting from a seed input, jFuzz automatically and systematically generates inputs that exercise new program paths. jFuzz uses a combination of concrete and symbolic execution, and constraint solving. Time spent on solving constraints can be significant. We implemented several well-known optimizations and name-independent caching, which aggressively normalizes the constraints to reduce the number of calls to the constraint solver. We present preliminary results showing the impact of these optimizations, and demonstrate the effectiveness of jFuzz in creating good test inputs. jFuzz is intended to be a research testbed for investigating new testing and analysis techniques based on concrete and symbolic execution. The source code of jFuzz is available as part of the NASA Java PathFinder.

  18. TReacLab: An object-oriented implementation of non-intrusive splitting methods to couple independent transport and geochemical software

    NASA Astrophysics Data System (ADS)

    Jara, Daniel; de Dreuzy, Jean-Raynald; Cochepin, Benoit

    2017-12-01

    Reactive transport modeling contributes to the understanding of geophysical and geochemical processes in subsurface environments. Operator splitting methods have been proposed as non-intrusive coupling techniques that optimize the use of existing chemistry and transport codes. In this spirit, we propose a coupler relying on external geochemical and transport codes, with appropriate operator segmentation that enables possible development of additional splitting methods. We provide an object-oriented implementation in TReacLab, developed in the MATLAB environment as free open source software with an accessible repository. TReacLab contains classical coupling methods, template interfaces, and calling functions for two classical transport and reaction codes (PHREEQC and COMSOL). It is tested on four classical benchmarks with homogeneous and heterogeneous reactions at equilibrium or kinetically controlled. We show that full decoupling down to the implementation level has a cost in terms of accuracy compared to more integrated and optimized codes. Use of non-intrusive implementations like TReacLab is still justified for coupling independent transport and chemical software at a minimal development effort, but should be systematically and carefully assessed.
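
    A toy sequential (non-iterative) operator-splitting loop conveys the coupling pattern: each time step calls a transport operator and then a chemistry operator. The two functions below stand in for external solvers and are illustrative only, not TReacLab's interfaces.

    ```python
    import numpy as np

    def transport_step(c, dt, u=1.0, dx=1.0):
        """Explicit upwind advection; stands in for a call to an external transport code."""
        cfl = u * dt / dx
        return c - cfl * (c - np.roll(c, 1))

    def chemistry_step(c, dt, k=0.1):
        """First-order decay; stands in for a call to an external geochemical code."""
        return c * np.exp(-k * dt)

    c = np.zeros(100)
    c[10:20] = 1.0                       # initial solute pulse
    dt, n_steps = 0.5, 100
    for _ in range(n_steps):
        c = transport_step(c, dt)        # operator 1: transport
        c = chemistry_step(c, dt)        # operator 2: reaction
    ```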

  19. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this great deal of information. In particular, the use of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even more complicated for decision makers to sift out the relevant information. In this study, multiple streamflow forecasts will be aggregated based on several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) will be applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, will be compared and the differences between the raw and optimally combined forecasts will be highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
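
    The pooling idea can be sketched with two made-up Gaussian member forecasts: a traditional linear pool averages the member CDFs, and the Beta-transformed linear pool (BLP) passes that average through a Beta CDF. The weights and Beta parameters below are illustrative; in the study they would be fitted to past forecasts.

    ```python
    import numpy as np
    from scipy.stats import norm, beta

    # Two hypothetical predictive distributions for tomorrow's streamflow [m^3/s]
    members = [norm(loc=120.0, scale=15.0), norm(loc=135.0, scale=25.0)]
    weights = np.array([0.6, 0.4])          # pooling weights (fitted in practice)
    a, b = 1.3, 0.9                         # Beta-transform parameters (illustrative)

    x = np.linspace(50.0, 250.0, 500)

    # Traditional linear pool: weighted average of the member CDFs
    F_lp = sum(w * m.cdf(x) for w, m in zip(weights, members))

    # Beta-transformed linear pool: recalibrate the pooled CDF through a Beta CDF
    F_blp = beta.cdf(F_lp, a, b)

    # A quantile forecast from the combined distribution, e.g. the 90% quantile
    q90 = x[np.searchsorted(F_blp, 0.9)]
    ```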

  20. Adaptive building skin structures

    NASA Astrophysics Data System (ADS)

    Del Grosso, A. E.; Basso, P.

    2010-12-01

    The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face; the search for low-energy-consumption or even energy-harvesting green buildings is one example. This paper first presents a review of the above problems and technologies, which shows how the solution to these problems requires a multidisciplinary approach involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which guides the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.

  1. Considering social and environmental concerns as reservoir operating objectives

    NASA Astrophysics Data System (ADS)

    Tilmant, A.; Georis, B.; Doulliez, P.

    2003-04-01

    Sustainability principles are now widely recognized as key criteria for water resource development schemes, such as hydroelectric and multipurpose reservoirs. Development decisions no longer rely solely on economic grounds, but also consider environmental and social concerns through so-called environmental and social impact assessments. The objective of this paper is to show that environmental and social concerns can also be addressed in the management (operation) of existing or projected reservoir schemes. By either adequately exploiting the results of environmental and social impact assessments, or by carrying out surveys of water users, experts and managers, efficient (Pareto-optimal) reservoir operating rules can be derived using flexible mathematical programming techniques. By reformulating the problem as a multistage flexible constraint satisfaction problem, incommensurable and subjective operating objectives can contribute, along with classical economic objectives, to the determination of optimal release decisions. Employed in a simulation mode, the results can be used to assess the long-term impacts of various operating rules on the social well-being of affected populations as well as on the integrity of the environment. The methodology is illustrated with a reservoir reallocation problem in Chile.

  2. An Evolutionary Algorithm for Fast Intensity Based Image Matching Between Optical and SAR Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias

    2018-04-01

    This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions such as hybridization (e.g., local search) are used to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
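
    The core idea of evolutionary intensity-based matching can be sketched for the simplest case of a pure translation search maximizing normalized cross-correlation; this toy on synthetic images is not the authors' hybrid algorithm and ignores the radiometric differences between SAR and optical data.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation used as the similarity measure."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def similarity(ref, mov, shift, size=64):
        """Similarity of a shifted window of `mov` against the central window of `ref`."""
        cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
        r = ref[cy - size:cy + size, cx - size:cx + size]
        m = mov[cy - size + shift[0]:cy + size + shift[0],
                cx - size + shift[1]:cx + size + shift[1]]
        return ncc(r, m)

    def evolve(ref, mov, pop=20, gens=30, sigma=8.0, seed=0):
        """(mu + lambda)-style evolutionary search over integer translations."""
        rng = np.random.default_rng(seed)
        parents = rng.integers(-20, 21, size=(pop, 2))
        for _ in range(gens):
            children = parents + rng.normal(0.0, sigma, size=parents.shape).astype(int)
            children = np.clip(children, -30, 30)
            cand = np.vstack([parents, children])
            fitness = np.array([similarity(ref, mov, tuple(s)) for s in cand])
            parents = cand[np.argsort(fitness)[-pop:]]      # keep the fittest individuals
            sigma *= 0.9                                    # shrink the mutation radius
        return parents[-1]

    # Synthetic test: `mov` is `ref` shifted by (7, -4) plus noise
    rng = np.random.default_rng(1)
    ref = rng.random((256, 256))
    mov = np.roll(ref, shift=(7, -4), axis=(0, 1)) + 0.05 * rng.random((256, 256))
    best_shift = evolve(ref, mov)      # expected to approach (7, -4)
    ```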

  3. Dynamic Flow Management Problems in Air Transportation

    NASA Technical Reports Server (NTRS)

    Patterson, Sarah Stock

    1997-01-01

    In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address its complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large-scale linear programming problems. Thus, the computation times are reasonably small for large-scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer programming formulation, the solution of which generates feasible and near-optimal routes for individual flights. The algorithm, termed the Lagrangian Generation Algorithm, is used to solve practical problems in the southwestern portion of the United States in which the solutions are within 1% of the corresponding lower bounds.

  4. Water-Energy Nexus: Examining The Crucial Connection Through Simulation Based Optimization

    NASA Astrophysics Data System (ADS)

    Erfani, T.; Tan, C. C.

    2014-12-01

    With growing urbanisation and the emergence of climate change, the world is facing a more water-constrained future. This phenomenon will have direct impacts on the resilience and performance of the energy sector, as water plays a key role in electricity generation processes. As energy is becoming a thirstier resource and the pressure on finite water sources is increasing, modelling and analysing this closely interlinked and interdependent loop, called the 'water-energy nexus', is becoming an important cross-disciplinary challenge. Conflict often arises in transboundary rivers where several countries share the same source of water to be used in productive sectors for economic growth. From the perspective of the upstream users, it would be ideal to store the water for hydropower generation and protect the city against drought, whereas the downstream users need the supply of water for growth. This research uses a case study of the transboundary Blue Nile River basin in East Africa, where the Ethiopian government has decided to invest in building a new dam to store water and generate hydropower. This has led to opposition by downstream users, who believe that the introduction of the dam would reduce the amount of water available downstream. This calls for compromise management in which the reservoir operating rules are derived considering the interdependencies between the resources available and the requirements proposed by all users. For this, we link a multiobjective optimization algorithm to a water-energy use simulation model to achieve effective management of the transboundary reservoir operating strategies. The objective functions aim to attain social and economic welfare by minimizing the deficit of water supply and maximizing hydropower generation. The study helps to improve policies by understanding the value of water and energy in their alternative uses. The results show how different optimal reservoir release rules generate different trade-off solutions inherently involved in upstream and downstream users' requirements and decisions. This study uses simulation-based optimization techniques to manage for food, water and energy security, which supports sustainability and long-term political stability.

  5. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, including real-life problems where conventional techniques cannot be applied. The Grey Wolf Optimizer is one such technique that has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on five common scalable problems from the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except for Rosenbrock, which is a unimodal function.
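
    For reference, a bare-bones Grey Wolf Optimizer run on the sphere benchmark looks like the sketch below; the population size, iteration count and update scheme are typical textbook choices and are not taken from this paper.

    ```python
    import numpy as np

    def gwo(f, dim, lb, ub, n_wolves=30, iters=500, seed=0):
        """Basic Grey Wolf Optimizer minimizing f over the box [lb, ub]^dim."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, size=(n_wolves, dim))
        fit = np.apply_along_axis(f, 1, X)
        for t in range(iters):
            leaders = X[np.argsort(fit)[:3]]                # alpha, beta, delta wolves
            a = 2.0 - 2.0 * t / iters                       # decreases linearly from 2 to 0
            for i in range(n_wolves):
                moves = np.zeros(dim)
                for leader in leaders:
                    r1, r2 = rng.random(dim), rng.random(dim)
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    D = np.abs(C * leader - X[i])           # distance to the leader
                    moves += leader - A * D                 # encircling step
                X[i] = np.clip(moves / 3.0, lb, ub)         # average of the three moves
                fit[i] = f(X[i])
        best = int(np.argmin(fit))
        return X[best], fit[best]

    # Sphere function in 50 dimensions, one of the scalable benchmarks listed above
    sphere = lambda x: float(np.sum(x**2))
    x_best, f_best = gwo(sphere, dim=50, lb=-100.0, ub=100.0)
    ```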

  6. Finite Set Control Transcription for Optimal Control Applications

    DTIC Science & Technology

    2009-05-01

    [DTIC search-result snippet; only fragments of the report text are available.] The Finite Set Control Transcription (FSCT) method relies on a Nonlinear Programming (NLP) algorithm, such as SNOPT (hereafter called the optimizer), and related work uses artificial neural networks, genetic algorithms, or combinations thereof for analysis.

  7. What's in a Grammar? Modeling Dominance and Optimization in Contact

    ERIC Educational Resources Information Center

    Sharma, Devyani

    2013-01-01

    Muysken's article is a timely call for us to seek deeper regularities in the bewildering diversity of language contact outcomes. His model provocatively suggests that most such outcomes can be subsumed under four speaker optimization strategies. I consider two aspects of the proposal here: the formalization in Optimality Theory (OT) and the…

  8. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
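
    A hedged sketch of the same idea, fitting all Prony constants (including the exponential time constants) simultaneously, can be written with a general-purpose nonlinear least-squares routine in place of the PRONY/VMA tooling named in the report; the relaxation data below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def prony(t, params, n_terms=3):
        """Prony series E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
        e_inf = params[0]
        e_i = params[1:1 + n_terms]
        tau_i = params[1 + n_terms:]
        return e_inf + np.sum(e_i[:, None] * np.exp(-t[None, :] / tau_i[:, None]), axis=0)

    # Synthetic relaxation-modulus data standing in for test measurements
    t = np.logspace(-2, 3, 40)
    true = np.array([1.0, 5.0, 3.0, 2.0, 0.1, 10.0, 300.0])   # [E_inf, E1..E3, tau1..tau3]
    data = prony(t, true) * (1.0 + 0.01 * np.random.randn(t.size))

    # Fit all seven constants at once; positivity is enforced through bounds
    x0 = np.array([0.5, 1.0, 1.0, 1.0, 1.0, 50.0, 500.0])
    fit = least_squares(lambda p: prony(t, p) - data, x0, bounds=(1e-6, np.inf))
    e_inf, e1, e2, e3, tau1, tau2, tau3 = fit.x
    ```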

  9. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis techniques are then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  10. Optimization of freeform surfaces using intelligent deformation techniques for LED applications

    NASA Astrophysics Data System (ADS)

    Isaac, Annie Shalom; Neumann, Cornelius

    2018-04-01

    For many years, optical designers have had great interest in designing efficient optimization algorithms that bring significant improvement to their initial designs. However, the optimization is limited by the large number of parameters present in non-uniform rational B-spline (NURBS) surfaces. This limitation was overcome by an indirect technique known as optimization using freeform deformation (FFD). In this approach, the optical surface is placed inside a cubical grid. The vertices of this grid are modified, which deforms the underlying optical surface during the optimization. One of the challenges in this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no direct relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created by them are not always feasible to manufacture, which is the same problem faced by any optimization technique that creates freeform surfaces. Therefore, this research addresses these two important issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated using two different illumination examples: a street lighting lens and a stop lamp for automobiles.
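
    A generic two-dimensional Bernstein free-form deformation conveys the mechanism: points embedded in the lattice move smoothly when a control vertex moves, and those vertex positions become the design variables. The lattice size and displacement below are arbitrary and are not the parameterization used in the paper.

    ```python
    import numpy as np
    from math import comb

    def bernstein(n, i, u):
        """Bernstein basis polynomial B_{n,i}(u)."""
        return comb(n, i) * u**i * (1.0 - u)**(n - i)

    def ffd_2d(points, lattice):
        """Deform 2-D points (in the unit square) with a Bernstein control lattice."""
        n_u, n_v = lattice.shape[0] - 1, lattice.shape[1] - 1
        out = np.zeros_like(points)
        for k, (u, v) in enumerate(points):
            for i in range(n_u + 1):
                for j in range(n_v + 1):
                    out[k] += bernstein(n_u, i, u) * bernstein(n_v, j, v) * lattice[i, j]
        return out

    # 3x3 control lattice initially coincident with the unit square (identity mapping)
    grid = np.linspace(0.0, 1.0, 3)
    lattice = np.stack(np.meshgrid(grid, grid, indexing="ij"), axis=-1).astype(float)

    # Moving one control vertex deforms every embedded point smoothly
    lattice[1, 1] += np.array([0.15, -0.10])
    pts = np.random.rand(50, 2)
    deformed = ffd_2d(pts, lattice)
    ```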

  11. Application of DuPont photopolymer films to automotive holographic display

    NASA Astrophysics Data System (ADS)

    Nakazawa, Norihito; Ono, Motoshi; Takeuchi, Shoichi; Sakurai, Hiromi; Hirano, Masahiro

    1998-03-01

    Automotive holographic head-up display (HUD) systems employing DuPont holographic photopolymer films are presented. Holographic materials for automotive applications are exposed to severe environmental conditions and are required to deliver high performance. This paper describes the improvement of DuPont photopolymer films for automotive use and critical technical issues such as optical design, external color, and stray light. The holographic HUD combiner embedded in the windshield of an automobile has a peculiar problem called external color: diffracted light from the holographic combiner gives it a visually stimulating external color tone. We have introduced RGB three-color recording and color simulation in order to improve the external color. A moderate external color tone was realized by optimization of the wavelengths and diffraction efficiencies of the combiner hologram. Stray light called flare arises from reflection by the glass surface of the windshield. We have developed two techniques to avoid the flare. The first is a diffuser-type trap beam guard hologram which reduces the intensity of the flare. The second is the optimization of the hologram design so that the incident direction of the flare is below the horizon line. As an example of an automotive display, a stand-alone holographic HUD system attached to the dashboard of an automobile is demonstrated, which provides useful driving information such as route guidance. The display has a very simple optical system that consists of only a holographic combiner and a vacuum fluorescent display. Its thin body is only 35 mm high and does not obstruct the driver's view. The display gives high contrast and a wide image.

  12. Individual welfare maximization in electricity markets including consumer and full transmission system modeling

    NASA Astrophysics Data System (ADS)

    Weber, James Daniel

    1999-11-01

    This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is demonstrated to be useful in visualizing bus-based and transmission line-based quantities.

  13. Modelling space-based integral-field spectrographs and their application to Type Ia supernova cosmology

    NASA Astrophysics Data System (ADS)

    Shukla, Hemant; Bonissent, Alain

    2017-04-01

    We present the parameterized simulation of an integral-field unit (IFU) slicer spectrograph and its applications in spectroscopic studies, namely, for probing dark energy with Type Ia supernovae. The simulation suite is called the fast-slicer IFU simulator (FISim). The data flow of FISim realistically models the optics of the IFU along with the propagation effects, including cosmological, zodiacal, instrumentation and detector effects. FISim simulates the spectrum extraction by computing the error matrix on the extracted spectrum. The applications for Type Ia supernova spectroscopy are used to establish the efficacy of the simulator in exploring the wider parametric space, in order to optimize the science and mission requirements. The input spectral models utilize observables such as the optical depth and velocity of the Si II absorption feature in the supernova spectrum as the measured parameters for various studies. Using FISim, we introduce a mechanism for preserving the complete state of a system, called the ∂p/∂f matrix, which allows for compression, reconstruction and spectrum extraction; we introduce a novel and efficient method for spectrum extraction, called super-optimal spectrum extraction; and we conduct various studies such as the optimal point spread function, optimal resolution, parameter estimation, etc. We demonstrate that for space-based telescopes, the optimal resolution lies in the region near R ≈ 117 for read noise of 1 e- and 7 e-, using a 400 km s^-1 error threshold on the Si II velocity.

  14. Experimental study on behaviors of dielectric elastomer based on acrylonitrile butadiene rubber

    NASA Astrophysics Data System (ADS)

    An, Kuangjun; Chuc, Nguyen Huu; Kwon, Hyeok Yong; Phuc, Vuong Hong; Koo, Jachoon; Lee, Youngkwan; Nam, Jaedo; Choi, Hyouk Ryeol

    2010-04-01

    Previously, the dielectric elastomer based on Acrylonitrile Butadiene Rubber (NBR), called synthetic elastomer, has been reported by our group. It has the advantage that its characteristics can be modified according to the performance requirements, and thus it is applicable to a wide variety of applications. In this paper, we address the effects of additives and vulcanization conditions on the overall performance of the synthetic elastomer. In the present work, factors that affect the performance are identified, e.g., additives such as dioctyl phthalate (DOP) and barium titanate (BaTiO3), and vulcanization conditions such as the dicumyl peroxide (DCP) content and cross-linking times. It is also described how the performance can be optimized using the design of experiments (DOE) technique, and the experimental results are analyzed by analysis of variance (ANOVA).

  15. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

    This paper presents an efficient design of digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual response and the ideal response in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The excellence of the proposed method is evaluated using several important attributes of a filter, and the comparative study evidences its effectiveness for FIR filter design.
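
    The underlying objective, the mean square error between the actual and ideal frequency responses, can be illustrated with a small linear-phase lowpass design. The sketch below uses a generic derivative-free optimizer instead of the cuckoo-search variants and fractional derivative constraints studied in the paper, and the filter length and cutoff are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Length-21 linear-phase lowpass FIR; ideal brick-wall response with cutoff 0.3*pi
    n_taps, n_freq, wc = 21, 256, 0.3 * np.pi
    w = np.linspace(0.0, np.pi, n_freq)
    ideal = (w <= wc).astype(float)
    half = (n_taps + 1) // 2                 # optimize only half the taps (even symmetry)

    def response(h_half):
        h = np.concatenate([h_half, h_half[-2::-1]])     # symmetric impulse response
        return np.abs(np.exp(-1j * np.outer(w, np.arange(n_taps))) @ h)

    def mse(h_half):
        return float(np.mean((response(h_half) - ideal) ** 2))

    h0 = np.full(half, 1.0 / n_taps)
    res = minimize(mse, h0, method="Nelder-Mead", options={"maxiter": 20000, "xatol": 1e-8})
    h_opt = np.concatenate([res.x, res.x[-2::-1]])
    ```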

  16. 4Pi microscopy of the nuclear pore complex.

    PubMed

    Kahms, Martin; Hüve, Jana; Peters, Reiner

    2015-01-01

    4Pi microscopy is a far-field fluorescence microscopy technique in which the wave fronts of two opposing illuminating beams are adjusted to constructively interfere in a common focus. This yields a diffraction pattern in the direction of the optical axis, which essentially consists of a main focal spot accompanied by two smaller side lobes. Under optimal conditions, the main peak of this so-called point spread function has a full width at half maximum of 100 nm in the direction of the optical axis, and thus is 6-7-fold smaller than that of a confocal microscope. In this chapter, we describe the basic features of 4Pi microscopy and its application to cell biology using the example of the nuclear pore complex, a large protein assembly spanning the nuclear envelope.

  17. Modelling, simulation and computer-aided design (CAD) of gyrotrons for novel applications in the high-power terahertz science and technologies

    NASA Astrophysics Data System (ADS)

    Sabchevski, S.; Idehara, T.; Damyanova, M.; Zhelyazkov, I.; Balabanova, E.; Vasileva, E.

    2018-03-01

    Gyrotrons are the most powerful sources of CW coherent radiation in the sub-THz and THz frequency bands. In recent years, they have demonstrated a remarkable potential for bridging the so-called THz-gap in the electromagnetic spectrum and opened the road to many novel applications of the terahertz waves. Among them are various advanced spectroscopic techniques (e.g., ESR and DNP-NMR), plasma physics and fusion research, materials processing and characterization, imaging and inspection, new medical technologies and biological studies. In this paper, we review briefly the current status of the research in this broad field and present our problem-oriented software packages developed recently for numerical analysis, computer-aided design (CAD) and optimization of gyrotrons.

  18. Enabling technologies and green processes in cyclodextrin chemistry

    PubMed Central

    Caporaso, Marina; Jicsinszky, Laszlo; Martina, Katia

    2016-01-01

    Summary The design of efficient synthetic green strategies for the selective modification of cyclodextrins (CDs) is still a challenging task. Outstanding results have been achieved in recent years by means of so-called enabling technologies, such as microwaves, ultrasound and ball mills, that have become irreplaceable tools in the synthesis of CD derivatives. Several examples of sonochemical selective modification of native α-, β- and γ-CDs have been reported including heterogeneous phase Pd- and Cu-catalysed hydrogenations and couplings. Microwave irradiation has emerged as the technique of choice for the production of highly substituted CD derivatives, CD grafted materials and polymers. Mechanochemical methods have successfully furnished greener, solvent-free syntheses and efficient complexation, while flow microreactors may well improve the repeatability and optimization of critical synthetic protocols. PMID:26977187

  19. Learning moment-based fast local binary descriptor

    NASA Astrophysics Data System (ADS)

    Bellarbi, Abdelkader; Zenati, Nadia; Otmane, Samir; Belghit, Hayet

    2017-03-01

    Recently, binary descriptors have attracted significant attention due to their speed and low memory consumption; however, using intensity differences to calculate the binary descriptive vector is not efficient enough. We propose an approach to binary description called POLAR_MOBIL, in which we perform binary tests between geometrical and statistical information using moments in the patch instead of the classical intensity binary test. In addition, we introduce a learning technique used to select an optimized set of binary tests with low correlation and high variance. This approach offers high distinctiveness against affine transformations and appearance changes. An extensive evaluation on well-known benchmark datasets reveals the robustness and the effectiveness of the proposed descriptor, as well as its good performance in terms of low computation complexity when compared with state-of-the-art real-time local descriptors.

  20. Interference Mitigation Effects on Synthetic Aperture Radar Coherent Data Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musgrove, Cameron

    2014-05-01

    For synthetic aperture radar image products, interference can degrade the quality of the images, while techniques to mitigate the interference also reduce image quality. Usually the radar system designer will try to balance the amount of mitigation against the amount of interference to optimize the image quality. This may work well for many situations, but coherent data products derived from the image products are more sensitive than the human eye to distortions caused by interference and by the mitigation of interference. This dissertation examines the effect that interference and the mitigation of interference have upon coherent data products. An improvement to the standard notch mitigation is introduced, called the equalization notch. Other methods are suggested to mitigate interference while improving the quality of coherent data products over existing methods.

  1. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic

    PubMed Central

    YOKOYAMA, Jun’ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well for the highly non-Gaussian case. PMID:25504231
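
    For readers unfamiliar with the matched filter reviewed above, the following minimal sketch computes the standard matched-filter statistic for a known template in white Gaussian noise; the template, noise level and injected amplitude are invented, and the sketch does not implement the Edgeworth-expansion or Gaussian-mapping methods introduced in the paper.

      import numpy as np

      # Minimal sketch: for data d = A*h + n with a known unit-norm template h
      # and white Gaussian noise of standard deviation sigma, the optimal
      # (Neyman-Pearson) statistic is the noise-weighted correlation <d, h>.
      rng = np.random.default_rng(1)
      n_samp, sigma = 4096, 1.0
      t = np.arange(n_samp)
      template = np.sin(2 * np.pi * 0.01 * t) * np.exp(-((t - 2048) / 400.0) ** 2)
      template /= np.sqrt(np.sum(template ** 2))        # unit-norm template

      data = 8.0 * template + rng.normal(0.0, sigma, n_samp)   # injected signal plus noise

      snr = np.dot(data, template) / sigma              # matched-filter SNR
      print(f"matched-filter SNR: {snr:.2f}")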

  2. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

    In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. A comparison between classical filters and optimal filters for automotive sensors is presented, and the current state of the art of applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is illustrated through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be made overnight, because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  3. A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Hezarkhani, Ardeshir

    2012-05-01

    Grade estimation is an important and money/time-consuming stage in a mine project, and it is considered a challenge for geologists and mining engineers due to the structural complexities of mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. For example, one major problem in FL is the difficulty of constructing the membership functions (MFs); other problems, such as the choice of architecture and local minima, arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, which is based on ANN and FL, is called the "Coactive Neuro-Fuzzy Inference System" (CANFIS), which combines the two approaches. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA), a well-known technique for solving complex optimization problems, is also employed to optimize the network parameters, including the learning rate, the momentum of the network and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with the new method (CANFIS-GA) is also carried out through a case study in the Sungun copper deposit, located in East Azerbaijan, Iran. The results show that CANFIS-GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation, and it is therefore suggested for grade estimation in similar problems.

  4. Magnetic headspace adsorptive extraction of chlorobenzenes prior to thermal desorption gas chromatography-mass spectrometry.

    PubMed

    Vidal, Lorena; Ahmadi, Mazaher; Fernández, Elena; Madrakian, Tayyebeh; Canals, Antonio

    2017-06-08

    This study presents a new, user-friendly, cost-effective and portable headspace solid-phase extraction technique based on graphene oxide decorated with iron oxide magnetic nanoparticles as sorbent, located on one end of a small neodymium magnet. Hence, the new headspace solid-phase extraction technique has been called Magnetic Headspace Adsorptive Extraction (Mag-HSAE). In order to assess Mag-HSAE technique applicability to model analytes, some chlorobenzenes were extracted from water samples prior to gas chromatography-mass spectrometry determination. A multivariate approach was employed to optimize the experimental parameters affecting Mag-HSAE. The method was evaluated under optimized extraction conditions (i.e., sample volume, 20 mL; extraction time, 30 min; sorbent amount, 10 mg; stirring speed, 1500 rpm, and ionic strength, non-significant), obtaining a linear response from 0.5 to 100 ng L⁻¹ for 1,3-DCB, 1,4-DCB, 1,2-DCB, 1,3,5-TCB, 1,2,4-TCB and 1,2,3-TCB; from 0.5 to 75 ng L⁻¹ for 1,2,4,5-TeCB and PeCB; and from 1 to 75 ng L⁻¹ for 1,2,3,4-TeCB. The repeatability of the proposed method was evaluated at 10 ng L⁻¹ and 50 ng L⁻¹ spiking levels, and coefficients of variation ranged between 1.5 and 9.5% (n = 5). Limits of detection values were found between 93 and 301 pg L⁻¹. Finally, tap, mineral and effluent water were selected as real water samples to assess method applicability. Relative recoveries varied between 86 and 110% showing negligible matrix effects. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation

    PubMed Central

    Tahmasebi, Pejman; Hezarkhani, Ardeshir

    2012-01-01

    Grade estimation is an important and money/time-consuming stage in a mine project, and it is considered a challenge for geologists and mining engineers due to the structural complexities of mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. For example, one major problem in FL is the difficulty of constructing the membership functions (MFs); other problems, such as the choice of architecture and local minima, arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, which is based on ANN and FL, is called the “Coactive Neuro-Fuzzy Inference System” (CANFIS), which combines the two approaches. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA), a well-known technique for solving complex optimization problems, is also employed to optimize the network parameters, including the learning rate, the momentum of the network and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with the new method (CANFIS–GA) is also carried out through a case study in the Sungun copper deposit, located in East Azerbaijan, Iran. The results show that CANFIS–GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation, and it is therefore suggested for grade estimation in similar problems. PMID:25540468

  6. Clustering methods for the optimization of atomic cluster structure

    NASA Astrophysics Data System (ADS)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired from the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge amount of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
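
    The core idea of saving local searches can be sketched generically as follows: launch an expensive local search only from sampled points that have no better sample within a critical distance. The toy example below applies this MLSL-style rule to a 2-D test function; it is not the reduced-feature-space clustering method proposed in the paper.

      import numpy as np
      from scipy.optimize import minimize

      # Toy illustration: sample many start points, but start a local search
      # only from points that do not have a better sampled point nearby.
      def rastrigin(x):
          return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

      rng = np.random.default_rng(0)
      samples = rng.uniform(-5.12, 5.12, (300, 2))
      values = np.array([rastrigin(x) for x in samples])
      r_crit = 1.0                                   # assumed critical radius

      best, n_local = None, 0
      for x, fx in zip(samples, values):
          d = np.linalg.norm(samples - x, axis=1)
          # skip the local search if a better sample lies within r_crit
          if np.any((d < r_crit) & (values < fx)):
              continue
          n_local += 1
          res = minimize(rastrigin, x, method="L-BFGS-B")
          if best is None or res.fun < best.fun:
              best = res

      print(f"local searches started: {n_local} / {len(samples)}")
      print(f"best minimum found: f = {best.fun:.4f} at x = {best.x}")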

  7. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1987-01-01

    Optimization Techniques Applied to Passive Measures for In-Orbit Spacecraft Survivability is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impacts at the lowest weight and cost.

  8. Fitting Prony Series To Data On Viscoelastic Materials

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1995-01-01

    An improved method of fitting Prony series to data on viscoelastic materials involves the use of least-squares optimization techniques and yields closer correlation with the data than the traditional method. It makes no assumptions regarding the γ′ᵢ terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially constrained optimization techniques.
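
    A Prony series fit of this kind can be illustrated with a hedged least-squares sketch, G(t) = G_inf + sum_i g_i exp(-t/tau_i), where the synthetic data, the number of terms and the bounds are assumed; the NASA method additionally treats the fit as a partially constrained design-optimization problem.

      import numpy as np
      from scipy.optimize import least_squares

      # Sketch of fitting a Prony series to relaxation data by least squares:
      # G(t) = G_inf + sum_i g_i * exp(-t / tau_i).
      def prony(params, t, n_terms):
          g_inf = params[0]
          g = params[1:1 + n_terms]
          tau = params[1 + n_terms:]
          return g_inf + np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

      # synthetic "measured" relaxation data (invented for illustration)
      t = np.logspace(-2, 3, 60)
      true = 1.0 + 2.0 * np.exp(-t / 0.5) + 1.5 * np.exp(-t / 50.0)
      data = true * (1 + 0.01 * np.random.default_rng(0).normal(size=t.size))

      n_terms = 2
      x0 = np.concatenate(([data[-1]], np.ones(n_terms), np.logspace(-1, 2, n_terms)))
      res = least_squares(lambda p: prony(p, t, n_terms) - data, x0,
                          bounds=(1e-8, np.inf))   # keep moduli and relaxation times positive
      print("fitted [G_inf, g_i..., tau_i...]:", np.round(res.x, 3))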

  9. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability in an underwater network. Robust point-to-point communication in underwater networks has become increasingly critical in several military and civilian applications related to underwater communications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, a simplified underwater channel model is considered, in which the time scale distortion on each path is assumed to be the same (a single-scale channel model, in contrast to a more general multi-scale model). A novel technique called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time scale distortion, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi-scale model for the underwater channel and assume that single-scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused during single-scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of sensor node cooperation among rational nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel and investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, showing that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying the problem of receiver cooperation for a broadcast channel, we propose a game theoretic model for the broadcast channel, derive a game theoretic duality between the multiple access and the broadcast channel, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  10. Muscle optimization techniques impact the magnitude of calculated hip joint contact forces.

    PubMed

    Wesseling, Mariska; Derikx, Loes C; de Groote, Friedl; Bartels, Ward; Meyer, Christophe; Verdonschot, Nico; Jonkers, Ilse

    2015-03-01

    In musculoskeletal modelling, several optimization techniques are used to calculate muscle forces, which strongly influence resultant hip contact forces (HCF). The goal of this study was to calculate muscle forces using four different optimization techniques, i.e., two different static optimization techniques, computed muscle control (CMC) and the physiological inverse approach (PIA). We investigated their subsequent effects on HCFs during gait and sit to stand and found that at the first peak in gait at 15-20% of the gait cycle, CMC calculated the highest HCFs (median 3.9 times peak GRF (pGRF)). When comparing calculated HCFs to experimental HCFs reported in literature, the former were up to 238% larger. Both static optimization techniques produced lower HCFs (median 3.0 and 3.1 pGRF), while PIA included muscle dynamics without an excessive increase in HCF (median 3.2 pGRF). The increased HCFs in CMC were potentially caused by higher muscle forces resulting from co-contraction of agonists and antagonists around the hip. Alternatively, these higher HCFs may be caused by the slightly poorer tracking of the net joint moment by the muscle moments calculated by CMC. We conclude that the use of different optimization techniques affects calculated HCFs, and static optimization approached experimental values best. © 2014 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
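
    For context, the static-optimization formulation referred to above is typically posed as minimizing the sum of squared muscle activations subject to the muscles reproducing the net joint moment; the sketch below uses invented moment arms, maximal forces and a target moment, not the musculoskeletal models of the study.

      import numpy as np
      from scipy.optimize import minimize

      # Classic static-optimization sketch: minimize sum((F_i / F_max_i)^2)
      # subject to the muscle moments reproducing the net joint moment.
      r = np.array([0.05, 0.03, 0.02])             # moment arms [m] (assumed)
      f_max = np.array([3000.0, 1500.0, 1000.0])   # maximal isometric forces [N] (assumed)
      m_target = 80.0                              # net joint moment to reproduce [N m]

      cost = lambda f: np.sum((f / f_max) ** 2)
      constraints = {"type": "eq", "fun": lambda f: r @ f - m_target}
      bounds = [(0.0, fm) for fm in f_max]

      res = minimize(cost, x0=np.full(3, 100.0), bounds=bounds,
                     constraints=constraints, method="SLSQP")
      print("muscle forces [N]:", np.round(res.x, 1))
      print("moment check [N m]:", round(float(r @ res.x), 2))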

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kato, Kentaro

    An optimal quantum measurement is considered for the so-called quasi-Bell states under the quantum minimax criterion. It is shown that the minimax-optimal POVM for the quasi-Bell states is given by its square-root measurement and is applicable to the teleportation of a superposition of two coherent states.

  12. Convergent Evolution of Mechanically Optimal Locomotion in Aquatic Invertebrates and Vertebrates

    PubMed Central

    Bale, Rahul; Neveln, Izaak D.; Bhalla, Amneet Pal Singh

    2015-01-01

    Examples of animals evolving similar traits despite the absence of that trait in the last common ancestor, such as the wing and camera-type lens eye in vertebrates and invertebrates, are called cases of convergent evolution. Instances of convergent evolution of locomotory patterns that quantitatively agree with the mechanically optimal solution are very rare. Here, we show that, with respect to a very diverse group of aquatic animals, a mechanically optimal method of swimming with elongated fins has evolved independently at least eight times in both vertebrate and invertebrate swimmers across three different phyla. Specifically, if we take the length of an undulation along an animal’s fin during swimming and divide it by the mean amplitude of undulations along the fin length, the result is consistently around twenty. We call this value the optimal specific wavelength (OSW). We show that the OSW maximizes the force generated by the body, which also maximizes swimming speed. We hypothesize a mechanical basis for this optimality and suggest reasons for its repeated emergence through evolution. PMID:25919026
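
    Since the optimal specific wavelength is just the undulation wavelength divided by the mean undulation amplitude, the arithmetic can be shown in a few lines; the kinematic numbers below are invented purely for illustration.

      import numpy as np

      # Specific wavelength = undulation wavelength / mean undulation amplitude;
      # values near 20 were reported as mechanically optimal.
      wavelength = 0.12                                          # one undulation along the fin [m]
      amplitudes = np.array([0.0052, 0.0061, 0.0064, 0.0058])    # sampled amplitudes [m]

      specific_wavelength = wavelength / amplitudes.mean()
      print(f"specific wavelength: {specific_wavelength:.1f}")   # close to the reported optimum of ~20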

  13. Fog computing job scheduling optimization based on bees swarm

    NASA Astrophysics Data System (ADS)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.

  14. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  15. Getting the Best out of Excel

    ERIC Educational Resources Information Center

    Heys, Chris

    2008-01-01

    Excel, Microsoft's spreadsheet program, offers several tools which have proven useful in solving some optimization problems that arise in operations research. We will look at two such tools, the Excel modules called Solver and Goal Seek--this after deriving an equation, called the "cash accumulation equation", to be used in conjunction with them.

  16. Expediting Scientific Data Analysis with Reorganization of Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Surendra; Wu, Kesheng

    2013-08-19

    Data producers typically optimize the layout of data files to minimize write time. In most cases, data analysis tasks read these files with access patterns different from the write patterns, causing poor read performance. In this paper, we introduce Scientific Data Services (SDS), a framework for bridging the performance gap between writing and reading scientific data. SDS reorganizes data to match the read patterns of analysis tasks and enables transparent data reads from the reorganized data. We implemented an HDF5 Virtual Object Layer (VOL) plugin to redirect HDF5 dataset read calls to the reorganized data. To demonstrate the effectiveness of SDS, we applied two parallel data organization techniques: a sort-based organization on plasma physics data and a transpose-based organization on mass spectrometry imaging data. We also extended the HDF5 data access API to allow selection of data based on their values through a query interface, called SDS Query. We evaluated the execution time in accessing various subsets of data through the existing HDF5 read API and SDS Query. We showed that reading the reorganized data using SDS is up to 55X faster than reading the original data.

  17. Multi-rendezvous low-thrust trajectory optimization using costate transforming and homotopic approach

    NASA Astrophysics Data System (ADS)

    Chen, Shiyu; Li, Haiyang; Baoyin, Hexi

    2018-06-01

    This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are one main challenge in optimizing multi-leg transfers via shooting algorithms. This difficulty is reduced by first optimizing each trajectory leg individually; the results may then be utilized as an initial guess in the simultaneous optimization of multiple trajectory legs. In this paper, the limitations of similar techniques in previous research are overcome, and a homotopic approach is employed to improve the convergence efficiency of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.

  18. A High-Throughput Real-Time Imaging Technique To Quantify NETosis and Distinguish Mechanisms of Cell Death in Human Neutrophils.

    PubMed

    Gupta, Sarthak; Chan, Diana W; Zaal, Kristien J; Kaplan, Mariana J

    2018-01-15

    Neutrophils play a key role in host defenses and have recently been implicated in the pathogenesis of autoimmune diseases by various mechanisms, including formation of neutrophil extracellular traps through a recently described distinct form of programmed cell death called NETosis. Techniques to assess and quantitate NETosis in an unbiased, reproducible, and efficient way are lacking, considerably limiting the advancement of research in this field. We optimized and validated a new method to automatically quantify the percentage of neutrophils undergoing NETosis in real time using the IncuCyte ZOOM imaging platform and the membrane-permeability properties of two DNA dyes. Neutrophils undergoing NETosis induced by various physiological stimuli showed distinct changes, with a loss of multilobulated nuclei, as well as nuclear decondensation followed by membrane compromise, and were accurately counted by applying filters based on fluorescence intensity and nuclear size. Findings were confirmed and validated with the established method of immunofluorescence microscopy. The platform was also validated to rapidly assess and quantify the dose-dependent effect of inhibitors of NETosis. In addition, this method was able to distinguish among neutrophils undergoing NETosis, apoptosis, or necrosis based on distinct changes in nuclear morphology and membrane integrity. The IncuCyte ZOOM platform is a novel real-time assay that quantifies NETosis in a rapid, automated, and reproducible way, significantly optimizing the study of neutrophils. This platform is a powerful tool to assess neutrophil physiology and NETosis, as well as to swiftly develop and test novel neutrophil targets.

  19. Entropic One-Class Classifiers.

    PubMed

    Livi, Lorenzo; Sadeghian, Alireza; Pedrycz, Witold

    2015-12-01

    The one-class classification problem is a well-known research endeavor in pattern recognition. The problem is also known under different names, such as outlier and novelty/anomaly detection. The core of the problem consists in modeling and recognizing patterns belonging only to a so-called target class. All other patterns are termed nontarget, and therefore, they should be recognized as such. In this paper, we propose a novel one-class classification system that is based on an interplay of different techniques. Primarily, we follow a dissimilarity representation-based approach; we embed the input data into the dissimilarity space (DS) by means of an appropriate parametric dissimilarity measure. This step allows us to process virtually any type of data. The dissimilarity vectors are then represented by weighted Euclidean graphs, which we use to determine the entropy of the data distribution in the DS and at the same time to derive effective decision regions that are modeled as clusters of vertices. Since the dissimilarity measure for the input data is parametric, we optimize its parameters by means of a global optimization scheme, which considers both mesoscopic and structural characteristics of the data represented through the graphs. The proposed one-class classifier is designed to provide both hard (Boolean) and soft decisions about the recognition of test patterns, allowing an accurate description of the classification process. We evaluate the performance of the system on different benchmarking data sets, containing either feature-based or structured patterns. Experimental results demonstrate the effectiveness of the proposed technique.

  20. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  1. Engineering electromagnetic metamaterials and methanol fuel cells

    NASA Astrophysics Data System (ADS)

    Yen, Tajen

    2005-07-01

    Electromagnetic metamaterials are a group of artificial structures whose dimensions are subwavelength. Owing to their collective response to applied fields, electromagnetic metamaterials can exhibit unprecedented properties, for instance artificial magnetism at terahertz frequencies and beyond, negative magnetic response, and artificial plasma below ultraviolet and visible frequencies. Our goal is to engineer the aforementioned novel properties in the frequency regions of interest and further optimize their performance. To fulfill this task, we developed dedicated micro/nano fabrication techniques to construct magnetic metamaterials (i.e., split-ring resonators and L-shaped resonators) and electric metamaterials (i.e., plasmonic wires), and also employed the Taguchi method to study the optimal design of electromagnetic metamaterials. Moreover, by integrating magnetic and electric metamaterials, we have been pursuing the fabrication of so-called negative-index media: the Holy Grail that enables not only the reversal of conventional optical rules such as Snell's law, the Doppler shift, and Cerenkov radiation, but also the breaking of the diffraction limit to realize the superlensing effect. In addition to electromagnetic metamaterials, in this dissertation we also successfully miniaturize silicon-based methanol fuel cells by means of micro-electro-mechanical-system techniques, which promise to provide an integrated micro power source with excellent performance. Our demonstrated power density and energy density are among the highest reported. Finally, based on the results on metamaterials and micro fuel cells, we intend to supply building blocks toward an omnipotent device: a system with sensing, communication, computing, power, control, and actuation functions.

  2. SU-F-T-201: Acceleration of Dose Optimization Process Using Dual-Loop Optimization Technique for Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujimoto, R

    Purpose: The purpose was to demonstrate a developed acceleration technique for dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary between the two parts varies with the beam energy and the water equivalent depth, using the beam size as a single threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the optimization process of the TPS and investigated the dependence of the speedup effect on the target volume and the applicability to the worst-case optimization (WCO) in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time while varying the target volume. The speedup effect improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and the OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization and was effective particularly for large target cases.
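
    The two-level iteration described in the Methods can be sketched abstractly as follows: the dose-influence matrix is split into a main part, used in every inner iteration, and a halo part whose dose contribution is frozen during the inner loop and recomputed only in the outer loop. The matrices, quadratic objective and projected-gradient step below are invented for illustration and are not the clinical TPS implementation.

      import numpy as np

      # Toy dual-loop sketch: inner iterations update spot weights using the
      # cheap "main" dose while the "halo" dose is held fixed; the outer loop
      # recomputes the halo contribution.
      rng = np.random.default_rng(0)
      n_vox, n_spots = 200, 50
      dist = np.abs(np.arange(n_vox)[:, None] / n_vox - np.linspace(0, 1, n_spots)[None, :])
      d_full = np.exp(-(dist / 0.03) ** 2) + 0.02 * np.exp(-(dist / 0.3) ** 2)
      mask = dist < 0.1                        # "main" region near each spot
      d_main, d_halo = d_full * mask, d_full * (~mask)

      target = np.ones(n_vox)                  # prescribed (normalized) dose
      w = np.full(n_spots, 0.5)                # spot weights
      step = 0.5 / np.linalg.norm(d_full, 2) ** 2

      for outer in range(5):
          halo_dose = d_halo @ w               # frozen during the inner loop
          for inner in range(50):
              dose = d_main @ w + halo_dose
              grad = d_main.T @ (dose - target)
              w = np.maximum(w - step * grad, 0.0)   # keep weights non-negative

      final_cost = 0.5 * np.sum((d_full @ w - target) ** 2)
      print(f"final objective: {final_cost:.3f}")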

  3. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
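
    The benefit of analytic derivatives over finite-difference approximations can be illustrated generically with SciPy's Rosenbrock test function; this is not Pycycle or OpenMDAO code, only a sketch of why supplying an exact gradient reduces the number of function evaluations.

      import numpy as np
      from scipy.optimize import minimize, rosen, rosen_der

      # Same optimizer, same problem: one run approximates the gradient by
      # finite differences, the other uses the analytic gradient.
      x0 = np.full(8, 1.3)

      fd = minimize(rosen, x0, method="BFGS")                 # finite-difference gradient
      an = minimize(rosen, x0, jac=rosen_der, method="BFGS")  # analytic gradient

      print(f"finite-difference: {fd.nfev} function evaluations, f = {fd.fun:.2e}")
      print(f"analytic gradient: {an.nfev} function evaluations, f = {an.fun:.2e}")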

  4. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the value associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the cpu time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.

  5. Generalized SMO algorithm for SVM-based multitask learning.

    PubMed

    Cai, Feng; Cherkassky, Vladimir

    2012-06-01

    Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n³) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.

  6. Isothermal DNA origami folding: avoiding denaturing conditions for one-pot, hybrid-component annealing

    NASA Astrophysics Data System (ADS)

    Kopielski, Andreas; Schneider, Anne; Csáki, Andrea; Fritzsche, Wolfgang

    2015-01-01

    The DNA origami technique offers great potential for nanotechnology. Using biomolecular self-assembly, defined 2D and 3D nanoscale DNA structures can be realized. DNA origami allows the positioning of proteins, fluorophores or nanoparticles with an accuracy of a few nanometers and enables thereby novel nanoscale devices. Origami assembly usually includes a thermal denaturation step at 90 °C. Additional components used for nanoscale assembly (such as proteins) are often thermosensitive, and possibly damaged by such harsh conditions. They have therefore to be attached in an extra second step to avoid defects. To enable a streamlined one-step nanoscale synthesis - a so called one-pot folding - an adaptation of the folding procedures is required. Here we present a thermal optimization of this process for a 2D DNA rectangle-shaped origami resulting in an isothermal assembly protocol below 60 °C without thermal denaturation. Moreover, a room temperature protocol is presented using the chemical additive betaine, which is biocompatible in contrast to chemical denaturing approaches reported previously.

  7. Measurement of Interfacial Profiles of Wavy Film Flow on Inclined Wall

    NASA Astrophysics Data System (ADS)

    Rosli, N.; Amagai, K.

    2016-02-01

    Falling liquid films on inclined walls are present in many industrial processes, such as food processing, seawater desalination and the manufacturing of electronic devices. In order to ensure optimal efficiency of operations in these industries, a fundamental study of the interfacial flow profiles of the liquid film is of great importance. However, it is generally difficult to experimentally predict the interfacial profiles of liquid film flow on an inclined wall due to the unstable wavy flow that usually forms on the liquid film surface. In this paper, the liquid film surface velocity was measured using a non-intrusive technique called the photochromic dye marking method. This technique utilizes the color change of a liquid containing a photochromic dye when exposed to a UV light source. The movement of the liquid film surface marked by the UV light was analyzed together with the wave passing over the liquid. As a result, the gradual movement of the liquid film surface was found to slow slightly as the wave approached, before resuming after the intersection with the wave.

  8. Biomimetic approaches to modulate cellular adhesion in biomaterials: A review.

    PubMed

    Rahmany, Maria B; Van Dyke, Mark

    2013-03-01

    Natural extracellular matrix (ECM) proteins possess critical biological characteristics that provide a platform for cellular adhesion and activation of highly regulated signaling pathways. However, ECM-based biomaterials can have several limitations, including poor mechanical properties and risk of immunogenicity. Synthetic biomaterials alleviate the risks associated with natural biomaterials but often lack the robust biological activity necessary to direct cell function beyond initial adhesion. A thorough understanding of receptor-mediated cellular adhesion to the ECM and subsequent signaling activation has facilitated development of techniques that functionalize inert biomaterials to provide a biologically active surface. Here we review a range of approaches used to modify biomaterial surfaces for optimal receptor-mediated cell interactions, as well as provide insights into specific mechanisms of downstream signaling activation. In addition to a brief overview of integrin receptor-mediated cell function, so-called "biomimetic" techniques reviewed here include (i) surface modification of biomaterials with bioadhesive ECM macromolecules or specific binding motifs, (ii) nanoscale patterning of the materials and (iii) the use of "natural-like" biomaterials. Copyright © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  9. An analytical study of reduced-gravity liquid reorientation using a simplified marker and cell technique

    NASA Technical Reports Server (NTRS)

    Betts, W. S., Jr.

    1972-01-01

    A computer program called HOPI was developed to predict reorientation flow dynamics, wherein liquids move from one end of a closed, partially filled, rigid container to the other end under the influence of container acceleration. The program uses the simplified marker and cell numerical technique and, using explicit finite-differencing, solves the Navier-Stokes equations for an incompressible viscous fluid. The effects of turbulence are also simulated in the program. HOPI can consider curved as well as straight walled boundaries. Both free-surface and confined flows can be calculated. The program was used to simulate five liquid reorientation cases. Three of these cases simulated actual NASA LeRC drop tower test conditions while two cases simulated full-scale Centaur tank conditions. It was concluded that while HOPI can be used to analytically determine the fluid motion in a typical settling problem, there is a current need to optimize HOPI, both by reducing the computer usage time and by reducing the core storage required for a given problem size.

  10. Stability characterization of two multi-channel GPS receivers for accurate frequency transfer.

    NASA Astrophysics Data System (ADS)

    Taris, F.; Uhrich, P.; Thomas, C.; Petit, G.; Jiang, Z.

    In recent years, widespread use of the GPS common-view technique has led to major improvements, making it possible to compare remote clocks at their full level of performance. For integration times of 1 to 3 days, their frequency differences are consistently measured to about one part in 10¹⁴. Recent developments in atomic frequency standards suggest, however, that this performance may no longer be sufficient. The caesium fountain LPTF FO1, built at the BNM-LPTF, Paris, France, shows a short-term white frequency noise characterized by an Allan deviation σy(τ = 1 s) = 5×10⁻¹⁴ and a type B uncertainty of 2×10⁻¹⁵. To compare the frequencies of such highly stable standards would call for GPS common-view results to be averaged over times far exceeding the intervals of their optimal performance. Previous studies have shown the potential of carrier-phase and code measurements from geodetic GPS receivers for clock frequency comparisons. The experiment related here is an attempt to determine the stability limit that could be reached using this technique.

  11. Software-based stacking techniques to enhance depth of field and dynamic range in digital photomicrography.

    PubMed

    Piper, Jörg

    2010-01-01

    Several software solutions are powerful tools to enhance the depth of field and improve focus in digital photomicrography. By these means, the focal depth can be fundamentally optimized so that three-dimensional structures within specimens can be documented with superior quality. Thus, images can be created in light microscopy which will be comparable with scanning electron micrographs. The remaining sharpness will no longer be dependent on the specimen's vertical dimension or its range in regional thickness. Moreover, any potential lack of definition associated with loss of planarity and unsteadiness in the visual accommodation can be mitigated or eliminated so that the contour sharpness and resolution can be strongly enhanced. Through the use of complementary software, ultrahigh ranges in brightness and contrast (the so-called high dynamic range) can be corrected so that the final images will also be free from locally over- or underexposed zones. Furthermore, fine detail in low natural contrast can be visualized in much higher clarity. Fundamental enhancements of the global visual information will result from both techniques.

  12. Performance Analysis of Physical Layer Security of Opportunistic Scheduling in Multiuser Multirelay Cooperative Networks

    PubMed Central

    Shim, Kyusung; Do, Nhu Tri; An, Beongku

    2017-01-01

    In this paper, we study the physical layer security (PLS) of opportunistic scheduling for uplink scenarios of multiuser multirelay cooperative networks. To this end, we propose a low-complexity source relay selection scheme with comparable secrecy performance, called the proposed source relay selection (PSRS) scheme. Specifically, the PSRS scheme first selects the least vulnerable source and then selects the relay that maximizes the system secrecy capacity for the given selected source. Additionally, the maximal ratio combining (MRC) technique and the selection combining (SC) technique are considered at the eavesdropper, respectively. Investigating the system performance in terms of secrecy outage probability (SOP), closed-form expressions of the SOP are derived. The developed analysis is corroborated through Monte Carlo simulation. Numerical results show that the PSRS scheme significantly improves the secrecy performance of the system compared to that of the random source relay selection scheme, but does not outperform the optimal joint source relay selection (OJSRS) scheme. However, the PSRS scheme drastically reduces the required amount of channel state information (CSI) estimation compared to that required by the OJSRS scheme, especially in dense cooperative networks. PMID:28212286

  13. Optimization of fertirrigation efficiency in strawberry crops by application of fuzzy logic techniques.

    PubMed

    de la Torre, M L; Grande, J A; Aroba, J; Andujar, J M

    2005-11-01

    A high level of price support has favoured intensive agriculture and an increasing use of fertilisers and pesticides. This has resulted in the pollution of water and soils and damage to certain ecosystems. The target relationship that must be established between agriculture and the environment can be called "sustainable agriculture". In this work we aim to relate strawberry total yield to nitrate concentration in water at different soil depths. To achieve this objective, we have used the Predictive Fuzzy Rules Generator (PreFuRGe) tool, based on fuzzy logic and data mining, by means of which the dose that balances yield against the minimization of environmental damage can be determined. This determination is quite simple and is done directly from the obtained charts. This technique can be used for other types of crops, permitting one to determine precisely at which depth the appropriate dose of nitrate fertilizer must be applied, providing maximum yield with minimum loss of nitrates leaching through the saturated zone and polluting aquifers.

  14. RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm, previously proposed in literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.
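
    A permutation-encoded genetic algorithm for join ordering can be sketched as follows; the chromosome is an order of joins, the cost model (cardinalities and a fixed selectivity) is invented, and the operators are generic order crossover and swap mutation rather than RCQ-GA's specific design.

      import random

      # Toy GA for join ordering: a chromosome is a permutation of joins and
      # fitness is the sum of intermediate result sizes under a made-up cost model.
      random.seed(0)
      N_JOINS = 8
      CARD = [random.randint(10, 10_000) for _ in range(N_JOINS)]   # fake cardinalities

      def cost(order):
          size, total = CARD[order[0]], 0
          for j in order[1:]:
              size = max(1, int(size * CARD[j] * 0.001))   # assumed join selectivity 0.001
              total += size
          return total

      def order_crossover(p1, p2):
          a, b = sorted(random.sample(range(N_JOINS), 2))
          child = [None] * N_JOINS
          child[a:b] = p1[a:b]
          rest = [g for g in p2 if g not in child]
          for i in range(N_JOINS):
              if child[i] is None:
                  child[i] = rest.pop(0)
          return child

      pop = [random.sample(range(N_JOINS), N_JOINS) for _ in range(40)]
      for gen in range(100):
          pop.sort(key=cost)
          parents = pop[:20]
          children = []
          while len(children) < 20:
              c = order_crossover(*random.sample(parents, 2))
              if random.random() < 0.2:                     # swap mutation
                  i, j = random.sample(range(N_JOINS), 2)
                  c[i], c[j] = c[j], c[i]
              children.append(c)
          pop = parents + children

      best = min(pop, key=cost)
      print("best join order:", best, "cost:", cost(best))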

  15. Cooperative quantum-behaved particle swarm optimization with dynamic varying search areas and Lévy flight disturbance.

    PubMed

    Li, Desheng

    2014-01-01

    This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. One mechanism, called Dynamic Varying Search Area (DVSA), limits the range of the particles' activity to a reduced area. On the other hand, in order to escape local optima, Lévy flights are used to generate a stochastic disturbance in the movement of the particles. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem.
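
    One ingredient that can be shown concretely is the Lévy-flight disturbance: step lengths are commonly generated with Mantegna's algorithm, as in the sketch below. The exponent, scale factor and update rule are typical choices for illustration, not necessarily those used in CQPSO-DVSA-LFD.

      import numpy as np
      from math import gamma, sin, pi

      def levy_steps(n_dim, beta=1.5, rng=np.random.default_rng(0)):
          """Draw one Lévy-distributed step per dimension (Mantegna's algorithm)."""
          sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma_u, n_dim)
          v = rng.normal(0.0, 1.0, n_dim)
          return u / np.abs(v) ** (1 / beta)

      # disturb a particle around the best-known position with a scaled Lévy step
      position = np.zeros(2)
      best = np.array([1.0, -2.0])
      position = position + 0.01 * levy_steps(2) * (position - best)
      print("disturbed position:", position)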

  16. Reconstructing the Sky Location of Gravitational-Wave Detected Compact Binary Systems: Methodology for Testing and Comparison

    NASA Technical Reports Server (NTRS)

    Sidney, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.

    2014-01-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20 M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.

  17. Reconstructing the sky location of gravitational-wave detected compact binary systems: Methodology for testing and comparison

    NASA Astrophysics Data System (ADS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.

    2014-04-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.

  18. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization where we want to minimize cost subject to the balance between the total supply and the total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution obtained by PSO.
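
    A minimal sketch of the PSOGA idea follows, assuming a small balanced transportation instance, a quadratic penalty for the supply/demand constraints, and a Gaussian perturbation as the GA-style mutation; the instance data, penalty weight, and all parameter values are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Small hypothetical transportation instance: 2 supplies, 3 demands (balanced).
cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 3.0, 7.0]])
supply = np.array([50.0, 70.0])
demand = np.array([40.0, 30.0, 50.0])

def objective(x):
    """Transportation cost plus quadratic penalties for constraint violations."""
    x = np.maximum(x.reshape(2, 3), 0.0)
    penalty = (np.sum((x.sum(axis=1) - supply) ** 2) +
               np.sum((x.sum(axis=0) - demand) ** 2))
    return np.sum(cost * x) + 100.0 * penalty

def psoga(n_particles=30, n_iter=500, w=0.7, c1=1.5, c2=1.5, mut_rate=0.1):
    dim = 6
    pos = rng.uniform(0, 50, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        # GA-style mutation: randomly perturb a fraction of particle components.
        mask = rng.random((n_particles, dim)) < mut_rate
        pos = np.where(mask, pos + rng.normal(0, 5.0, (n_particles, dim)), pos)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest.reshape(2, 3), pbest_val.min()

plan, best_cost = psoga()
print(np.round(plan, 1), best_cost)
```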

  19. Fuzzy Controller Design Using Evolutionary Techniques for Twin Rotor MIMO System: A Comparative Study.

    PubMed

    Hashim, H A; Abido, M A

    2015-01-01

    This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multioutput (MIMO) system (TRMS) considering most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed.

  20. Fuzzy Controller Design Using Evolutionary Techniques for Twin Rotor MIMO System: A Comparative Study

    PubMed Central

    Hashim, H. A.; Abido, M. A.

    2015-01-01

    This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multioutput (MIMO) system (TRMS) considering most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed. PMID:25960738

  1. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  2. Mission-based Scenario Research: Experimental Design And Analysis

    DTIC Science & Technology

    2012-01-01

    neurotechnologies called Brain-Computer Interaction Technologies. 15. SUBJECT TERMS: neuroimaging, EEG, task loading, neurotechnologies, ground... neurotechnologies called Brain-Computer Interaction Technologies. INTRODUCTION: Imagine a system that can identify operator fatigue during a long-term...BCIT), a class of neurotechnologies that aim to improve task performance by incorporating measures of brain activity to optimize the interactions

  3. Research on Assessment of Life Satisfaction of Children and Adolescents

    ERIC Educational Resources Information Center

    Huebner, E. Scott

    2004-01-01

    Over the years, various psychologists have issued calls for greater attention to a science of positive psychology, which focuses on studying conditions that promote optimal human and societal development. Recent calls (e.g., McCullough and Snyder, 2000; Seligman and Csikszentmihalyi, 2000) have furthered interest in studies of the nature and…

  4. Cost-sensitive case-based reasoning using a genetic algorithm: application to medical diagnosis.

    PubMed

    Park, Yoon-Joo; Chun, Se-Hak; Kim, Byung-Chun

    2011-02-01

    This paper studies a new learning technique called cost-sensitive case-based reasoning (CSCBR), which incorporates unequal misclassification costs into the CBR model. Conventional CBR is now considered a suitable technique for diagnosis, prognosis and prescription in medicine. However, it lacks the ability to reflect asymmetric misclassification costs and often assumes that the cost of misclassifying a positive case (an illness) as a negative one (no illness) is the same as that of the opposite error. Thus, the objective of this research is to overcome this limitation of conventional CBR and encourage applying CBR to real-world medical cases associated with asymmetric misclassification costs. The main idea involves adjusting the optimal cut-off classification point for classifying the absence or presence of diseases and the cut-off distance point for selecting optimal neighbors within search spaces based on similarity distribution. These steps are dynamically adapted to new target cases using a genetic algorithm. We apply the proposed method to five real medical datasets and compare the results with two other cost-sensitive learning methods, C5.0 and CART. Our findings show that the total misclassification cost of CSCBR is lower than that of the other cost-sensitive methods in many cases. Even though the genetic algorithm has limitations in terms of unstable results and over-fitting of training data, CSCBR results with the GA are better overall than those of the other methods. The paired t-test results also indicate that the total misclassification cost of CSCBR is significantly less than that of C5.0 and CART for several datasets. We have proposed a new CBR method called cost-sensitive case-based reasoning (CSCBR) that can incorporate unequal misclassification costs into CBR and optimize the number of neighbors dynamically using a genetic algorithm. It is meaningful not only for introducing the concept of cost-sensitive learning to CBR, but also for encouraging the use of CBR in the medical area. The results show that the total misclassification cost of CSCBR does not increase in arithmetic progression as the cost of a false absence increases arithmetically; thus it is cost-sensitive. We also show that the total misclassification cost of CSCBR is the lowest among all methods in four datasets out of five, and the result is statistically significant in many cases. A limitation of the proposed CSCBR is that it is confined to binary classification for minimizing misclassification cost, because it was originally designed to classify binary cases. Future work will extend this method to multi-class problems with more than two groups. Copyright © 2010 Elsevier B.V. All rights reserved.
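
    The central cut-off idea can be illustrated in a few lines: choose the classification threshold that minimizes total expected cost when false negatives are costlier than false positives. The sketch below uses synthetic scores, assumed costs, and a simple grid search in place of the paper's per-case genetic-algorithm adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation data: predicted disease scores and true labels (1 = ill).
scores = rng.random(200)
labels = (scores + rng.normal(0, 0.3, 200) > 0.5).astype(int)

C_FN = 5.0   # assumed cost of missing an illness (false negative)
C_FP = 1.0   # assumed cost of a false alarm (false positive)

def total_cost(cutoff):
    """Total misclassification cost when predicting 'ill' for scores >= cutoff."""
    pred = (scores >= cutoff).astype(int)
    fn = np.sum((pred == 0) & (labels == 1))
    fp = np.sum((pred == 1) & (labels == 0))
    return C_FN * fn + C_FP * fp

cutoffs = np.linspace(0, 1, 101)
best = min(cutoffs, key=total_cost)
print(f"cost-minimizing cut-off: {best:.2f}, total cost: {total_cost(best):.0f}")
```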

  5. A Swarm Optimization approach for clinical knowledge mining.

    PubMed

    Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A

    2015-10-01

    Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal rule set that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with that of traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rule sets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Covert Channels in SIP for VoIP Signalling

    NASA Astrophysics Data System (ADS)

    Mazurczyk, Wojciech; Szczypiorski, Krzysztof

    In this paper, we evaluate available steganographic techniques for SIP (Session Initiation Protocol) that can be used for creating covert channels during the signalling phase of a VoIP (Voice over IP) call. Apart from characterizing existing steganographic methods, we provide new insights by introducing new techniques. We also estimate the amount of data that can be transferred in signalling messages for a typical IP telephony call.

  7. The ground truth about metadata and community detection in networks

    PubMed Central

    Peel, Leto; Larremore, Daniel B.; Clauset, Aaron

    2017-01-01

    Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system’s components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks’ links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures. PMID:28508065

  8. High-Order Space-Time Methods for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2013-01-01

    Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p^2.) Fourier analyses for the one and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.
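
    The reduction to the Radau IIA collocation family can be made concrete with a tiny ODE example. The sketch below applies the two-stage, third-order member of the Radau IIA family to the scalar test equation y' = λy; the test equation, step size, and integration interval are assumptions chosen only to show the stage solve and update, not the paper's conservation-law setting.

```python
import numpy as np

# Two-stage Radau IIA (3rd order) Butcher tableau.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def radau_iia_step(lam, y, h):
    """One Radau IIA step for the scalar test equation y' = lam * y."""
    # Stage equations K = lam*(y + h*A@K)  =>  (I - h*lam*A) K = lam*y*1
    K = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    return y + h * b @ K

lam, h, y, t = -10.0, 0.1, 1.0, 0.0
for _ in range(10):
    y = radau_iia_step(lam, y, h)
    t += h
print(f"numerical y({t:.1f}) = {y:.6f}, exact = {np.exp(lam * t):.6f}")
```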

  9. Trajectory Optimization: OTIS 4

    NASA Technical Reports Server (NTRS)

    Riehl, John P.; Sjauw, Waldy K.; Falck, Robert D.; Paris, Stephen W.

    2010-01-01

    The latest release of the Optimal Trajectories by Implicit Simulation (OTIS4) allows users to simulate and optimize aerospace vehicle trajectories. With OTIS4, one can seamlessly generate optimal trajectories and parametric vehicle designs simultaneously. New features also allow OTIS4 to solve non-aerospace continuous time optimal control problems. The inputs and outputs of OTIS4 have been updated extensively from previous versions. Inputs now make use of object-oriented constructs, including one called a metastring. Metastrings use a greatly improved calculator and common nomenclature to reduce the user's workload. They allow for more flexibility in specifying vehicle physical models, boundary conditions, and path constraints. The OTIS4 calculator supports common mathematical functions, Boolean operations, and conditional statements. This allows users to define their own variables for use as outputs, constraints, or objective functions. The user-defined outputs can directly interface with other programs, such as spreadsheets, plotting packages, and visualization programs. Internally, OTIS4 has more explicit and implicit integration procedures, including high-order collocation methods, the pseudo-spectral method, and several variations of multiple shooting. Users may switch easily between the various methods. Several unique numerical techniques, such as automated variable scaling and implicit integration grid refinement, support the integration methods. OTIS4 is also significantly more user friendly than previous versions. The installation process is nearly identical on various platforms, including Microsoft Windows, Apple OS X, and Linux operating systems. Cross-platform scripts also help make the execution of OTIS and post-processing of data easier. OTIS4 is supplied free by NASA and is subject to ITAR (International Traffic in Arms Regulations) restrictions. Users must have a Fortran compiler, and a Python interpreter is highly recommended.

  10. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper discusses optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size detected with a minimum 90% probability at 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, required to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaw sizes and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
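
    The binomial reasoning behind the 29-flaw point estimate can be checked in a few lines: if the true POD were only 0.90, the chance of detecting all 29 flaws is below 5%, so a clean 29-of-29 demonstration supports POD ≥ 90% at 95% confidence. The helper below also computes the probability of passing an all-hit demonstration for a hypothetical flaw set with size-dependent detection probabilities; those probabilities are illustrative assumptions.

```python
# Probability of detecting all 29 flaws if the true POD were exactly 0.90.
p_pass_at_pod_090 = 0.90 ** 29
print(f"P(29/29 | POD = 0.90) = {p_pass_at_pod_090:.4f}")   # ~0.047 < 0.05

def prob_passing_demo(pod_per_flaw):
    """Probability of passing an all-hit demonstration, given each flaw's
    (possibly size-dependent) probability of detection."""
    ppd = 1.0
    for p in pod_per_flaw:
        ppd *= p
    return ppd

# Hypothetical flaw set: 20 flaws detected with POD 0.97 and 9 with POD 0.99.
print(f"PPD = {prob_passing_demo([0.97] * 20 + [0.99] * 9):.3f}")
```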

  11. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.

  12. Metamodeling and optimization of the THF process with pulsating pressure

    NASA Astrophysics Data System (ADS)

    Bucconi, Marco; Strano, Matteo

    2018-05-01

    Tube hydroforming is a process used in various applications to form a tube into a desired complex shape, by combining internal pressure, which provides the stress required to yield the material, and axial feeding, which helps the material flow towards the bulging zone. Many studies have demonstrated how wrinkling and bursting defects can be severely reduced by means of a pulsating pressure, and how the so-called hammering hydroforming enhances the formability of the material. The definition of the optimum pressure and axial feeding profiles represents a daunting challenge in the design phase of the hydroforming operation for a new part. The quality of the formed part is highly dependent on the amplitude and the peak value of the pulsating pressure, along with the axial stroke. In this paper, research is reported, conducted by means of explicit finite element simulations of a hammering THF operation and metamodeling techniques, aimed at optimizing the process parameters for the production of a complex part. The improved formability is explored for different factors and an optimization strategy is used to determine the most convenient pressure and axial feed profile curves for the hammering THF process of the examined part. It is shown how the pulsating pressure allows the minimization of the energy input in the process while still respecting final quality requirements.

  13. An Improved Evolutionary Programming with Voting and Elitist Dispersal Scheme

    NASA Astrophysics Data System (ADS)

    Maity, Sayan; Gunjan, Kumar; Das, Swagatam

    Although initially conceived for evolving finite state machines, Evolutionary Programming (EP), in its present form, is largely used as a powerful real-parameter optimizer. For function optimization, EP mainly relies on its mutation operators. Over the past few years, several mutation operators have been proposed to improve the performance of EP on a wide variety of numerical benchmarks. However, unlike real-coded GAs, there has been no fitness-induced bias in parent selection for mutation in EP. That is, the i-th population member is selected deterministically for mutation and creation of the i-th offspring in each generation. In this article we present an improved EP variant called Evolutionary Programming with Voting and Elitist Dispersal (EPVE). The scheme encompasses a voting process which not only gives importance to the best solutions but also considers those solutions which are converging fast. By introducing the Elitist Dispersal Scheme, we maintain elitism by keeping the potential solutions intact while the other solutions are perturbed accordingly, so that they can escape local minima. By applying these two techniques we are able to explore regions that have not been explored so far and that may contain optima. Comparison with the recent and best-known versions of EP over 25 benchmark functions from the CEC (Congress on Evolutionary Computation) 2005 test suite for real-parameter optimization reflects the superiority of the new scheme in terms of final accuracy, speed, and robustness.

  14. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard) and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on Minimum Spanning Tree (MST), Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, and then linear programming is applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes, by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals, as well as any density distribution of terminals. The performance and complexity of RPSNC are analyzed and its performance is validated through simulation experiments.
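
    The MST baseline that RPSNC is contrasted with is easy to sketch: connect the partition representatives with a Euclidean minimum spanning tree and populate each edge with relays spaced at the communication range. The positions, range, and relay-spacing rule below are illustrative assumptions, and the sketch implements only that baseline, not the space-network-coding construction itself.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical 2-D positions of partition representatives (terminals) to reconnect.
terminals = np.array([[0.0, 0.0], [10.0, 1.0], [4.0, 9.0], [12.0, 8.0]])

# MST over Euclidean distances: the classical baseline RPSNC is compared against.
dist = cdist(terminals, terminals)
mst = minimum_spanning_tree(dist).toarray()

comm_range = 3.0   # assumed relay communication range
total_relays = 0
for i, j in np.argwhere(mst > 0):
    # Place relays so that no segment along the edge exceeds the communication range.
    n_relays = int(np.ceil(mst[i, j] / comm_range)) - 1
    total_relays += n_relays
    print(f"edge {i}-{j}: length {mst[i, j]:.1f}, relays needed: {n_relays}")
print("total relays (MST baseline):", total_relays)
```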

  15. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candès, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
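
    The splitting is easiest to see on the basic ℓ1-regularized least-squares problem min_u (μ/2)‖Au − f‖² + ‖u‖₁. The sketch below alternates a quadratic solve, a soft-thresholding step, and a Bregman-variable update; the random test problem and parameter values are assumptions, and the paper's imaging application adds image-specific (e.g., gradient/TV) terms on top of this same pattern.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the closed-form l1 proximal step."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_l1(A, f, mu=10.0, lam=1.0, n_iter=200):
    """Solve min_u mu/2 ||Au - f||^2 + ||u||_1 by splitting d = u."""
    n = A.shape[1]
    u = np.zeros(n); d = np.zeros(n); b = np.zeros(n)
    M = mu * (A.T @ A) + lam * np.eye(n)    # constant system matrix
    Atf = A.T @ f
    for _ in range(n_iter):
        u = np.linalg.solve(M, mu * Atf + lam * (d - b))   # quadratic subproblem
        d = shrink(u + b, 1.0 / lam)                       # l1 subproblem
        b = b + u - d                                      # Bregman update
    return u

# Synthetic sparse-recovery test problem (assumed for illustration).
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 8, replace=False)] = rng.normal(0, 3, 8)
f = A @ x_true
x_hat = split_bregman_l1(A, f)
print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```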

  16. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass- Or Time-Optimal Solutions

    NASA Technical Reports Server (NTRS)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  17. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass-or Time-Optimal Solutions

    NASA Technical Reports Server (NTRS)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  18. 77 FR 56710 - Proposed Information Collection (Call Center Satisfaction Survey): Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-13

    ... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0744] Proposed Information Collection (Call Center Satisfaction Survey): Comment Request AGENCY: Veterans Benefits Administration, Department of... techniques or the use of other forms of information technology. Title: VBA Call Center Satisfaction Survey...

  19. Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques

    DTIC Science & Technology

    2017-11-01

    ARL-TR-8225 ● NOV 2017. US Army Research Laboratory. Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques.

  20. Research on an augmented Lagrangian penalty function algorithm for nonlinear programming

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1978-01-01

    The augmented Lagrangian (ALAG) Penalty Function Algorithm for optimizing nonlinear mathematical models is discussed. The mathematical models of interest are deterministic in nature and finite dimensional optimization is assumed. A detailed review of penalty function techniques in general and the ALAG technique in particular is presented. Numerical experiments are conducted utilizing a number of nonlinear optimization problems to identify an efficient ALAG Penalty Function Technique for computer implementation.
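
    As a reminder of the mechanics, an augmented Lagrangian method wraps an unconstrained minimization inside an outer loop that updates the multiplier estimate and the penalty weight. The sketch below applies that pattern to a small equality-constrained toy problem; the example problem, update schedule, and use of scipy.optimize.minimize for the inner solve are illustrative assumptions, not the report's ALAG implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = x0^2 + x1^2 subject to h(x) = x0 + x1 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0

def augmented_lagrangian(f, h, x0, rho=1.0, n_outer=10):
    """Outer loop: minimize the augmented Lagrangian, then update multiplier/penalty."""
    lam, x = 0.0, np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        # Unconstrained subproblem: f(x) + lam*h(x) + (rho/2)*h(x)^2
        L = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
        x = minimize(L, x).x
        lam += rho * h(x)        # multiplier update
        rho *= 2.0               # tighten the penalty
    return x, lam

x_opt, lam_opt = augmented_lagrangian(f, h, x0=[0.0, 0.0])
print("x* =", np.round(x_opt, 4), "lambda* =", round(lam_opt, 4))  # expect ~[0.5, 0.5], -1.0
```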

  1. Design of Multishell Sampling Schemes with Uniform Coverage in Diffusion MRI

    PubMed Central

    Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid

    2017-01-01

    Purpose In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central to the success of multishell acquisition and reconstruction techniques. Methods The design of multishell acquisition schemes is still an open and active field of research, however. In this work, we provide a general method to design multishell acquisitions with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to multishell. Results We evaluate the impact of our method using simulations, assessing the angular resolution in one- and two-fiber-bundle configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. Discussion We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI. PMID:23625329

  2. Application of Interval Predictor Models to Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
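
    The minimal-spread, contain-all-the-data formulation becomes a linear program once the model is linear in its parameters. The sketch below fits a constant-spread interval around a Gaussian radial-basis center function to synthetic 1-D data using scipy.optimize.linprog; the data, basis centers, and constant-spread simplification are assumptions, whereas the paper's IPMs also parameterize the spread and add regularization constraints.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Hypothetical 1-D input-output data.
x = np.sort(rng.uniform(-3, 3, 80))
y = np.sin(x) + rng.normal(0, 0.15, 80)

# Gaussian radial basis centred on a coarse grid (an assumed IPM structure).
centers = np.linspace(-3, 3, 7)
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2)        # (80, 7) basis matrix

# Interval model: y in [Phi@p - s, Phi@p + s]. Minimize the spread s subject to
# every observation lying inside the interval -- a linear program in (p, s).
m = Phi.shape[1]
c = np.zeros(m + 1); c[-1] = 1.0                           # objective: minimize s
A_ub = np.vstack([np.hstack([ Phi, -np.ones((80, 1))]),    # Phi@p - y <= s
                  np.hstack([-Phi, -np.ones((80, 1))])])   # y - Phi@p <= s
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)])
p, s = res.x[:m], res.x[-1]
print(f"minimal half-spread s = {s:.3f}")
```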

  3. Stress in Harmonic Serialism

    ERIC Educational Resources Information Center

    Pruitt, Kathryn Ringler

    2012-01-01

    This dissertation proposes a model of word stress in a derivational version of Optimality Theory (OT) called Harmonic Serialism (HS; Prince and Smolensky 1993/2004, McCarthy 2000, 2006, 2010a). In this model, the metrical structure of a word is derived through a series of optimizations in which the "best" metrical foot is chosen…

  4. Three-Dimensional Microwave Hyperthermia for Breast Cancer Treatment in a Realistic Environment Using Particle Swarm Optimization.

    PubMed

    Nguyen, Phong Thanh; Abbosh, Amin; Crozier, Stuart

    2017-06-01

    In this paper, a technique for noninvasive microwave hyperthermia treatment for breast cancer is presented. In the proposed technique, microwave hyperthermia of patient-specific breast models is implemented using a three-dimensional (3-D) antenna array based on differential beam-steering subarrays to locally raise the temperature of the tumor to therapeutic values while keeping healthy tissue at normal body temperature. This approach is realized by optimizing the excitations (phases and amplitudes) of the antenna elements using the global optimization method particle swarm optimization. The antennae excitation phases are optimized to maximize the power at the tumor, whereas the amplitudes are optimized to accomplish the required temperature at the tumor. During the optimization, the technique ensures that no hotspots exist in healthy tissue. To implement the technique, a combination of linked electromagnetic and thermal analyses using MATLAB and the full-wave electromagnetic simulator is conducted. The technique is tested at 4.2 GHz, which is a compromise between the required power penetration and focusing, in a realistic simulation environment, which is built using a 3-D antenna array of 4 × 6 unidirectional antenna elements. The presented results on very dense 3-D breast models, which have the realistic dielectric and thermal properties, validate the capability of the proposed technique in focusing power at the exact location and volume of tumor even in the challenging cases where tumors are embedded in glands. Moreover, the models indicate the capability of the technique in dealing with tumors at different on- and off-axis locations within the breast with high efficiency in using the microwave power.

  5. Artificial Neural Identification and LMI Transformation for Model Reduction-Based Control of the Buck Switch-Mode Regulator

    NASA Astrophysics Data System (ADS)

    Al-Rabadi, Anas N.

    2009-10-01

    This research introduces a new method of intelligent control for the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then a numerical technique used in robust control, the linear matrix inequality (LMI) optimization technique, is used to determine the permutation matrix [P] so that a complete system transformation {[B˜], [C˜], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.
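
    Posing an LMI problem typically looks like the sketch below, which uses cvxpy (an assumed modeling tool, not necessarily the authors') to check a basic Lyapunov-type LMI, P ≻ 0 with AᵀP + PA ≺ 0, for a hypothetical stable matrix. The paper's LMI, which determines the permutation and transformation matrices, is more involved, but it follows the same pattern of semidefinite constraints on matrix variables.

```python
import numpy as np
import cvxpy as cp

# Hypothetical stable system matrix standing in for the transformed model.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

# LMI feasibility: find symmetric P > 0 with A^T P + P A < 0 (Lyapunov inequality).
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("status:", prob.status)
print("P =\n", P.value)
```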

  6. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    PubMed

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equation arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, the neural network, part of the larger field called soft computing, is exploited for modelling the equation in an unsupervised manner. The proposed approximate solutions of the higher-order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm and pattern search, hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the designed schemes are demonstrated by statistical performance measures based on a sufficiently large number of independent runs.

  7. Simulation to Support Local Search in Trajectory Optimization Planning

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.; Venable, K. Brent; Lindsey, James

    2012-01-01

    NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and civil tilt rotors. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. One way to address the rotorcraft noise problem is by exploiting powerful search techniques coming from artificial intelligence coupled with simulation and field tests to design low-noise flight profiles which can be tested in simulation or through field tests. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints directly into the problem formulation that addresses passenger safety and comfort.

  8. Functional Near Infrared Spectroscopy: Watching the Brain in Flight

    NASA Technical Reports Server (NTRS)

    Harrivel, Angela; Hearn, Tristan

    2012-01-01

    Functional Near Infrared Spectroscopy (fNIRS) is an emerging neurological sensing technique applicable to optimizing human performance in transportation operations, such as commercial aviation. Cognitive state can be determined via pattern classification of functional activations measured with fNIRS. Operational application calls for further development of algorithms and filters for dynamic artifact removal. The concept of using the frequency domain phase shift signal to tune a Kalman filter is introduced to improve the quality of fNIRS signals in real time. Hemoglobin concentration and phase shift traces were simulated for four different types of motion artifact to demonstrate the filter. Unwanted signal was reduced by at least 43%, and the contrast of the filtered oxygenated hemoglobin signal was increased by more than 100% overall. This filtering method is a good candidate for qualifying fNIRS signals in real time without auxiliary sensors.

  9. Rapid Design of Gravity Assist Trajectories

    NASA Technical Reports Server (NTRS)

    Carrico, J.; Hooper, H. L.; Roszman, L.; Gramling, C.

    1991-01-01

    Several International Solar Terrestrial Physics (ISTP) missions require the design of complex gravity-assisted trajectories in order to investigate the interaction of the solar wind with the Earth's magnetic field. These trajectories present a formidable trajectory design and optimization problem. The philosophy and methodology that enable an analyst to design and analyse such trajectories are discussed. The so-called 'floating end point' targeting, which allows the inherently nonlinear multiple-body problem to be solved with simple linear techniques, is described. The combination of floating end point targeting and analytic approximations with a Newton-method targeter is demonstrated to achieve trajectory design goals quickly, even for the very sensitive double lunar swingby trajectories used by the ISTP missions. A multiconic orbit integration scheme allows fast and accurate orbit propagation. A prototype software tool, Swingby, built for trajectory design and launch window analysis, is described.

  10. Functional Near Infrared Spectroscopy: Watching the Brain in Flight

    NASA Technical Reports Server (NTRS)

    Harrivel, Angela; Hearn, Tristan A.

    2012-01-01

    Functional Near Infrared Spectroscopy (fNIRS) is an emerging neurological sensing technique applicable to optimizing human performance in transportation operations, such as commercial aviation. Cognitive state can be determined via pattern classification of functional activations measured with fNIRS. Operational application calls for further development of algorithms and filters for dynamic artifact removal. The concept of using the frequency domain phase shift signal to tune a Kalman filter is introduced to improve the quality of fNIRS signals in real-time. Hemoglobin concentration and phase shift traces were simulated for four different types of motion artifact to demonstrate the filter. Unwanted signal was reduced by at least 43%, and the contrast of the filtered oxygenated hemoglobin signal was increased by more than 100% overall. This filtering method is a good candidate for qualifying fNIRS signals in real time without auxiliary sensors.

  11. Aircraft Trajectories Computation-Prediction-Control. Volume 1 (La Trajectoire de l’Avion Calcul-Prediction-Controle)

    DTIC Science & Technology

    1990-03-01

    knowledge covering problems of this type is called calculus of variations or optimal control theory (Refs. 1-8). As stated before, applications occur...to the optimality conditions and the feasibility equations of Problem (GP), respectively. Clearly, after the transformation (26) is applied, the...trajectories, the primal sequential gradient-restoration algorithm (PSGRA) is applied to compute optimal trajectories for aeroassisted orbital transfer

  12. Group Counseling Optimization: A Novel Approach

    NASA Astrophysics Data System (ADS)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.

  13. Theoretical Foundation of Copernicus: A Unified System for Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Ocampo, Cesar; Senent, Juan S.; Williams, Jacob

    2010-01-01

    The fundamental methods are described for the general spacecraft trajectory design and optimization software system called Copernicus. The methods rely on a unified framework that is used to model, design, and optimize spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The trajectory model, with its associated equations of motion and maneuver models, are discussed.

  14. The integrated manual and automatic control of complex flight systems

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1985-01-01

    Pilot/vehicle analysis techniques for optimizing aircraft handling qualities are presented. The analysis approach considered is based on optimal control frequency-domain techniques. These techniques stem from an optimal control approach to a Neal-Smith-like analysis of aircraft attitude dynamics, extended to analyze the flared landing task. Some modifications to the technique are suggested and discussed. An in-depth analysis of the effect of the experimental variables, such as the prefilter, is conducted to gain further insight into the flared landing task for this class of vehicle dynamics.

  15. Optimization of dual energy contrast enhanced breast tomosynthesis for improved mammographic lesion detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Saunders, R.; Samei, E.; Badea, C.; Yuan, H.; Ghaghada, K.; Qi, Y.; Hedlund, L. W.; Mukundan, S.

    2008-03-01

    Dual-energy contrast-enhanced breast tomosynthesis has been proposed as a technique to improve the detection of early-stage cancer in young, high-risk women. This study focused on optimizing this technique using computer simulations. The computer simulation used analytical calculations to optimize the signal difference to noise ratio (SdNR) of resulting images from such a technique at constant dose. The optimization included the optimal radiographic technique, optimal distribution of dose between the two single-energy projection images, and the optimal weighting factor for the dual energy subtraction. Importantly, the SdNR included both anatomical and quantum noise sources, as dual energy imaging reduces anatomical noise at the expense of increases in quantum noise. Assuming a tungsten anode, the maximum SdNR at constant dose was achieved for a high energy beam at 49 kVp with 92.5 μm copper filtration and a low energy beam at 49 kVp with 95 μm tin filtration. These analytical calculations were followed by Monte Carlo simulations that included the effects of scattered radiation and detector properties. Finally, the feasibility of this technique was tested in a small animal imaging experiment using a novel iodinated liposomal contrast agent. The results illustrated the utility of dual energy imaging and determined the optimal acquisition parameters for this technique. This work was supported in part by grants from the Komen Foundation (PDF55806), the Cancer Research and Prevention Foundation, and the NIH (NCI R21 CA124584-01). CIVM is a NCRR/NCI National Resource under P41-05959/U24-CA092656.

  16. Optimal time-domain technique for pulse width modulation in power electronics

    NASA Astrophysics Data System (ADS)

    Mayergoyz, I.; Tyagi, S.

    2018-05-01

    Optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimal criteria are discussed and illustrated by numerical examples.

  17. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    Fairly reasonable results were obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, which is known as a seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.

  18. Oxygen effects on senescence in chondrocytes and mesenchymal stem cells: consequences for tissue engineering.

    PubMed

    Moussavi-Harami, Farid; Duwayri, Yazan; Martin, James A; Moussavi-Harami, Farshid; Buckwalter, Joseph A

    2004-01-01

    Primary isolates of chondrocytes and mesenchymal stem cells are often insufficient for cell-based autologous grafting procedures, necessitating in vitro expansion of cell populations. However, the potential for expansion is limited by cellular senescence, a form of irreversible cell cycle arrest regulated by intrinsic and extrinsic factors. Intrinsic mechanisms common to most somatic cells enforce senescence at the so-called "Hayflick limit" of 60 population doublings. Termed "replicative senescence", this mechanism prevents cellular immortalization and suppresses oncogenesis. Although it is possible to overcome the Hayflick limit by genetically modifying cells, such manipulations are regarded as prohibitively dangerous in the context of tissue engineering. On the other hand, senescence associated with extrinsic factors, often called "stress-induced" senescence, can be avoided simply by modifying culture conditions. Because stress-induced senescence is "premature" in the sense that it can halt growth well before the Hayflick limit is reached, growth potential can be significantly enhanced by minimizing culture related stress. Standard culture techniques were originally developed to optimize the growth of fibroblasts but these conditions are inherently stressful to many other cell types. In particular, the 21% oxygen levels used in standard incubators, though well tolerated by fibroblasts, appear to induce oxidative stress in other cells. We reasoned that chondrocytes and MSCs, which are adapted to relatively low oxygen levels in vivo, might be sensitive to this form of stress. To test this hypothesis we compared the growth of MSC and chondrocyte strains in 21% and 5% oxygen. We found that incubation in 21% oxygen significantly attenuated growth and was associated with increased oxidant production. These findings indicated that sub-optimal standard culture conditions sharply limited the expansion of MSC and chondrocyte populations and suggest that cultures for grafting purposes should be maintained in a low-oxygen environment.

  19. Oxygen Effects on Senescence in Chondrocytes and Mesenchymal Stem Cells: Consequences for Tissue Engineering

    PubMed Central

    Moussavi-Harami, Farid; Duwayri, Yazan; Martin, James A; Moussavi-Harami, Farshid; Buckwalter, Joseph A

    2004-01-01

    Primary isolates of chondrocytes and mesenchymal stem cells are often insufficient for cell-based autologous grafting procedures, necessitating in vitro expansion of cell populations. However, the potential for expansion is limited by cellular senescence, a form of irreversible cell cycle arrest regulated by intrinsic and extrinsic factors. Intrinsic mechanisms common to most somatic cells enforce senescence at the so-called "Hayflick limit" of 60 population doublings. Termed "replicative senescence", this mechanism prevents cellular immortalization and suppresses oncogenesis. Although it is possible to overcome the Hayflick limit by genetically modifying cells, such manipulations are regarded as prohibitively dangerous in the context of tissue engineering. On the other hand, senescence associated with extrinsic factors, often called "stress-induced" senescence, can be avoided simply by modifying culture conditions. Because stress-induced senescence is "premature" in the sense that it can halt growth well before the Hayflick limit is reached, growth potential can be significantly enhanced by minimizing culture related stress. Standard culture techniques were originally developed to optimize the growth of fibroblasts but these conditions are inherently stressful to many other cell types. In particular, the 21% oxygen levels used in standard incubators, though well tolerated by fibroblasts, appear to induce oxidative stress in other cells. We reasoned that chondrocytes and MSCs, which are adapted to relatively low oxygen levels in vivo, might be sensitive to this form of stress. To test this hypothesis we compared the growth of MSC and chondrocyte strains in 21% and 5% oxygen. We found that incubation in 21% oxygen significantly attenuated growth and was associated with increased oxidant production. These findings indicated that sub-optimal standard culture conditions sharply limited the expansion of MSC and chondrocyte populations and suggest that cultures for grafting purposes should be maintained in a low-oxygen environment. PMID:15296200

  20. A proposed technique for vehicle tracking, direction, and speed determination

    NASA Astrophysics Data System (ADS)

    Fisher, Paul S.; Angaye, Cleopas O.; Fisher, Howard P.

    2004-12-01

    A technique for recognition of vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem with significant hurdles still to be addressed. These are discussed in the paper. However, preliminary results also show promise for this technique for use in security and defense environments where the penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea and is based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets, and are derived by a technique called Factoring, yielding a table of rules called a Ruling. These rules can then be used in pattern recognition applications such as described in this paper.

  1. Cooperative Quantum-Behaved Particle Swarm Optimization with Dynamic Varying Search Areas and Lévy Flight Disturbance

    PubMed Central

    Li, Desheng

    2014-01-01

    This paper proposes a novel variant of cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm with two mechanisms to reduce the search space and avoid the stagnation, called CQPSO-DVSA-LFD. One mechanism is called Dynamic Varying Search Area (DVSA), which takes charge of limiting the ranges of particles' activity into a reduced area. On the other hand, in order to escape the local optima, Lévy flights are used to generate the stochastic disturbance in the movement of particles. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and the combinatorial optimization issue, that is, the job-shop scheduling problem. PMID:24851085
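    The abstract does not reproduce the update equations, so the following is only a minimal sketch: it combines the standard quantum-behaved PSO position update (local attractor plus contraction toward the mean-best position) with a stochastic Lévy-flight disturbance generated by Mantegna's algorithm. The DVSA mechanism is omitted, and all parameter values and the test function are illustrative assumptions, not the authors' implementation.

      import math
      import numpy as np

      def levy_step(dim, beta=1.5, rng=None):
          # Mantegna's algorithm for Levy-distributed step lengths.
          rng = rng or np.random.default_rng()
          sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                     (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma_u, dim)
          v = rng.normal(0.0, 1.0, dim)
          return u / np.abs(v) ** (1 / beta)

      def qpso_levy(f, bounds, n_particles=30, iters=200, alpha=0.75, levy_prob=0.1, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = bounds[:, 0], bounds[:, 1]
          dim = len(lo)
          x = rng.uniform(lo, hi, (n_particles, dim))
          pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_val)].copy()
          for _ in range(iters):
              mbest = pbest.mean(axis=0)                     # mean of personal bests
              for i in range(n_particles):
                  phi = rng.random(dim)
                  p = phi * pbest[i] + (1 - phi) * g         # local attractor
                  u = rng.uniform(1e-12, 1.0, dim)
                  sign = np.where(rng.random(dim) < 0.5, -1.0, 1.0)
                  x[i] = p + sign * alpha * np.abs(mbest - x[i]) * np.log(1.0 / u)
                  if rng.random() < levy_prob:               # Levy-flight disturbance
                      x[i] += 0.01 * levy_step(dim, rng=rng) * (x[i] - g)
                  x[i] = np.clip(x[i], lo, hi)
                  val = f(x[i])
                  if val < pbest_val[i]:
                      pbest[i], pbest_val[i] = x[i].copy(), val
              g = pbest[np.argmin(pbest_val)].copy()
          return g, pbest_val.min()

      bounds = np.array([[-5.0, 5.0]] * 10)
      print(qpso_levy(lambda v: float(np.sum(v ** 2)), bounds))   # sphere test function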

  2. Design Optimization Toolkit: Users' Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods that are suited for gradient/nongradient-based optimization, large scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  3. Technical errors in planar bone scanning.

    PubMed

    Naddaf, Sleiman Y; Collier, B David; Elgazzar, Abdelhamid H; Khalil, Magdy M

    2004-09-01

    Optimal technique for planar bone scanning improves image quality, which in turn improves diagnostic efficacy. Because planar bone scanning is one of the most frequently performed nuclear medicine examinations, maintaining high standards for this examination is a daily concern for most nuclear medicine departments. Although some problems such as patient motion are frequently encountered, the degraded images produced by many other deviations from optimal technique are rarely seen in clinical practice and therefore may be difficult to recognize. The objectives of this article are to list optimal techniques for 3-phase and whole-body bone scanning, to describe and illustrate a selection of deviations from these optimal techniques for planar bone scanning, and to explain how to minimize or avoid such technical errors.

  4. Mathematical Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Bellman, R. (Editor)

    1963-01-01

    The papers collected in this volume were presented at the Symposium on Mathematical Optimization Techniques held in the Santa Monica Civic Auditorium, Santa Monica, California, on October 18-20, 1960. The objective of the symposium was to bring together, for the purpose of mutual education, mathematicians, scientists, and engineers interested in modern optimization techniques. Some 250 persons attended. The techniques discussed included recent developments in linear, integer, convex, and dynamic programming as well as the variational processes surrounding optimal guidance, flight trajectories, statistical decisions, structural configurations, and adaptive control systems. The symposium was sponsored jointly by the University of California, with assistance from the National Science Foundation, the Office of Naval Research, the National Aeronautics and Space Administration, and The RAND Corporation, through Air Force Project RAND.

  5. Improving Upon String Methods for Transition State Discovery.

    PubMed

    Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker

    2012-02-14

    Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.

  6. Underwater Robot Task Planning Using Multi-Objective Meta-Heuristics

    PubMed Central

    Landa-Torres, Itziar; Manjarres, Diana; Bilbao, Sonia; Del Ser, Javier

    2017-01-01

    Robotics deployed in the underwater medium are subject to stringent operational conditions that impose a high degree of criticality on the allocation of resources and the schedule of operations in mission planning. In this context the so-called cost of a mission must be considered as an additional criterion when designing optimal task schedules within the mission at hand. Such a cost can be conceived as the impact of the mission on the robotic resources themselves, which range from the consumption of battery to other negative effects such as mechanic erosion. This manuscript focuses on this issue by devising three heuristic solvers aimed at efficiently scheduling tasks in robotic swarms, which collaborate together to accomplish a mission, and by presenting experimental results obtained over realistic scenarios in the underwater environment. The heuristic techniques resort to a Random-Keys encoding strategy to represent the allocation of robots to tasks and the relative execution order of such tasks within the schedule of certain robots. The obtained results reveal interesting differences in terms of Pareto optimality and spread between the algorithms considered in the benchmark, which are insightful for the selection of a proper task scheduler in real underwater campaigns. PMID:28375160

  7. Interactions between Flight Dynamics and Propulsion Systems of Air-Breathing Hypersonic Vehicles

    NASA Astrophysics Data System (ADS)

    Dalle, Derek J.

    The development and application of a first-principles-derived reduced-order model called MASIV (Michigan/AFRL Scramjet In Vehicle) for an air-breathing hypersonic vehicle is discussed. Several significant and previously unreported aspects of hypersonic flight are investigated. A fortunate coupling between increasing Mach number and decreasing angle of attack is shown to extend the range of operating conditions for a class of supersonic inlets. Detailed maps of isolator unstart and ram-to-scram transition are shown on the flight corridor map for the first time. In scram mode the airflow remains supersonic throughout the engine, while in ram mode there is a region of subsonic flow. Accurately predicting the transition between these two modes requires models for complex shock interactions, finite-rate chemistry, fuel-air mixing, pre-combustion shock trains, and thermal choking, which are incorporated into a unified framework here. Isolator unstart occurs when the pre-combustion shock train is longer than the isolator, which blocks airflow from entering the engine. Finally, cooptimization of the vehicle design and trajectory is discussed. An optimal control technique is introduced that greatly reduces the number of computations required to optimize the simulated trajectory.

  8. Underwater Robot Task Planning Using Multi-Objective Meta-Heuristics.

    PubMed

    Landa-Torres, Itziar; Manjarres, Diana; Bilbao, Sonia; Del Ser, Javier

    2017-04-04

    Robotics deployed in the underwater medium are subject to stringent operational conditions that impose a high degree of criticality on the allocation of resources and the schedule of operations in mission planning. In this context the so-called cost of a mission must be considered as an additional criterion when designing optimal task schedules within the mission at hand. Such a cost can be conceived as the impact of the mission on the robotic resources themselves, which range from the consumption of battery to other negative effects such as mechanic erosion. This manuscript focuses on this issue by devising three heuristic solvers aimed at efficiently scheduling tasks in robotic swarms, which collaborate together to accomplish a mission, and by presenting experimental results obtained over realistic scenarios in the underwater environment. The heuristic techniques resort to a Random-Keys encoding strategy to represent the allocation of robots to tasks and the relative execution order of such tasks within the schedule of certain robots. The obtained results reveal interesting differences in terms of Pareto optimality and spread between the algorithms considered in the benchmark, which are insightful for the selection of a proper task scheduler in real underwater campaigns.

  9. Power Generation from a Radiative Thermal Source Using a Large-Area Infrared Rectenna

    NASA Astrophysics Data System (ADS)

    Shank, Joshua; Kadlec, Emil A.; Jarecki, Robert L.; Starbuck, Andrew; Howell, Stephen; Peters, David W.; Davids, Paul S.

    2018-05-01

    Electrical power generation from a moderate-temperature thermal source by means of direct conversion of infrared radiation is important and highly desirable for energy harvesting from waste heat and micropower applications. Here, we demonstrate direct rectified power generation from an unbiased large-area nanoantenna-coupled tunnel diode rectifier called a rectenna. Using a vacuum radiometric measurement technique with irradiation from a temperature-stabilized thermal source, a generated power density of 8 nW /cm2 is observed at a source temperature of 450 °C for the unbiased rectenna across an optimized load resistance. The optimized load resistance for the peak power generation for each temperature coincides with the tunnel diode resistance at zero bias and corresponds to the impedance matching condition for a rectifying antenna. Current-voltage measurements of a thermally illuminated large-area rectenna show current zero crossing shifts into the second quadrant indicating rectification. Photon-assisted tunneling in the unbiased rectenna is modeled as the mechanism for the large short-circuit photocurrents observed where the photon energy serves as an effective bias across the tunnel junction. The measured current and voltage across the load resistor as a function of the thermal source temperature represents direct current electrical power generation.
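    The statement that peak power occurs when the load resistance equals the zero-bias diode resistance is the ordinary maximum-power-transfer condition, which the short sketch below verifies numerically; the voltage and resistance values are arbitrary placeholders, not the measured device parameters.

      import numpy as np

      # Power delivered to a load R_L from a source with internal resistance R_D:
      #   P(R_L) = V**2 * R_L / (R_D + R_L)**2
      # which peaks at R_L = R_D, the impedance-matching condition noted above.
      V, R_D = 1e-3, 5e3                      # illustrative values only
      R_L = np.logspace(2, 5, 401)            # sweep the load from 100 ohm to 100 kohm
      P = V ** 2 * R_L / (R_D + R_L) ** 2
      print(f"peak near R_L = {R_L[np.argmax(P)]:.0f} ohm for R_D = {R_D:.0f} ohm")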

  10. Robustness of reduced-order observer-based controllers in transitional 2D Blasius boundary layers

    NASA Astrophysics Data System (ADS)

    Belson, Brandt; Semeraro, Onofrio; Rowley, Clarence; Pralits, Jan; Henningson, Dan

    2011-11-01

    In this work, we seek to delay transition in the Blasius boundary layer. We trip the flow with an upstream disturbance and dampen the growth of the resulting structures downstream. The observer-based controllers use a single sensor and a single localized body force near the wall. To formulate the controllers, we first find a reduced-order model of the system via the Eigensystem Realization Algorithm (ERA), then find the H2 optimal controller for this reduced-order system. We find the resulting controllers are effective only when the sensor is upstream of the actuator (in a feedforward configuration), but, as expected, are sensitive to model uncertainty. When the sensor is downstream of the actuator (in a feedback configuration), the reduced-order observer-based controllers are not robust and are ineffective on the full system. In order to investigate the robustness properties of the system, an iterative technique called the adjoint of the direct adjoint (ADA) is employed to find a full-dimensional H2 optimal controller. This avoids the reduced-order modelling step and serves as a reference point. ADA is promising for investigating the lack of robustness previously mentioned.

  11. Shining a light on high volume photocurable materials.

    PubMed

    Palin, William M; Leprince, Julian G; Hadis, Mohammed A

    2018-05-01

    Spatial and temporal control is a key advantage for placement and rapid setting of light-activated resin composites. Conventionally, placement of multiple thin layers (<2mm) reduces the effect of light attenuation through highly filled and pigmented materials to increase polymerisation at the base of the restoration. However, and although light curing greater than 2mm thick layers is not an entirely new phenomenon, the desire amongst dental practitioners for even more rapid processing in deep cavities has led to the growing acceptance of so-called "bulk fill" (4-6mm thick) resin composites that are irradiated for 10-20s in daily clinical practice. The change in light transmission and attenuation during photopolymerisation are complex and related to path length, absorption properties of the photoinitiator and pigment, optical properties of the resin and filler and filler morphology. Understanding how light is transmitted through depth is therefore critical for ensuring optimal material properties at the base of thick increments. This article will briefly highlight the advent of current commercial materials that rationalise bulk filling techniques in dentistry, the relationship between light transmission and polymerisation and how optimal curing depths might be achieved. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.

  12. Design of phononic band gaps in functionally graded piezocomposite materials by using topology optimization

    NASA Astrophysics Data System (ADS)

    Vatanabe, Sandro L.; Silva, Emílio C. N.

    2011-04-01

    One of the properties of composite materials is the possibility of having phononic band gaps, within which sound and vibrations at certain frequencies do not propagate. These materials are called Phononic Crystals (PCs). PCs with large band gaps are of great interest for many applications, such as transducers, elastic/acoustic filters, noise control, and vibration shields. Most previous work concentrates on PCs made of elastic isotropic materials; however, band gaps can be enlarged by using non-isotropic materials, such as piezoelectric materials. Since the main property of PCs is the presence of band gaps, one possible way to design structures which have a desired band gap is through the Topology Optimization Method (TOM). TOM is a computational technique that determines the layout of a material such that a prescribed objective is maximized. Functionally Graded Materials (FGM) are composite materials whose properties vary gradually and continuously along a specific direction within the domain of the material. One of the advantages of applying the FGM concept to TOM is that a discrete 0-1 result is not necessary, since the material gradation is part of the solution. Therefore, the interpretation step becomes easier and the dispersion diagram obtained from the optimization is not significantly modified. In this work, the main objective is to optimize the position and width of piezocomposite material band gaps. Finite element analysis is implemented with Bloch-Floquet theory to solve the dynamic behavior of two-dimensional functionally graded unit cells. The results demonstrate that phononic band gaps can be designed by using this methodology.

  13. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze inter-planetary, planetocentric, and combination trajectories. Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  14. Method of optimization onboard communication network

    NASA Astrophysics Data System (ADS)

    Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.

    2018-02-01

    In this article, optimization levels for the onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we identify a set of initial data for possible modeling of the OCN. We also propose a mathematical technique for implementing the OCN optimization procedure. This technique is based on the principles and ideas of binary programming. It is shown that the binary programming technique makes it possible to obtain an inherently optimal solution for avionics tasks. An example of applying the proposed approach to the problem of device assignment in an OCN is considered.
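    The abstract does not spell out the binary-programming formulation, so the sketch below only illustrates the general idea: a device-to-node assignment posed as a 0-1 program with one-to-one constraints and solved by brute-force enumeration for a toy instance. The cost matrix, problem size, and variable names are assumptions for illustration; a real OCN model would use an ILP solver and the paper's own constraints.

      import itertools
      import numpy as np

      # Illustrative cost matrix: cost[i, j] = penalty of assigning device i to network node j.
      cost = np.array([[4.0, 2.0, 8.0],
                       [6.0, 4.0, 3.0],
                       [5.0, 7.0, 6.0]])

      def assign_by_enumeration(cost):
          # Equivalent binary program:
          #   min  sum_{i,j} cost[i,j] * x[i,j]
          #   s.t. sum_j x[i,j] = 1 for all i,  sum_i x[i,j] = 1 for all j,  x[i,j] in {0,1}.
          # Enumeration over permutations is only viable for tiny instances.
          n = cost.shape[0]
          best_perm, best_cost = None, float("inf")
          for perm in itertools.permutations(range(n)):
              c = sum(cost[i, j] for i, j in enumerate(perm))
              if c < best_cost:
                  best_perm, best_cost = perm, c
          return best_perm, best_cost

      print(assign_by_enumeration(cost))   # -> ((1, 2, 0), 10.0)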

  15. Seven-spot ladybird optimization: a novel and efficient metaheuristic algorithm for numerical optimization.

    PubMed

    Wang, Peng; Zhu, Zhouquan; Huang, Shuai

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.

  16. Seven-Spot Ladybird Optimization: A Novel and Efficient Metaheuristic Algorithm for Numerical Optimization

    PubMed Central

    Zhu, Zhouquan

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  17. An adaptive sharing elitist evolution strategy for multiobjective optimization.

    PubMed

    Costa, Lino; Oliveira, Pedro

    2003-01-01

    Almost all approaches to multiobjective optimization are based on Genetic Algorithms (GAs), and implementations based on Evolution Strategies (ESs) are very rare. Thus, it is crucial to investigate how ESs can be extended to multiobjective optimization, since they have, in the past, proven to be powerful single objective optimizers. In this paper, we present a new approach to multiobjective optimization, based on ESs. We call this approach the Multiobjective Elitist Evolution Strategy (MEES) as it incorporates several mechanisms, like elitism, that improve its performance. When compared with other algorithms, MEES shows very promising results in terms of performance.

  18. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.
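    As a rough illustration of the pattern the abstract describes (a neural network standing in for an expensive analysis inside an optimization loop), the sketch below uses generic open-source tools rather than the NETS/PROSSS codes; the quadratic "analysis" function, sample counts, and network size are assumptions chosen only so the example runs quickly.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      def expensive_analysis(x):
          # Stand-in for a finite element analysis; cheap here so the example runs.
          return (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 0.5) ** 2

      # 1. Sample the analysis to build training data for the surrogate.
      X_train = rng.uniform(-2.0, 2.0, size=(200, 2))
      y_train = np.array([expensive_analysis(x) for x in X_train])

      # 2. Train the neural-network surrogate of the analysis.
      surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
      surrogate.fit(X_train, y_train)

      # 3. Optimize the surrogate to get a near-optimal starting design.
      res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                     x0=np.zeros(2), bounds=[(-2, 2), (-2, 2)])

      # 4. The surrogate optimum then seeds a conventional optimization of the true analysis.
      refined = minimize(expensive_analysis, x0=res.x, bounds=[(-2, 2), (-2, 2)])
      print(res.x, refined.x)   # refined.x should approach (1.0, -0.5)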

  19. Optimal systems of geoscience surveying: A preliminary discussion

    NASA Astrophysics Data System (ADS)

    Shoji, Tetsuya

    2006-10-01

    In any geoscience survey, each survey technique must be effectively applied, and many techniques are often combined optimally. An important task is to get necessary and sufficient information to meet the requirement of the survey. A prize-penalty function quantifies effectiveness of the survey, and hence can be used to determine the best survey technique. On the other hand, an information-cost function can be used to determine the optimal combination of survey techniques on the basis of the geoinformation obtained. Entropy is available to evaluate geoinformation. A simple model suggests the possibility that low-resolvability techniques are generally applied at early stages of survey, and that higher-resolvability techniques should alternate with lower-resolvability ones with the progress of the survey.

  20. AI in CALL--Artificially Inflated or Almost Imminent?

    ERIC Educational Resources Information Center

    Schulze, Mathias

    2008-01-01

    The application of techniques from artificial intelligence (AI) to CALL has commonly been referred to as intelligent CALL (ICALL). ICALL is only slightly older than the "CALICO Journal", and this paper looks back at a quarter century of published research mainly in North America and by North American scholars. This "inventory…

  1. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process, and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.

  2. NEAT: a spatial telescope to detect nearby exoplanets using astrometry

    NASA Astrophysics Data System (ADS)

    Crouzier, Antoine

    2015-01-01

    With the present state of exoplanet detection techniques, none of the rocky planets of the Solar System would be discovered, yet their presence is a very strong constraint on the scenarios of formation of planetary systems. Astrometry, by measuring the reflex effect of planets on their central host stars, leads us to the masses of planets and to their orbit determination. This technique is used frequently and is very successful in determining the masses and the orbits of binary stars. From space, it is possible to use differential astrometry around nearby Solar-type stars to detect exoplanets down to one Earth mass in the habitable zone, where the sensitivity of the technique is optimal. Finding habitable Earths in the Solar neighborhood would be a major step forward for exoplanet detection, and these planets would be prime targets for attempting to find life outside of the Solar System by searching for bio-markers in their atmospheres. A scientific consortium has formed to promote this kind of astrometric space mission. A mission called NEAT (Nearby Earth Astrometric Telescope) was proposed to ESA in 2010. A laboratory testbed called NEAT-demo was assembled at IPAG; its main goal is to demonstrate CCD detector calibration to the required accuracy. During my PhD, my activities were related to astrophysical aspects as well as instrumental aspects of the mission. Regarding the scientific case, I compiled a catalog of mission target stars and reference stars (needed for the differential astrometric measurements) and I estimated the scientific return of NEAT-like missions in terms of the number of detected exoplanets and their parameter distributions. The second aspect of the PhD relates to the testbed, which mimics the NEAT telescope configuration. I present the testbed itself, the data analysis methods, and the results. An accuracy of 3e-4 pixel was obtained for the relative positions of artificial stars, and we determined that measurement of pixel positions by the metrology system is currently limited by stray light.

  3. Early Breast Cancer Diagnosis Using Microwave Imaging via Space-Frequency Algorithm

    NASA Astrophysics Data System (ADS)

    Vemulapalli, Spandana

    The conventional breast cancer detection methods have limitations ranging from ionizing radiation and low specificity to high cost. These limitations make way for a suitable alternative, Microwave Imaging, as a screening technique in the detection of breast cancer. The discernible differences between benign, malignant, and healthy breast tissues and the ability to avoid the harmful effects of ionizing radiation make microwave imaging a feasible breast cancer detection technique. Earlier studies have shown the variation of the electrical properties of healthy and malignant tissues as a function of frequency, which drives a high bandwidth requirement. Ultrawideband, Wideband, and Narrowband arrays have been designed, simulated, and optimized for high (44%), medium (33%), and low (7%) bandwidths, respectively, using the electromagnetic (EM) software called FEKO. These arrays are then used to illuminate the breast model (phantom), and the backscattered signals are obtained in the near field for each case. The Microwave Imaging via Space-Time (MIST) beamforming algorithm in the frequency domain is next applied to these near-field backscattered monostatic frequency response signals for the image reconstruction of the breast model. The main purpose of this investigation is to assess the impact of bandwidth and implement a novel imaging technique for use in the early detection of breast cancer. Earlier studies show the implementation of the MIST imaging algorithm on time domain signals via a frequency domain beamformer. The performance evaluation of the imaging algorithm on the frequency response signals has been carried out in the frequency domain. The energy profile of the breast in the spatial domain is created via the frequency domain Parseval's theorem. The beamformer weights (not including the effect of the skin) have been calculated using the MIST algorithm for the Ultrawideband, Wideband, and Narrowband arrays, respectively. Quality metrics such as dynamic range and radiometric resolution are also evaluated for all three types of arrays.

  4. Improvement of mechanical properties of polymeric composites: Experimental methods and new systems

    NASA Astrophysics Data System (ADS)

    Nguyen, Felix Nhanchau

    Filler- (e.g., particulate or fiber) reinforced structural polymers or polymeric composites have changed the way things are made. Today, they are found, for example, in air/ground transportation vehicles, sporting goods, ballistic barrier applications and weapons, electronic packaging, musical instruments, fashion items, and more. As the demand increases, so does the desire to have not only well balanced mechanical properties, but also light weight and low cost. This leads to a constant search for novel constituents and additives, new fabrication methods and analytical techniques. To achieve new or improved composite materials requires more than the identification of the right reinforcements to be used with the right polymer matrix at the right loading. Also, an optimized adhesion between the two phases and a toughened matrix system are needed. This calls for new methods to predict, modify and assess the level of adhesion, and new developments in matrix tougheners to minimize compromises in other mechanical/thermal properties. Furthermore, structural optimization, associated with fabrication (e.g., avoidance of fiber-fiber touching or particle aggregation), and sometimes special properties, such as electrical conductivity or magnetic susceptibility are necessary. Finally, the composite system's durability, often under hostile conditions, is generally mandatory. The present study researches new predictive and experimental methods for optimizing and characterizing filler-matrix adhesion and develops a new type of epoxy tougheners. Specifically, (1) a simple thermodynamic parameter evaluated by UNIFAC is applied successfully to screen out candidate adhesion promoters, which is necessary for optimization of the physio-chemical interactions between the two phases; (2) an optical-acoustical mechanical test assisted with an acoustic emission technique is developed to de-convolute filler debonding/delamination among many other micro failure events, and (3) novel core (thermoplastic)-shell (dendrimer) nanoparticles are synthesized and incorporated in epoxy to enhance both stiffness and the polymer's fracture toughness or resistance to crack growth. This unique dendrimer has the possibility of acting both as an adhesion promoter and filler spacer, when applied to the filler surface, and as a matrix enhancer, when combined with other materials, with the unique ability to improve mechanical/thermal/electrical properties. These developments should help in the creation of the next generation of polymeric composites.

  5. Cryogenic Eyesafer Laser Optimization for Use Without Liquid Nitrogen

    DTIC Science & Technology

    2014-02-01

    liquid cryogens. This calls for optimal performance around 125–150 K—high enough for reasonably efficient operation of a Stirling cooler. We...state laser system with an optimum operating temperature somewhat higher—ideally 125–150 K—can be identified, then a Stirling cooler can be used to...needed to optimize laser performance in the desired temperature range. This did not include actual use of Stirling coolers, but rather involved both

  6. Common aero vehicle autonomous reentry trajectory optimization satisfying waypoint and no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Jorris, Timothy R.

    2007-12-01

    To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitation, and heating, waypoint, and no-fly zone constraints.

  7. Acceleration techniques in the univariate Lipschitz global optimization

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. The novel powerful local tuning and local improvement techniques are described in the contribution as well as the traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on the class of 100 widely used test functions.
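    For readers unfamiliar with the geometric approach that this contribution builds on, the sketch below implements the basic Piyavskii-Shubert saw-tooth lower-bounding step with a known Lipschitz constant; the local tuning and local improvement accelerations studied by the authors are not reproduced, and the test function is an assumption.

      import numpy as np

      def piyavskii(f, a, b, L, iters=50):
          # Basic univariate Lipschitz global minimization (Piyavskii-Shubert):
          # build the saw-tooth lower bound F(x) = max_i f(x_i) - L|x - x_i| and
          # repeatedly sample the point where that bound is lowest.
          xs = [a, b]
          fs = [f(a), f(b)]
          for _ in range(iters):
              order = np.argsort(xs)
              x_sorted = np.array(xs)[order]
              f_sorted = np.array(fs)[order]
              best_bound, best_x = np.inf, None
              for xl, xr, fl, fr in zip(x_sorted[:-1], x_sorted[1:], f_sorted[:-1], f_sorted[1:]):
                  # On [xl, xr] the bound attains its minimum at x_star with value 'bound'.
                  x_star = 0.5 * (xl + xr) + (fl - fr) / (2.0 * L)
                  bound = 0.5 * (fl + fr) - 0.5 * L * (xr - xl)
                  if bound < best_bound:
                      best_bound, best_x = bound, x_star
              xs.append(best_x)
              fs.append(f(best_x))
          i = int(np.argmin(fs))
          return xs[i], fs[i]

      # Example: |f'(x)| = |2x + 4 cos(4x)| <= 10 on [-3, 3], so L = 10 is a valid constant.
      print(piyavskii(lambda x: x * x + np.sin(4 * x), -3.0, 3.0, L=10.0))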

  8. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
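    As background for the comparison, the sketch below shows a plain time-continuous Galerkin projection of a linear full-order model onto a POD basis; the discrete-optimal (GNAT-type) machinery analyzed in the report is substantially more involved and is not reproduced. The random stable system and basis size are assumptions for illustration.

      import numpy as np
      from scipy.integrate import solve_ivp

      rng = np.random.default_rng(1)

      # Full-order linear model dx/dt = A x, with a stable random A (illustrative only).
      n = 200
      A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
      x0 = rng.standard_normal(n)

      # Snapshot collection from the full model.
      t_eval = np.linspace(0.0, 5.0, 100)
      full = solve_ivp(lambda t, x: A @ x, (0.0, 5.0), x0, t_eval=t_eval)
      snapshots = full.y                         # shape (n, n_snapshots)

      # POD basis V from the leading left singular vectors of the snapshot matrix.
      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      V = U[:, :10]

      # Galerkin ROM: approximate x ~ V q and project the residual onto V:
      #   dq/dt = V^T A V q,   q(0) = V^T x(0).
      Ar = V.T @ A @ V
      rom = solve_ivp(lambda t, q: Ar @ q, (0.0, 5.0), V.T @ x0, t_eval=t_eval)

      err = np.linalg.norm(V @ rom.y - snapshots) / np.linalg.norm(snapshots)
      print(f"relative ROM error: {err:.2e}")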

  9. Structural damage identification using an enhanced thermal exchange optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Dadras, A.

    2018-03-01

    The recently developed optimization algorithm, the so-called thermal exchange optimization (TEO) algorithm, is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noise and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.

  10. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
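    The sketch below illustrates, outside of pyCycle and OpenMDAO, why analytic derivatives help gradient-based optimization: the same optimizer is run once with an exact gradient and once with finite-difference approximations, and the objective-evaluation counts differ. The Rosenbrock objective is a stand-in, not an engine cycle model.

      import numpy as np
      from scipy.optimize import minimize

      calls = {"f": 0}

      def rosenbrock(x):
          calls["f"] += 1
          return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

      def rosenbrock_grad(x):
          # Analytic gradient: no extra objective evaluations needed.
          return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                           200.0 * (x[1] - x[0] ** 2)])

      x0 = np.array([-1.2, 1.0])

      calls["f"] = 0
      minimize(rosenbrock, x0, jac=rosenbrock_grad, method="BFGS")
      analytic_calls = calls["f"]

      calls["f"] = 0
      minimize(rosenbrock, x0, method="BFGS")    # gradient approximated by finite differences
      fd_calls = calls["f"]

      print(analytic_calls, fd_calls)            # the finite-difference run needs many more evaluations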

  11. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
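    The abstract does not give the formula by which AAPSO derives acceleration coefficients from particle fitness, so the sketch below shows only the generic PSO-over-SVM-hyperparameters pattern it builds on, using synthetic data in place of the YALE, CASIA, and UBIRIS sets; all parameter ranges and PSO constants are assumptions.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=300, n_features=20, random_state=0)

      def fitness(params):
          # Decode log-scaled C and gamma, then score the SVM by cross-validation.
          C, gamma = 10.0 ** params[0], 10.0 ** params[1]
          return -cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

      # Plain PSO over (log10 C, log10 gamma) in [-2, 3] x [-4, 1].
      lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
      n, iters, w, c1, c2 = 12, 20, 0.6, 1.5, 1.5
      x = rng.uniform(lo, hi, (n, 2))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
      g = pbest[np.argmin(pbest_val)].copy()

      for _ in range(iters):
          r1, r2 = rng.random((n, 2)), rng.random((n, 2))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
          x = np.clip(x + v, lo, hi)
          vals = np.array([fitness(p) for p in x])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], vals[improved]
          g = pbest[np.argmin(pbest_val)].copy()

      print("best (log10 C, log10 gamma):", g, "CV accuracy:", -pbest_val.min())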

  12. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
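    As a small source-level illustration of the architecture-independent class described above (analysis of data dependencies rather than of any particular machine), the before/after pair below applies loop-invariant code motion and common-subexpression elimination by hand; Python is used purely for illustration and is not a language the 1972 survey considers.

      import math

      # Before: math.sqrt(k) does not depend on the loop variable, and (a + b) is
      # computed twice per iteration.
      def before(values, k, a, b):
          total = 0.0
          for i in range(len(values)):
              total += values[i] * math.sqrt(k) + (a + b) * i + (a + b)
          return total

      # After: loop-invariant code motion hoists math.sqrt(k) and (a + b) out of the
      # loop, and common-subexpression elimination reuses the hoisted sum. Both
      # functions return identical results; the second simply avoids redundant work,
      # which is what this class of optimizations does automatically on the flow graph.
      def after(values, k, a, b):
          total = 0.0
          root_k = math.sqrt(k)        # hoisted loop invariant
          ab = a + b                   # common subexpression computed once
          for i, v in enumerate(values):
              total += v * root_k + ab * i + ab
          return total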

  13. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models

    PubMed Central

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
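    The abstract does not describe the projection operator itself; one standard choice for keeping candidate mixture designs feasible (components nonnegative and summing to one) is the Euclidean projection onto the probability simplex, sketched below. This is an illustrative assumption and is not claimed to be the ProjPSO operator.

      import numpy as np

      def project_to_simplex(v):
          # Euclidean projection of v onto {x : x >= 0, sum(x) = 1},
          # using the standard sort-based algorithm.
          u = np.sort(v)[::-1]
          css = np.cumsum(u)
          rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
          theta = (1.0 - css[rho]) / (rho + 1.0)
          return np.maximum(v + theta, 0.0)

      # A PSO move can push a mixture design point off the simplex; project it back:
      candidate = np.array([0.7, 0.5, -0.1])
      print(project_to_simplex(candidate), project_to_simplex(candidate).sum())   # [0.6 0.4 0.] 1.0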

  14. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data

    PubMed Central

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-01-01

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthew’s Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308
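    A rough reconstruction of the pipeline described above (class balancing, random-forest-driven recursive feature elimination, then a random-forest classifier) can be put together from scikit-learn and imbalanced-learn, as sketched below; the synthetic data stand in for the Golgi benchmark, and the CSP-based feature extraction step is not reproduced.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import matthews_corrcoef
      from imblearn.over_sampling import SMOTE   # pip install imbalanced-learn

      # Synthetic imbalanced stand-in for the cis-/trans-Golgi feature matrix.
      X, y = make_classification(n_samples=600, n_features=60, n_informative=15,
                                 weights=[0.85, 0.15], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      # 1. Balance the training set with SMOTE.
      X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

      # 2. Recursive feature elimination driven by a random forest (RF-RFE).
      selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
                     n_features_to_select=20, step=5).fit(X_bal, y_bal)

      # 3. Final random-forest classifier on the selected features.
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      clf.fit(selector.transform(X_bal), y_bal)
      pred = clf.predict(selector.transform(X_te))
      print("MCC:", matthews_corrcoef(y_te, pred))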

  15. Pros and cons of characterising an optical translocation setup

    NASA Astrophysics Data System (ADS)

    Maphanga, Charles; Malabi, Rudzani; Ombinda-Lemboumba, Saturnin; Maaza, Malik; Mthunzi-Kufa, Patience

    2017-02-01

    The delivery of genetic material and drugs into mammalian cells using femtosecond (fs) laser pulses is escalating rapidly. This novel light-based technique, achieved through precise focusing of a laser beam on the plasma membrane, is called photoporation. The technique uses ultrashort laser pulses to irradiate the plasma membrane of mammalian cells, resulting in the accumulation of a vast amount of free electrons. These generated electrons react photochemically with the cell membrane, producing sub-microscopic pores on the cell membrane that enable a variety of extracellular media to diffuse into the cell. This study is aimed at critically analysing the "do's and don'ts" of designing, assembling, and characterising an optical translocation setup using a femtosecond Legend titanium-sapphire regenerative amplifier pulsed laser (Gaussian beam, 800 nm, 1 kHz, 113 fs, and an output power of 850 mW). The main objective of our study is to determine optical phototranslocation parameters which are compatible with plasma membrane integrity and cell viability. Such parameters included beam profiling, testing a range of laser fluences suitable for photoporation, assessment of the beam quality, and laser-cell interaction time. In our study, Chinese Hamster Ovary-K1 (CHO-K1) cells were photoporated in the presence of trypan blue to determine optimal parameters for the photoporation experiment. An average power of 4.5 μW, an exposure time of 7 ms, and a laser beam spot of 1.1 μm diameter at the focus worked optimally without any sign of cell stress or cytoplasmic bleeding. Cellular responses post laser treatment were analysed using cell morphology studies.

  16. A Dynamic Optimization Technique for Siting the NASA-Clark Atlanta Urban Rain Gauge Network (NCURN)

    NASA Technical Reports Server (NTRS)

    Shepherd, J. Marshall; Taylor, Layi

    2003-01-01

    NASA satellites and ground instruments have indicated that cities like Atlanta, Georgia may create or alter rainfall. Scientists speculate that the urban heat island caused by man-made surfaces in cities impacts the heat and wind patterns that form clouds and rainfall. However, more conclusive evidence is required to substantiate findings from satellites. NASA, along with scientists at Clark Atlanta University, is implementing a dense, urban rain gauge network in the metropolitan Atlanta area to support a satellite validation program called Studies of PRecipitation Anomalies from Widespread Urban Landuse (SPRAWL). SPRAWL will be conducted during the summer of 2003 to further identify and understand the impact of urban Atlanta on precipitation variability. The paper provides an overview of SPRAWL, which represents one of the more comprehensive efforts in recent years to focus exclusively on urban-impacted rainfall. The paper also introduces a novel technique for deploying rain gauges for SPRAWL. The deployment of the dense Atlanta network is unique because it utilizes Geographic Information Systems (GIS) and Decision Support Systems (DSS) to optimize deployment of the rain gauges. These computer-aided systems consider access to roads, drainage systems, tree cover, and other factors in guiding the deployment of the gauge network. GIS and DSS also provide decision-makers with additional resources and flexibility to make informed decisions while considering numerous factors. Also, the new Atlanta network and SPRAWL provide a unique opportunity to merge the high-resolution, urban rain gauge network with satellite-derived rainfall products to understand how cities are changing rainfall patterns, and possibly climate.

  17. Dynamic Reconstruction and Multivariable Control for Force-Actuated, Thin Facesheet Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Grocott, Simon C. O.; Miller, David W.

    1997-01-01

    The Multiple Mirror Telescope (MMT) under development at the University of Arizona takes a new approach in adaptive optics, placing a large (0.65 m) force-actuated, thin facesheet deformable mirror at the secondary of an astronomical telescope, thus reducing the effects of emissivity which are important in IR astronomy. However, the large size of the mirror and the low-stiffness actuators used drive the natural frequencies of the mirror down into the bandwidth of the atmospheric distortion. Conventional adaptive optics takes a quasi-static approach to controlling the deformable mirror. However, flexibility within the control bandwidth calls for a new approach to adaptive optics. Dynamic influence functions are used to characterize the influence of each actuator on the surface of the deformable mirror. A linearized model of atmospheric distortion is combined with dynamic influence functions to produce a dynamic reconstructor. This dynamic reconstructor is recognized as an optimal control problem. Solving the optimal control problem for a system with hundreds of actuators and sensors is formidable. Exploiting the circularly symmetric geometry of the mirror, and a suitable model of atmospheric distortion, the control problem is divided into a number of smaller decoupled control problems using circulant matrix theory. A hierarchic control scheme which seeks to emulate the quasi-static control approach that is generally used in adaptive optics is compared to the proposed dynamic reconstruction technique. Although dynamic reconstruction requires somewhat more computational power to implement, it achieves better performance with less power usage, and is less sensitive than the hierarchic technique.
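    The decoupling claim rests on a standard property: a circulant matrix is diagonalized by the discrete Fourier transform, so a circulant system splits into independent scalar problems. The sketch below checks this on a small made-up system; it is illustrative only and does not reproduce the MMT control formulation.

      import numpy as np
      from scipy.linalg import circulant

      # A circulant system C x = b decouples under the DFT: the eigenvalues of C are
      # the FFT of its first column, so the solve reduces to independent scalar divisions.
      c = np.array([4.0, 1.0, 0.5, 1.0])          # first column of C (made-up values)
      C = circulant(c)
      b = np.array([1.0, 2.0, 3.0, 4.0])

      eig = np.fft.fft(c)                          # eigenvalues of C
      x_fft = np.real(np.fft.ifft(np.fft.fft(b) / eig))

      print(np.allclose(C @ x_fft, b))             # True: matches a direct solve
      print(np.allclose(x_fft, np.linalg.solve(C, b)))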

  18. Study the impact of rainfall on the United Arab Emirates dams using remote sensing and image processing techniques

    NASA Astrophysics Data System (ADS)

    Al Marzouqi, Fatima A.; Al Besher, Shaikha A.; Al Mansoori, Saeed H.

    2017-10-01

    The United Arab Emirates (UAE) has given great attention to the environment and sustainable development through applications of best practices of global standards that ensure optimal investment in natural resources. Since the UAE is located in an arid region, known for being dry and sandy and receiving only a small amount of rainfall, water resources are limited; accordingly, the government has initiated an integrated water resources management (IWRM) strategy to meet the increasing demand for water. Dams are considered one of the important strategies suitable for this arid region. A rainfall event ranging from heavy to severe over a short duration could cause flash floods and damage to population centers and nearby agricultural areas. To prevent that from happening, several dams and barriers were built to protect human life and infrastructure, and to enhance water resources and use them optimally to irrigate the growing agricultural areas across the country. Geographically, most of the dams are located in the northern and eastern parts of the UAE, around mountainous areas. This study aims to monitor the changes that occurred to five dams in the north-eastern region of the UAE during 2015 and 2016 through the use of remote sensing technology and optical images captured by "DubaiSat-2". The segmentation approach utilized in this study is based on a band ratio technique called the Normalized Difference Water Index (NDWI). The experimental results revealed that the proposed approach is efficient in detecting dams from multispectral satellite images.
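    The NDWI band ratio mentioned above is (Green - NIR) / (Green + NIR), with water pixels pushed toward positive values; the sketch below applies it to a tiny synthetic array. The reflectance values and the 0.3 threshold are assumptions, and the correct band indices would have to be taken from the DubaiSat-2 product metadata.

      import numpy as np

      def ndwi(green, nir):
          # Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR).
          # Water surfaces give strongly positive values; a scene-dependent threshold
          # then segments the reservoir behind a dam.
          green = green.astype("float64")
          nir = nir.astype("float64")
          return (green - nir) / np.maximum(green + nir, 1e-12)

      # Illustrative 2x3 "image": reflectance-like values, not real DubaiSat-2 data.
      green_band = np.array([[0.30, 0.28, 0.05], [0.31, 0.06, 0.04]])
      nir_band   = np.array([[0.05, 0.06, 0.40], [0.04, 0.35, 0.38]])

      index = ndwi(green_band, nir_band)
      water_mask = index > 0.3
      print(index.round(2))
      print("water pixels:", int(water_mask.sum()))   # comparing mask areas between the
                                                      # 2015 and 2016 scenes quantifies change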

  19. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS, and DDS-AU. These parallel algorithms differ from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early (for example, prior to completely simulating the model calibration period) when intermediate results indicate that the candidate solution is so poor that it will have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
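    For context, the serial single-objective DDS core (following the published description of the algorithm) is sketched below: at each evaluation a shrinking random subset of decision variables is perturbed around the current best solution. The asynchronous parallelism, multi-objective PA-DDS logic, and model pre-emption described above are not reproduced, and the bounds and test function are assumptions.

      import numpy as np

      def dds(f, lo, hi, max_evals=500, r=0.2, seed=0):
          # Serial Dynamically Dimensioned Search (minimization). At evaluation i,
          # each decision variable is perturbed with probability 1 - ln(i)/ln(max_evals),
          # so the search narrows from global to increasingly local.
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x_best = lo + rng.random(len(lo)) * (hi - lo)
          f_best = f(x_best)
          for i in range(1, max_evals):
              p = 1.0 - np.log(i) / np.log(max_evals)
              perturb = rng.random(len(lo)) < p
              if not perturb.any():                       # always perturb at least one variable
                  perturb[rng.integers(len(lo))] = True
              x_new = x_best.copy()
              step = r * (hi - lo) * rng.standard_normal(len(lo))
              x_new[perturb] += step[perturb]
              # Reflect at the bounds, then clip as a safeguard.
              x_new = np.where(x_new < lo, 2 * lo - x_new, x_new)
              x_new = np.where(x_new > hi, 2 * hi - x_new, x_new)
              x_new = np.clip(x_new, lo, hi)
              f_new = f(x_new)                            # a pre-emptable model run would go here
              if f_new <= f_best:                         # greedy acceptance
                  x_best, f_best = x_new, f_new
          return x_best, f_best

      print(dds(lambda v: float(np.sum(v ** 2)), lo=[-5] * 8, hi=[5] * 8))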

  20. Shape and Reinforcement Optimization of Underground Tunnels

    NASA Astrophysics Data System (ADS)

    Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang

    Design of support system and selecting an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently selecting the shape and support design are mainly based on designer's judgment and experience. Both of these problems can be viewed as material distribution problems where one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved to be useful in solving these kinds of problems in structural design. Recently the application of topology optimization techniques in reinforcement design around underground excavations has been studied by some researchers. In this paper a three-phase material model will be introduced changing between normal rock, reinforced rock, and void. Using such a material model both problems of shape and reinforcement design can be solved together. A well-known topology optimization technique used in structural design is bi-directional evolutionary structural optimization (BESO). In this paper the BESO technique has been extended to simultaneously optimize the shape of the opening and the distribution of reinforcements. Validity and capability of the proposed approach have been investigated through some examples.
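    As a minimal, finite-element-free sketch of the BESO update the paper extends, the function below evolves the solid volume fraction toward a target while keeping the elements with the highest sensitivity numbers; the synthetic sensitivity field, evolution rate, and one-dimensional layout are assumptions, and the three-phase rock/reinforced-rock/void extension is not reproduced.

      import numpy as np

      def beso_step(design, sensitivity, target_frac, evo_rate=0.02):
          # One BESO update: move the solid fraction toward target_frac and keep
          # the elements with the highest sensitivity numbers.
          #   design      : 0/1 array (1 = solid element)
          #   sensitivity : per-element sensitivity numbers from the current analysis
          n = design.size
          current_frac = design.mean()
          if current_frac > target_frac:
              new_frac = max(target_frac, current_frac * (1.0 - evo_rate))
          else:
              new_frac = min(target_frac, current_frac * (1.0 + evo_rate))
          n_keep = int(round(new_frac * n))
          threshold = np.sort(sensitivity)[::-1][n_keep - 1]
          return (sensitivity >= threshold).astype(int)

      # Toy run on a 1-D strip of 1000 elements: a synthetic sensitivity field peaking
      # near the middle stands in for values a finite element analysis would supply.
      x = np.linspace(0.0, 1.0, 1000)
      design = np.ones(1000, dtype=int)
      for _ in range(60):
          sens = np.exp(-20.0 * (x - 0.5) ** 2)       # placeholder sensitivity field
          design = beso_step(design, sens, target_frac=0.4)
      print("final solid fraction:", design.mean(), "- material kept around x = 0.5")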

  1. Development of an autonomous treatment planning strategy for radiation therapy with effective use of population-based prior data.

    PubMed

    Wang, Huan; Dong, Peng; Liu, Hongcheng; Xing, Lei

    2017-02-01

    Current treatment planning remains a costly and labor-intensive procedure and requires multiple trial-and-error adjustments of system parameters such as the weighting factors and prescriptions. The purpose of this work is to develop an autonomous treatment planning strategy that makes effective use of prior knowledge within a clinically realistic treatment planning platform, to facilitate the radiation therapy workflow. Our technique consists of three major components: (i) a clinical treatment planning system (TPS); (ii) a decision function constructed using an ensemble of prior treatment plans; and (iii) an outer-loop optimization, independent of the clinical TPS, that uses the decision function to assess the TPS-generated plan and to drive the search toward a solution optimizing the decision function. Microsoft (MS) Visual Studio Coded UI is applied to record common planner-TPS interactions as subroutines for querying and interacting with the TPS. These subroutines are called back in the outer-loop optimization program to navigate the plan selection process through the solution space iteratively. The utility of the approach is demonstrated using clinical prostate and head-and-neck cases. An autonomous treatment planning technique with effective use of an ensemble of prior treatment plans is developed to automatically maneuver the clinical treatment planning process in the platform of a commercial TPS. The process mimics the decision-making process of a human planner and provides a clinically sensible treatment plan automatically, thus reducing or eliminating the tedious manual trial and error of treatment planning. The prostate and head-and-neck treatment plans generated using the approach compare favorably with those used for the patients' actual treatments. The clinical inverse treatment planning process can be automated effectively with the guidance of an ensemble of prior treatment plans. The approach has the potential to significantly improve the radiation therapy workflow. © 2016 American Association of Physicists in Medicine.
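    The outer loop described above can be pictured as a simple search over the TPS weighting factors, scored by the prior-knowledge decision function. The sketch below is schematic: run_tps and decision_function are hypothetical placeholders for the recorded Coded UI subroutines and the ensemble-based plan evaluator, and the perturbation scheme is illustrative rather than the search actually used in the paper.

    ```python
    import random

    # Schematic outer-loop search wrapped around a treatment planning system (TPS).
    # run_tps(weights) and decision_function(plan) are hypothetical placeholders for
    # the recorded Coded UI subroutines and the ensemble-based plan evaluator.

    def outer_loop(run_tps, decision_function, init_weights, iters=50, step=0.1):
        weights = dict(init_weights)
        best_plan = run_tps(weights)                       # let the TPS optimize once
        best_score = decision_function(best_plan)          # lower = closer to prior-plan ideal
        for _ in range(iters):
            trial = {name: max(0.0, w * (1.0 + random.uniform(-step, step)))
                     for name, w in weights.items()}       # perturb the weighting factors
            plan = run_tps(trial)                          # TPS re-optimizes with new weights
            score = decision_function(plan)
            if score < best_score:                         # keep only improving trials
                weights, best_plan, best_score = trial, plan, score
        return weights, best_plan
    ```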

  2. Introducing TreeCollapse: a novel greedy algorithm to solve the cophylogeny reconstruction problem.

    PubMed

    Drinkwater, Benjamin; Charleston, Michael A

    2014-01-01

    Cophylogeny mapping is used to uncover deep coevolutionary associations between two or more phylogenetic histories at a macro-coevolutionary scale. As cophylogeny mapping is NP-hard, this technique relies heavily on heuristics to solve all but the most trivial cases. One notable approach utilises a metaheuristic to search only a subset of the exponential number of fixed node orderings possible for the phylogenetic histories in question. This is of particular interest as it is the only known heuristic that guarantees biologically feasible solutions. This has enabled research to focus on larger coevolutionary systems, such as coevolutionary associations between figs and their pollinator wasps, including over 200 taxa. Although able to converge on solutions for problem instances of this size, a reduction from the current cubic running time is required to handle larger systems, such as Wolbachia and their insect hosts. Rather than solving this underlying problem optimally, this work presents a greedy algorithm called TreeCollapse, which uses common topological patterns to recover an approximation of the coevolutionary history where the internal node ordering is fixed. This approach offers a significant speed-up compared to previous methods, running in linear time. The algorithm has been applied to over 100 well-known coevolutionary systems, converging on Pareto optimal solutions in over 68% of test cases, including some cases in which the Pareto optimal solution had not previously been recoverable. Further, while TreeCollapse applies a local search technique, it can guarantee that solutions are biologically feasible, making it the fastest method that can provide such a guarantee. As a result, we argue that the newly proposed algorithm is a valuable addition to the field of coevolutionary research. Not only does it offer a significantly faster method to estimate the cost of cophylogeny mappings, but, used in conjunction with existing heuristics, it can assist in recovering a larger subset of the Pareto front than has previously been possible.

  3. An optimized posterior axillary boost technique in radiation therapy to supraclavicular and axillary lymph nodes: A comparative study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, Victor, E-mail: vhernandezmasgrau@gmail.com; Arenas, Meritxell; Müller, Katrin

    2013-01-01

    To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated, and a statistical comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques produced excessive hot spots in most of the cases. The OptPAB technique produced moderate hot spots while maintaining similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes.

  4. Ant Colony Optimization Algorithm for Centralized Dynamic Channel Allocation in Multi-Cell OFDMA Systems

    NASA Astrophysics Data System (ADS)

    Kim, Hyo-Su; Kim, Dong-Hoi

    The dynamic channel allocation (DCA) scheme in multi-cell systems causes a serious inter-cell interference (ICI) problem for some existing calls when channels for new calls are allocated. Such a problem can be addressed by an advanced centralized DCA design that is able to minimize ICI. Thus, in this paper, a centralized DCA is developed for the downlink of multi-cell orthogonal frequency division multiple access (OFDMA) systems with full spectral reuse. In practice, however, the search space of channel assignments for a centralized DCA scheme in multi-cell systems grows exponentially with the number of required calls, channels, and cells; the problem is NP-hard, and finding an optimum channel allocation is currently intractable. In this paper, we therefore propose an ant colony optimization (ACO) based DCA scheme that uses a low-complexity ACO algorithm, a heuristic method, to solve the aforementioned problem. Simulation results demonstrate significant performance improvements compared to the existing schemes in terms of the grade of service (GoS) and the forced termination probability of existing calls, without degrading the average throughput of the system.
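    The sketch below shows the skeleton of an ACO search for this kind of assignment problem: ants build call-to-channel assignments guided by pheromone trails, assignments are scored by an interference cost, and pheromone is evaporated and reinforced in proportion to solution quality. The interference model, the parameters, and the omission of a heuristic desirability term are simplifications, not the scheme proposed in the paper.

    ```python
    import random

    # Toy ACO sketch for assigning channels to new calls.  The interference()
    # cost function, all parameters, and the omission of a heuristic
    # desirability term are simplifications, not the paper's OFDMA scheme.

    def aco_channel_allocation(n_calls, n_channels, interference,
                               n_ants=20, n_iters=100, rho=0.1, q=1.0):
        tau = [[1.0] * n_channels for _ in range(n_calls)]   # pheromone trails
        best, best_cost = None, float("inf")
        for _ in range(n_iters):
            solutions = []
            for _ in range(n_ants):
                assignment = []
                for call in range(n_calls):
                    weights = tau[call]
                    r, acc, chosen = random.uniform(0, sum(weights)), 0.0, 0
                    for ch, w in enumerate(weights):          # roulette-wheel selection
                        acc += w
                        if r <= acc:
                            chosen = ch
                            break
                    assignment.append(chosen)
                cost = interference(assignment)               # total ICI of this assignment
                solutions.append((cost, assignment))
                if cost < best_cost:
                    best, best_cost = assignment, cost
            for call in range(n_calls):                       # pheromone evaporation
                tau[call] = [(1.0 - rho) * t for t in tau[call]]
            for cost, assignment in solutions:                # reinforcement
                for call, ch in enumerate(assignment):
                    tau[call][ch] += q / (1.0 + cost)
        return best, best_cost
    ```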

  5. A comparison in Colorado of three methods to monitor breeding amphibians

    USGS Publications Warehouse

    Corn, P.S.; Muths, E.; Iko, W.M.

    2000-01-01

    We surveyed amphibians at 4 montane and 2 plains lentic sites in northern Colorado using 3 techniques: standardized call surveys, automated recording devices (frog-loggers), and intensive surveys including capture-recapture techniques. Amphibians were observed at 5 sites. Species richness varied from 0 to 4 species at each site. Richness scores, the sums of species richness among sites, were similar among methods: 8 for call surveys, 10 for frog-loggers, and 11 for intensive surveys (9 if the non-vocal salamander Ambystoma tigrinum is excluded). The frog-logger at 1 site recorded Spea bombifrons which was not active during the times when call and intensive surveys were conducted. Relative abundance scores from call surveys failed to reflect a relatively large population of Bufo woodhousii at 1 site and only weakly differentiated among different-sized populations of Pseudacris maculata at 3 other sites. For extensive applications, call surveys have the lowest costs and fewest requirements for highly trained personnel. However, for a variety of reasons, call surveys cannot be used with equal effectiveness in all parts of North America.

  6. System OptimizatIon of the Glow Discharge Optical Spectroscopy Technique Used for Impurity Profiling of ION Implanted Gallium Arsenide.

    DTIC Science & Technology

    1980-12-01

    AFIT/GEO/EE/80D-1. System optimization of the glow discharge optical spectroscopy (GDOS) technique used for impurity profiling of ion-implanted gallium arsenide: the work concerns semiconductors, specifically annealed and unannealed ion-implanted gallium arsenide (GaAs). Methods to improve the sensitivity of the GDOS system have ...

  7. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial

    PubMed Central

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-01-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to mathematical optimization tools and wish to apply them to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful for design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and the experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes. PMID:28763039

  8. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    PubMed

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to mathematical optimization tools and wish to apply them to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful for design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and the experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.

  9. Towards robust optimal design of storm water systems

    NASA Astrophysics Data System (ADS)

    Marquez Calvo, Oscar; Solomatine, Dimitri

    2015-04-01

    In this study the focus is on the design of a storm water or combined sewer system. Such a system should be capable of handling most storms properly, to minimize the damage caused by flooding when the system lacks the capacity to cope with rain water at peak times. This is a multi-objective optimization problem: we have to take into account the minimization of construction costs, the minimization of damage costs due to flooding, and possibly other criteria. One of the most important factors influencing the design of storm water systems is the expected amount of water to deal with. It is common for this infrastructure to be developed with the capacity to cope with events that occur once in, say, 10 or 20 years - so-called design rainfall events. However, rainfall is a random variable, and such uncertainty typically is not taken explicitly into account in optimization. Design rainfall data are based on historical rainfall records, but these records are often unreliable or too short, and rainfall patterns are changing regardless of the historical record. There are also other sources of uncertainty influencing design, for example, leakages in the pipes and accumulation of sediments in pipes. In the context of storm water or combined sewer system design or rehabilitation, a robust optimization technique should be able to find the best design (or rehabilitation plan) within the available budget while taking into account uncertainty in the variables that were used to design the system. In this work we consider various approaches to robust optimization proposed by various authors (Beyer and Sendhoff 2007; Gabrel, Murat, and Thiele 2014) and test a novel method, ROPAR (Solomatine 2012), to analyze robustness. References: Beyer, H.G., & Sendhoff, B. (2007). Robust optimization - A comprehensive survey. Comput. Methods Appl. Mech. Engrg., 3190-3218. Gabrel, V., Murat, C., & Thiele, A. (2014). Recent advances in robust optimization: An overview. European Journal of Operational Research, 471-483. Solomatine, D.P. (2012). Robust Optimization and Probabilistic Analysis of Robustness (ROPAR). http://www.unesco-ihe.org/hi/sol/papers/ROPAR.pdf.
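    One common way to fold rainfall uncertainty into the design objective (a generic scenario-based formulation, not necessarily the ROPAR procedure) is to evaluate each candidate design over a sample of rainfall scenarios and penalize both the average flood damage and its spread, as in the sketch below; the model calls and the weighting are hypothetical.

    ```python
    import statistics

    # Scenario-based robust objective sketch for a drainage design candidate.
    # simulate_flood_damage(design, rainfall) and construction_cost(design) are
    # hypothetical model calls; k weights the spread of damages across scenarios.

    def robust_cost(design, rainfall_scenarios, simulate_flood_damage,
                    construction_cost, k=1.0):
        damages = [simulate_flood_damage(design, rain) for rain in rainfall_scenarios]
        return (construction_cost(design)
                + statistics.mean(damages)          # expected flood damage
                + k * statistics.pstdev(damages))   # penalty for sensitivity to rainfall
    ```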

  10. Modified harmony search

    NASA Astrophysics Data System (ADS)

    Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt

    2017-09-01

    A metaheuristic algorithm called Harmony Search (HS) is widely applied to parameter optimization in many areas. HS is a derivative-free, real-parameter optimization algorithm that draws its inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper proposes a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm and particle swarm optimization to generate new solution vectors, enhancing the performance of the HS algorithm. The performance of MHS and HS is investigated on ten benchmark optimization problems in order to compare the efficiency of MHS in terms of final accuracy, convergence speed, and robustness.
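    For reference, the core HS improvisation step that MHS builds on looks roughly as follows; HMCR, PAR, and the bandwidth bw are the standard HS control parameters, and the GA/PSO-inspired modifications of MHS are not reproduced here.

    ```python
    import random

    # Standard Harmony Search improvisation step (not the paper's MHS variant).
    # hm is the harmony memory: a list of solution vectors of equal length;
    # lower/upper are per-variable bounds.

    def improvise(hm, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
        new = []
        for j in range(len(hm[0])):
            if random.random() < hmcr:                      # harmony memory consideration
                value = random.choice(hm)[j]
                if random.random() < par:                   # pitch adjustment
                    value += random.uniform(-bw, bw)
            else:                                           # random selection
                value = random.uniform(lower[j], upper[j])
            new.append(min(max(value, lower[j]), upper[j])) # keep within bounds
        return new
    ```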

  11. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the meta-optimization, so the approach proposed in this work can be called a "meta-metaheuristic". A computational experiment demonstrating the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
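    Structurally, the meta-metaheuristic reduces to a nested loop: an outer search proposes control-parameter settings and scores each by the solution quality the inner metaheuristic achieves with them. The sketch below uses a plain random outer search as a stand-in for the evolutionary meta-level of the paper, and run_metaheuristic is a hypothetical callable.

    ```python
    import random

    # Nested meta-optimization sketch: an outer search tunes the control parameters
    # of an inner metaheuristic.  run_metaheuristic(params) is a hypothetical callable
    # returning the best objective value the inner algorithm reaches with those settings.

    def meta_optimize(run_metaheuristic, param_ranges, outer_budget=30):
        best_params, best_value = None, float("inf")
        for _ in range(outer_budget):
            params = {name: random.uniform(lo, hi)          # propose a parameter setting
                      for name, (lo, hi) in param_ranges.items()}
            value = run_metaheuristic(params)               # full inner metaheuristic run
            if value < best_value:
                best_params, best_value = params, value
        return best_params, best_value
    ```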

  12. Optimization and analysis of large chemical kinetic mechanisms using the solution mapping method - Combustion of methane

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.

    1992-01-01

    A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
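    The essence of solution mapping, as described above, is to replace each expensive model response with a simple algebraic (for instance quadratic) function of the active parameters, fitted to runs arranged in a factorial design, and then to optimize the cheap surrogates jointly. A minimal sketch of the surrogate-fitting step, with illustrative quadratic terms, is shown below.

    ```python
    import numpy as np

    # Solution-mapping sketch: fit a quadratic response surface to model responses
    # computed at factorial design points, so the cheap surface can replace the
    # expensive kinetic model during joint optimization.  The design matrix and
    # responses here are whatever the factorial computer experiments produce.

    def fit_quadratic_surface(X, y):
        """X: (n_runs, n_params) parameter settings; y: corresponding model responses."""
        n, p = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(p)]                                   # linear terms
        cols += [X[:, i] * X[:, j] for i in range(p) for j in range(i, p)]    # quadratic terms
        coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        return coeffs

    def eval_surface(coeffs, x):
        p = len(x)
        terms = [1.0] + list(x) + [x[i] * x[j] for i in range(p) for j in range(i, p)]
        return float(np.dot(coeffs, terms))
    ```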

  13. Using stand-level optimization to reduce crown fire hazard

    Treesearch

    David H. Graetz; John Sessions; Steven L. Garman

    2007-01-01

    This study evaluated the ability to generate prescriptions for a wide variety of stands when the goal is to reduce crown fire potential. Forest managers charged with reducing crown fire potential while providing for commodity and ecological production have been hampered by the complexity of possible management options. A program called Stand-Level Optimization with...

  14. Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization

    ERIC Educational Resources Information Center

    Gelman, Andrew; Lee, Daniel; Guo, Jiqiang

    2015-01-01

    Stan is a free and open-source C++ program that performs Bayesian inference or optimization for arbitrary user-specified models. It can be called from the command line, R, Python, Matlab, or Julia, and has great promise for fitting large and complex statistical models in many areas of application. We discuss Stan from users' and developers'…

  15. REopt Lite Web Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NREL developed a free, publicly available web version of the REopt (TM) renewable energy integration and optimization platform called REopt Lite. REopt Lite recommends the optimal size and dispatch strategy for grid-connected photovoltaics (PV) and battery storage at a site. It also allows users to explore how PV and storage can increase a site's resiliency during a grid outage.

  16. [Body, rights and comprehensive health: Analysis of the parliamentary debates on the Gender Identity and Assisted Fertilization Laws (Argentina, 2011-2013)].

    PubMed

    Farji Neer, Anahí

    2015-09-01

    In this paper we present an analysis of the parliamentary debates of the Gender Identity Law (No. 26743) and the Assisted Fertilization Law (No. 26862) carried out in the Argentine National Congress between 2011 and 2013. Using a qualitative content analysis technique, the stenographic records of the debates were analyzed to explore the following questions: How was the public problem to which each law responds characterized? How was the mission of each law conceptualized? To what extent did those definitions call into question ideas of health and illness, in including in the public health system coverage for certain medical treatments of body optimization or modification? In the process of sanctioning both laws, the concepts of health and disease were put into dispute as moral categories. In this context, an expanded concept of comprehensive health arose, in which desires regarding reproduction and the body were included.

  17. Micromachined fragment capturer for biomedical applications.

    PubMed

    Choi, Young-Soo; Lee, Dong-Weon

    2011-11-01

    Due to changes in the modern diet, a form of heart disease called chronic total occlusion has become a serious condition requiring emergency treatment. In this study, we propose a micromachined capturer that is designed and fabricated to collect plaque fragments generated during surgery to remove the thrombus. The fragment capturer consists of a plastic body made by rapid prototyping, SU-8 mesh structures fabricated using MEMS techniques, and ionic polymer metal composite (IPMC) actuators. An array of IPMC actuators combined with the SU-8 net structure was optimized to effectively collect plaque fragments. The evaporation of solvent through the actuator's surface was prevented using a coating of SU-8 and polydimethylsiloxane thin film on the actuator. This approach improved the available operating time of the IPMC, which primarily depends on solvent loss. Our preliminary results demonstrate the possibility of using the capturer for biomedical applications. © 2011 American Institute of Physics.

  18. eGSM: An Extended Sky Model of Diffuse Radio Emission

    NASA Astrophysics Data System (ADS)

    Kim, Doyeon; Liu, Adrian; Switzer, Eric

    2018-01-01

    Both cosmic microwave background and 21cm cosmology observations must contend with astrophysical foreground contaminants in the form of diffuse radio emission. For precise cosmological measurements, these foregrounds must be accurately modeled over the entire sky. Ideally, such full-sky models ought to be primarily motivated by observations. Yet in practice, these observations are limited, with data sets that are observed not only in a heterogeneous fashion, but also over limited frequency ranges. Previously, the Global Sky Model (GSM) took some steps towards solving the problem of incomplete observational data by interpolating over multi-frequency maps using principal component analysis (PCA). In this poster, we present an extended version of the GSM (called eGSM) that includes the following improvements: 1) better zero-level calibration; 2) incorporation of non-uniform survey resolutions and sky coverage; 3) the ability to quantify uncertainties in sky models; and 4) the ability to optimally select spectral models using Bayesian Evidence techniques.

  19. A Review of the Cell to Graphene-Based Nanomaterial Interface

    NASA Astrophysics Data System (ADS)

    Darbandi, Arash; Gottardo, Erik; Huff, Joshua; Stroscio, Michael; Shokuhfar, Tolou

    2018-04-01

    The area of cellular interactions of nanomaterials is an important research interest. The sensitivity of cells toward their extracellular matrix allows researchers to create microenvironments for guided stem cell differentiation. Among nanomaterials, graphene, often called the "wonder material," and its derivatives are at the forefront of such endeavors. Graphene's carbon backbone, paired with its biocompatibility and ease of functionalization, has been used as an enhanced method of controlled cell proliferation. Graphene's honeycomb nature allows for compatibility with polymers and biological material for the creation of nanocomposite scaffolds that help differentiation into cell types that have otherwise proven difficult to obtain. Such materials and their role in guiding cell growth can aid the construction of tissue grafts, where shortages and patient compatibility create a low success rate. This review brings together novel studies and techniques used to understand and optimize graphene's role in cell growth mechanisms.

  20. Alkaline Comet Assay for Assessing DNA Damage in Individual Cells.

    PubMed

    Pu, Xinzhu; Wang, Zemin; Klaunig, James E

    2015-08-06

    Single-cell gel electrophoresis, commonly called the comet assay, is a simple and sensitive method for assessing DNA damage at the single-cell level. It is an important technique in genetic toxicological studies. The comet assay performed under alkaline conditions (pH >13) is considered the optimal version for identifying agents with genotoxic activity. The alkaline comet assay is capable of detecting DNA double-strand breaks, single-strand breaks, alkali-labile sites, DNA-DNA/DNA-protein cross-linking, and incomplete excision repair sites. The inclusion of digestion with lesion-specific DNA repair enzymes in the procedure allows the detection of various DNA base alterations, such as oxidative base damage. This unit describes alkaline comet assay procedures for assessing DNA strand breaks and oxidative base alterations. These methods can be applied to a variety of cells from in vitro and in vivo experiments, as well as human studies. Copyright © 2015 John Wiley & Sons, Inc.

  1. Photothermal tomography for the functional and structural evaluation, and early mineral loss monitoring in bones.

    PubMed

    Kaiplavil, Sreekumar; Mandelis, Andreas; Wang, Xueding; Feng, Ting

    2014-08-01

    Salient features of a new non-ionizing bone diagnostics technique, truncated-correlation photothermal coherence tomography (TC-PCT), exhibiting optical-grade contrast and capable of resolving the trabecular network in three dimensions through the cortical region with and without a soft-tissue overlayer are presented. The absolute nature and early demineralization-detection capability of a marker called thermal wave occupation index, estimated using the proposed modality, have been established. Selective imaging of regions of a specific mineral density range has been demonstrated in a mouse femur. The method is maximum-permissible-exposure compatible. In a matrix of bone and soft-tissue a depth range of ~3.8 mm has been achieved, which can be increased through instrumental and modulation waveform optimization. Furthermore, photoacoustic microscopy, a comparable modality with TC-PCT, has been used to resolve the trabecular structure and for comparison with the photothermal tomography.

  2. Photothermal tomography for the functional and structural evaluation, and early mineral loss monitoring in bones

    PubMed Central

    Kaiplavil, Sreekumar; Mandelis, Andreas; Wang, Xueding; Feng, Ting

    2014-01-01

    Salient features of a new non-ionizing bone diagnostics technique, truncated-correlation photothermal coherence tomography (TC-PCT), exhibiting optical-grade contrast and capable of resolving the trabecular network in three dimensions through the cortical region with and without a soft-tissue overlayer are presented. The absolute nature and early demineralization-detection capability of a marker called thermal wave occupation index, estimated using the proposed modality, have been established. Selective imaging of regions of a specific mineral density range has been demonstrated in a mouse femur. The method is maximum-permissible-exposure compatible. In a matrix of bone and soft-tissue a depth range of ~3.8 mm has been achieved, which can be increased through instrumental and modulation waveform optimization. Furthermore, photoacoustic microscopy, a comparable modality with TC-PCT, has been used to resolve the trabecular structure and for comparison with the photothermal tomography. PMID:25136480

  3. Location and Size Planning of Distributed Photovoltaic Generation in Distribution network System Based on K-means Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Lu, Siqi; Wang, Xiaorong; Wu, Junyong

    2018-01-01

    The paper presents a method, based on a data-driven K-means clustering analysis algorithm, to generate planning scenarios for the location and size planning of distributed photovoltaic (PV) units in the network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV, and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and the solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected, according to different planning emphases, after a detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
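    The TOPSIS ranking step mentioned above is standard and can be sketched as follows: each criterion column is normalized and weighted, distances to the ideal and anti-ideal solutions are computed, and schemes are ordered by relative closeness. The weights and benefit/cost flags are planner-supplied inputs; the exact criteria of the paper are not reproduced.

    ```python
    import numpy as np

    # Standard TOPSIS ranking of candidate planning schemes.
    # matrix rows = schemes, columns = criteria; benefit[j] is True for criteria
    # to maximize and False for criteria to minimize; weights sum to 1 (by convention).

    def topsis(matrix, weights, benefit):
        X = np.asarray(matrix, dtype=float)
        benefit = np.asarray(benefit, dtype=bool)
        V = X / np.linalg.norm(X, axis=0) * np.asarray(weights, dtype=float)
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
        d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal solution
        closeness = d_neg / (d_pos + d_neg)         # higher closeness = better rank
        return np.argsort(-closeness), closeness
    ```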

  4. Pse-Analysis: a python package for DNA/RNA and protein/ peptide sequence analysis based on pseudo components and kernel methods.

    PubMed

    Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen

    2017-02-21

    To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor and then yield the predicted results for the submitted query samples. All the aforementioned tedious jobs can be done automatically by the computer. Moreover, a multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix.

  5. [Topographic anatomy of the hook region and its significance for the choice of the surgical technique for the cochlear implantation].

    PubMed

    Yanov, Yu K; Kuzovkov, V E; Lilenko, A S; Kostevich, I V; Sugarova, S B; Amonov, A Sh

    The mode of introducing the active electrode of a cochlear implant into the cochlea remains a key issue in cochlear implantation. Much attention has recently been given to the relationship between the anatomical features of the basal region of the cochlea (the so-called 'fish hook') and the possibility of approaching it. We have undertaken an attempt to optimize the approach to the tympanic canal (scala tympani) of the cochlea with a view to reducing to a minimum the risk of injury to the cochlear structures in the course of cochlear implantation. A total of 35 cadaveric temporal bones were examined to measure the fine structures of the hook region and to evaluate the risk of damage associated with various approaches to the tympanic canal.

  6. Efficient Fluid Dynamic Design Optimization Using Cartesian Grids

    NASA Technical Reports Server (NTRS)

    Dadone, A.; Grossman, B.; Sellers, Bill (Technical Monitor)

    2004-01-01

    This report is subdivided in three parts. The first one reviews a new approach to the computation of inviscid flows using Cartesian grid methods. The crux of the method is the curvature-corrected symmetry technique (CCST) developed by the present authors for body-fitted grids. The method introduces ghost cells near the boundaries whose values are developed from an assumed flow-field model in vicinity of the wall consisting of a vortex flow, which satisfies the normal momentum equation and the non-penetration condition. The CCST boundary condition was shown to be substantially more accurate than traditional boundary condition approaches. This improved boundary condition is adapted to a Cartesian mesh formulation, which we call the Ghost Body-Cell Method (GBCM). In this approach, all cell centers exterior to the body are computed with fluxes at the four surrounding cell edges. There is no need for special treatment corresponding to cut cells which complicate other Cartesian mesh methods.

  7. Service Modeling for Service Engineering

    NASA Astrophysics Data System (ADS)

    Shimomura, Yoshiki; Tomiyama, Tetsuo

    Intensification of service and knowledge contents within product life cycles is considered crucial for dematerialization, in particular for designing optimal product-service systems from the viewpoint of environmentally conscious design and manufacturing in advanced post-industrial societies. In addition to environmental limitations, we are facing social limitations, which include the limited capacity of markets to accept increasing numbers of mass-produced artifacts, and such environmental and social limitations are restraining economic growth. To address these problems, we need to reconsider the current mass production paradigm and to give products more added value, largely from knowledge and service contents, to compensate for volume reduction under the concept of dematerialization. Namely, dematerialization of products requires enriched service contents. However, service has mainly been discussed within marketing and has been mostly neglected within traditional engineering. Therefore, we need new engineering methods to look at services, rather than just functions, called "Service Engineering." To establish service engineering, this paper proposes a modeling technique for services.

  8. A fast time-difference inverse solver for 3D EIT with application to lung imaging.

    PubMed

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2016-08-01

    A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.

  9. Cognitive Abilities Explain Wording Effects in the Rosenberg Self-Esteem Scale.

    PubMed

    Gnambs, Timo; Schroeders, Ulrich

    2017-12-01

    There is consensus that the 10 items of the Rosenberg Self-Esteem Scale (RSES) reflect wording effects resulting from positively and negatively keyed items. The present study examined the effects of cognitive abilities on the factor structure of the RSES with a novel, nonparametric latent variable technique called local structural equation models. In a nationally representative German large-scale assessment including 12,437 students, competing measurement models for the RSES were compared: a bifactor model with a common factor and a specific factor for all negatively worded items had the optimal fit. Local structural equation models showed that the unidimensionality of the scale increased with higher levels of reading competence and reasoning, while the proportion of variance attributed to the negatively keyed items declined. Wording effects on the factor structure of the RSES seem to represent a response-style artifact associated with cognitive abilities.

  10. Evaluation of Method-Specific Extraction Variability for the Measurement of Fatty Acids in a Candidate Infant/Adult Nutritional Formula Reference Material.

    PubMed

    Place, Benjamin J

    2017-05-01

    To address community needs, the National Institute of Standards and Technology has developed a candidate Standard Reference Material (SRM) for infant/adult nutritional formula, based on milk and whey protein concentrates with isolated soy protein, called SRM 1869 Infant/Adult Nutritional Formula. One major component of this candidate SRM is the fatty acid content. In this study, multiple extraction techniques were evaluated to quantify the fatty acids in this new material. Extraction methods based on lipid extraction followed by transesterification resulted in lower mass fraction values for all fatty acids than the values measured by methods utilizing in situ transesterification followed by fatty acid methyl ester extraction (ISTE). An ISTE method, based on the identified optimal parameters, was used to determine the fatty acid content of the new infant/adult nutritional formula reference material.

  11. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.

  12. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

  13. Optimization of orbital assignment and specification of service areas in satellite communications

    NASA Technical Reports Server (NTRS)

    Wang, Cou-Way; Levis, Curt A.; Buyukdura, O. Merih

    1987-01-01

    The mathematical nature of the orbital and frequency assignment problem for communications satellites is explored, and it is shown that choosing the correct permutations of the orbit locations and frequency assignments is an important step in arriving at values which satisfy the signal-quality requirements. Two methods are proposed to achieve better spectrum/orbit utilization. The first, called the delta S concept, leads to orbital assignment solutions via either mixed-integer or restricted basis entry linear programming techniques; the method guarantees good single-entry carrier-to-interference ratio results. In the second, a basis for specifying service areas is proposed for the Fixed Satellite Service. It is suggested that service areas should be specified according to the communications-demand density in conjunction with the delta S concept in order to enable the system planner to specify more satellites and provide more communications supply.

  14. More IMPATIENT: A Gridding-Accelerated Toeplitz-based Strategy for Non-Cartesian High-Resolution 3D MRI on GPUs

    PubMed Central

    Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.

    2013-01-01

    Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks by using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are provided in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203

  15. Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem

    NASA Astrophysics Data System (ADS)

    Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa

    A branch-and-bound algorithm (BB for short) is the most general technique for dealing with various combinatorial optimization problems. Even so, computation time is likely to increase exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB depends heavily upon node-variable selection strategies. In the case of a parallel BB, it is also necessary to prevent an increase in communication time, so it is important to pay attention to how many and what kind of nodes are to be transferred (called the sending-node selection strategy). In this paper, for the graph coloring problem, we propose several sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.

  16. Faster, better, cheaper: lean labs are the key to future survival.

    PubMed

    Bryant, Patsy M; Gulling, Richard D

    2006-03-28

    Process improvement techniques have been used in manufacturing for many years to rein in costs and improve quality. Health care is now grappling with similar challenges. The Department of Laboratory Services at Good Samaritan Hospital, a 560-bed facility in Dayton, OH, used the Lean process improvement method in a 12-week project to streamline its core laboratory processes. By analyzing the flow of samples through the system and identifying value-added and non-value-added steps, both in the laboratory and during the collection process, Good Samaritan's project team redesigned systems and reconfigured the core laboratory layout to trim collection-to-results time from 65 minutes to 40 minutes. As a result, virtually all morning results are available to physicians by 7 a.m., critical values are called to nursing units within 30 minutes, and core laboratory services are optimally staffed for maximum cost-effectiveness.

  17. Exploiting structure: Introduction and motivation

    NASA Technical Reports Server (NTRS)

    Xu, Zhong Ling

    1994-01-01

    This annual report summarizes the research activities that were performed from 26 Jun. 1993 to 28 Feb. 1994. We continued to investigate the robust stability of systems whose transfer functions or characteristic polynomials are affine multilinear functions of parameters. An approach that differs from 'Stability by Linear Process' and reduces the computational burden of checking the robust stability of a system with multilinear uncertainty was found for low-order (second- and third-order) cases. We proved a crucial theorem, the so-called Face Theorem; previously, we proved Kharitonov's Vertex Theorem and the Edge Theorem by Bartlett. The details of this proof are contained in the Appendix. This theorem provides a tool to describe the boundary of the image of the affine multilinear function. For SPR design, we have developed some new results. The third objective for this period is to design a controller for IHM by the H-infinity optimization technique. The details are presented in the Appendix.

  18. Radiation therapy planning with photons and protons for early and advanced breast cancer: an overview

    PubMed Central

    Weber, Damien C; Ares, Carmen; Lomax, Antony J; Kurtz, John M

    2006-01-01

    Postoperative radiation therapy substantially decreases local relapse and moderately reduces breast cancer mortality, but can be associated with increased late mortality due to cardiovascular morbidity and secondary malignancies. Sophistication of breast irradiation techniques, including conformal radiotherapy and intensity modulated radiation therapy, has been shown to markedly reduce cardiac and lung irradiation. The delivery of more conformal treatment can also be achieved with particle beam therapy using protons. Protons have superior dose distributional qualities compared to photons, as dose deposition occurs in a modulated narrow zone, called the Bragg peak. As a result, further dose optimization in breast cancer treatment can be reasonably expected with protons. In this review, we outline the potential indications and benefits of breast cancer radiotherapy with protons. Comparative planning studies and preliminary clinical data are detailed and future developments are considered. PMID:16857055

  19. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear-phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS), and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, the stopband, and the transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO, and ABC) and its use in optimizing the design problem. The performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As), and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
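    The band-error terms named above can be approximated on a dense frequency grid as in the sketch below. The band edges, the grid size, and the use of |H(π/2)| = 1/√2 as the transition-band target at the quadrature frequency are assumptions made for illustration; the paper's exact formulation and its hybrid Lagrange/nature-inspired optimizer are not reproduced.

    ```python
    import numpy as np
    from scipy.signal import freqz

    # Band-error measures for the prototype lowpass filter h of a 2-channel QMF bank,
    # approximated on a dense frequency grid.  The band edges wp/ws, the grid size,
    # and the |H(pi/2)| = 1/sqrt(2) transition target are illustrative assumptions.

    def qmf_errors(h, wp=0.4 * np.pi, ws=0.6 * np.pi, n=1024):
        w, H = freqz(h, worN=n)                               # response on [0, pi)
        mag, dw = np.abs(H), np.pi / n
        phi_p = np.sum((mag[w <= wp] - 1.0) ** 2) * dw        # passband error
        phi_s = np.sum(mag[w >= ws] ** 2) * dw                # stopband error
        _, Hq = freqz(h, worN=[np.pi / 2.0])                  # quadrature frequency
        phi_t = (np.abs(Hq[0]) - 1.0 / np.sqrt(2.0)) ** 2     # transition-band error
        return phi_p, phi_s, phi_t
    ```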

  20. Further Development, Support and Enhancement of CONDUIT

    NASA Technical Reports Server (NTRS)

    Veronica, Moldoveanu; Levine, William S.

    1999-01-01

    From the first airplanes steered by handles, wheels, and pedals to today's advanced aircraft, there has been a century of revolutionary inventions, all of them contributing to flight quality. The stability and controllability of an aircraft as they appear to a pilot are called flying or handling qualities. Many years after the first airplanes flew, flying qualities were identified and ranked from desirable to unsatisfactory, and engineers later developed design methods to satisfy these practical criteria. CONDUIT, which stands for Control Designer's Unified Interface, is a modern software package that provides a methodology for optimization of flight control systems in order to improve flying qualities. CONDUIT depends on an optimization engine called CONSOL-OPTCAD (C-O), which performs multicriterion parametric optimization and has been successfully tested on a variety of control problems. The optimization-based computational system, C-O, requires a particular control system description as a MATLAB file and is able to modify the vector of design parameters in an attempt to satisfy performance objectives and constraints specified by the designer in a C-type file. After the first optimization attempts on the UH-60A control system, an early interface system named GIFCORCODE (Graphical Interface for CONSOL-OPTCAD for Rotorcraft Controller Design) was created.

  1. Approaches to optimization of SS/TDMA time slot assignment. [satellite switched time division multiple access

    NASA Technical Reports Server (NTRS)

    Wade, T. O.

    1984-01-01

    Reduction techniques for traffic matrices are explored in some detail. These matrices arise in satellite switched time-division multiple access (SS/TDMA) techniques whereby switching of uplink and downlink beams is required to facilitate interconnectivity of beam zones. A traffic matrix is given to represent that traffic to be transmitted from n uplink beams to n downlink beams within a TDMA frame typically of 1 ms duration. The frame is divided into segments of time and during each segment a portion of the traffic is represented by a switching mode. This time slot assignment is characterized by a mode matrix in which there is not more than a single non-zero entry on each line (row or column) of the matrix. Investigation is confined to decomposition of an n x n traffic matrix by mode matrices with a requirement that the decomposition be 100 percent efficient or, equivalently, that the line(s) in the original traffic matrix whose sum is maximal (called critical line(s)) remain maximal as mode matrices are subtracted throughout the decomposition process. A method of decomposition of an n x n traffic matrix by mode matrices results in a number of steps that is bounded by n^2 - 2n + 2. It is shown that this upper bound exists for an n x n matrix wherein all the lines are maximal (called a quasi doubly stochastic (QDS) matrix) or for an n x n matrix that is completely arbitrary. That is, the fact that no method can exist with a lower upper bound is shown for both QDS and arbitrary matrices, in an elementary and straightforward manner.
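    To make the terminology concrete, the sketch below computes the critical line(s) of a traffic matrix and peels off one greedily chosen mode matrix (at most one nonzero entry per row and per column). It only illustrates the objects discussed above; the greedy choice shown does not by itself guarantee the 100-percent-efficient decomposition or the n^2 - 2n + 2 bound analyzed in the abstract.

    ```python
    import numpy as np

    # Helper sketch for SS/TDMA traffic matrices: find the critical line(s)
    # (rows/columns of maximal sum) and peel off one greedily chosen mode matrix
    # (at most one nonzero entry per row and per column).

    def critical_lines(T):
        row_sums, col_sums = T.sum(axis=1), T.sum(axis=0)
        m = max(row_sums.max(), col_sums.max())
        return ([i for i, s in enumerate(row_sums) if s == m],
                [j for j, s in enumerate(col_sums) if s == m], m)

    def greedy_mode_matrix(T):
        M = np.zeros_like(T)
        used_rows, used_cols = set(), set()
        # Visit entries from largest to smallest, taking one per free row/column.
        for i, j in sorted(np.ndindex(T.shape), key=lambda ij: -T[ij]):
            if T[i, j] > 0 and i not in used_rows and j not in used_cols:
                M[i, j] = T[i, j]
                used_rows.add(i)
                used_cols.add(j)
        return M    # T - M is the remaining traffic after this switching mode
    ```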

  2. Design optimization for permanent magnet machine with efficient slot per pole ratio

    NASA Astrophysics Data System (ADS)

    Potnuru, Upendra Kumar; Rao, P. Mallikarjuna

    2018-04-01

    This paper presents a methodology for the enhancement of a brushless direct current (BLDC) motor with 6 poles and 8 slots. In particular, it focuses on a multi-objective optimization using a Genetic Algorithm (GA) and Grey Wolf Optimization (GWO) developed in MATLAB. The optimization aims to maximize the maximum output power and minimize the total losses of the motor. The paper presents an application of the MATLAB optimization algorithms to BLDC motor design, with 7 design parameters chosen to be free. The optimal design parameters of the motor derived by the GA are compared with those obtained by the Grey Wolf Optimization technique. A comparison of the two enhancement approaches shows that the Grey Wolf Optimization technique has better convergence.
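    For readers unfamiliar with GWO, the sketch below shows the standard update in which the three best wolves (alpha, beta, delta) guide the rest of the pack, with the exploration coefficient a decreasing linearly from 2 to 0. It is a generic minimizer over box bounds; the BLDC motor cost model and the 7 design parameters of the paper are not included.

    ```python
    import numpy as np

    # Core Grey Wolf Optimization loop in its standard form, used here as a generic
    # bound-constrained minimizer; the BLDC motor cost model and the paper's 7 design
    # parameters are not included.

    def gwo(objective, bounds, n_wolves=20, n_iters=100, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        X = rng.uniform(lo, hi, size=(n_wolves, lo.size))
        for it in range(n_iters):
            fitness = np.array([objective(x) for x in X])
            alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three best wolves lead
            a = 2.0 * (1.0 - it / n_iters)                    # decreases linearly from 2 to 0
            for i in range(n_wolves):
                new = np.zeros_like(X[i])
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(lo.size), rng.random(lo.size)
                    A, C = 2.0 * a * r1 - a, 2.0 * r2
                    new += leader - A * np.abs(C * leader - X[i])
                X[i] = np.clip(new / 3.0, lo, hi)             # average of the three moves
        fitness = np.array([objective(x) for x in X])
        return X[np.argmin(fitness)], fitness.min()
    ```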

  3. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
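    The firefly algorithm used in the second stage above can be sketched in its standard form as follows: a dimmer firefly moves toward every brighter one, with attractiveness decaying with distance and a small random walk added. The parameters are the usual β0, γ, and α; the spline data-parameterization objective and the subsequent De Boor knot refinement are not reproduced.

    ```python
    import numpy as np

    # Core firefly algorithm in its standard form, used as a generic parameter
    # optimizer; the spline data-parameterization objective and the De Boor knot
    # refinement of the paper are not reproduced.

    def firefly(objective, bounds, n_fireflies=25, n_iters=100,
                beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        X = rng.uniform(lo, hi, size=(n_fireflies, lo.size))
        f = np.array([objective(x) for x in X])
        for _ in range(n_iters):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if f[j] < f[i]:                              # firefly j is brighter
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)       # attractiveness decays with distance
                        X[i] = np.clip(X[i] + beta * (X[j] - X[i])
                                       + alpha * (rng.random(lo.size) - 0.5), lo, hi)
                        f[i] = objective(X[i])
        best = int(np.argmin(f))
        return X[best], f[best]
    ```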

  4. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  5. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously improve the aerodynamic, sonic boom, and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
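    The Kreisselmeier-Steinhauser (KS) approach mentioned above folds several objective or constraint functions into a single smooth envelope so a standard nonlinear programming method can handle them. The standard form, with a user-chosen draw-down factor rho, is sketched below; nothing here is specific to the blade or wing-body problems of the paper.

    ```python
    import numpy as np

    # Standard Kreisselmeier-Steinhauser (KS) aggregation of several objective or
    # constraint functions g_k into one smooth envelope; rho is the draw-down
    # factor (larger rho tracks max(g_k) more closely).

    def ks_function(g_values, rho=50.0):
        g = np.asarray(g_values, dtype=float)
        g_max = g.max()                    # factored out for numerical stability
        return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho
    ```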

  6. Multidisciplinary Responses to the Sexual Victimization of Children: Use of Control Phone Calls.

    PubMed

    Canavan, J William; Borowski, Christine; Essex, Stacy; Perkowski, Stefan

    2017-10-01

    This descriptive study addresses the value of one-party consent phone calls regarding the sexual victimization of children. The authors reviewed 4 years of experience with children between the ages of 3 and 18 years selected for control phone calls after a forensic interview by the New York State Police forensic interviewer. The forensic interviewer identified appropriate cases for control phone calls considering New York State law, the child's capacity to make the call, the presence of another person to make the call, and a supportive residence. The control phone call process has been extremely effective forensically. Offenders choose to avoid trial by taking a plea bargain, thereby dramatically speeding up the criminal judicial and family court processes. An additional outcome of the control phone call is that the alleged offender's own words saved the child from the trauma of testifying in court. The control phone call reduced the need for children to repeat their stories to various interviewers. A successful control phone call gives the child a sense of vindication. It is the only technique that preserves the actual communication pattern between the alleged victim and the alleged offender, which can be of great value to the mental health professionals working with both the child and the alleged offender. Cautions must be considered regarding potential serious adverse effects on the child, and the multidisciplinary team members must work together in the control phone call. The descriptive nature of this study did not allow the authors to gather adequate demographic data, a subject that should be addressed in a future prospective study.

  7. Wavelet-bounded empirical mode decomposition for measured time series analysis

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2018-01-01

    Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function: a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area, with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD, we apply the proposed method first to a stationary, two-component signal and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
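
    The paper's separation measure - the area under a bounding function fitted around an IMF's spectrum - is not reproduced here, but the sketch below illustrates the same idea with a simpler stand-in: integrating the magnitude spectrum outside a band around the characteristic frequency. The band width and the test signals are assumptions; minimizing such a quantity over masking-signal parameters is the role the bounding-function area plays in WBEMD.

    ```python
    import numpy as np

    def spectral_leakage_area(imf, fs, f_char, half_band):
        """Stand-in for the WBEMD bounding-function area: the spectral mass of
        a candidate IMF lying outside a band around its characteristic
        frequency. A large value indicates mode mixing."""
        spec = np.abs(np.fft.rfft(imf))
        freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
        outside = np.abs(freqs - f_char) > half_band
        return np.trapz(np.where(outside, spec, 0.0), freqs)

    # Toy check: a pure 10 Hz component scores lower than one mixed with 60 Hz.
    fs = 1000.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    clean = np.sin(2 * np.pi * 10 * t)
    mixed = clean + 0.8 * np.sin(2 * np.pi * 60 * t)
    print(spectral_leakage_area(clean, fs, 10.0, 5.0) <
          spectral_leakage_area(mixed, fs, 10.0, 5.0))   # True
    ```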

  8. A microfluidic chaotic mixer platform for cancer stem cell immunocapture and release

    NASA Astrophysics Data System (ADS)

    Shaner, Sebastian Wesley

    Isolation of exceedingly rare and ambiguous cells, such as cancer stem cells (CSCs), from a pool of other, abundant cells is a daunting task, primarily due to the inadequately defined properties of such cells. With the phenotypes of different CSCs fairly well defined, immunocapture of CSCs is a desirable cell-specific capture technique. A microfluidic device is a proven candidate that offers a platform for user-constrained microenvironments that can be optimized for small-scale volumetric flow experimentation. In this study, we show how a well-known passive micromixer design (the staggered herringbone mixer, SHM) can be optimized to induce maximum chaotic mixing within antibody-laced microchannels and, ultimately, promote CSC capture. The principal design configuration of the device (Cancer Stem Cell Capture Chip - CSC3 (TM)) is called Single-Walled Staggered Herringbone (SWaSH). The CSC3 (TM) was constructed of a polydimethylsiloxane (PDMS) foundation and thinly coated with an alginate hydrogel derivatized with streptavidin. Our results showed that the non-stickiness of alginate and the antigen-specific antibodies allowed for superb target-specific cell isolation and negligible non-specific cell binding. Future engineering design directions include developing new configurations (e.g., Staggered High-Low Herringbone (SHiLoH) and offset SHiLoH) to optimize microvortex generation within the microchannels. This study's qualitative and quantitative results can help stimulate refinements in device design and prospective advancements in cancer stem cell isolation and more comprehensive single-cell and cluster analysis.

  9. Optimization of a constrained linear monochromator design for neutral atom beams.

    PubMed

    Kaltenbacher, Thomas

    2016-04-01

    A focused, ground-state, neutral atom beam, exploiting its de Broglie wavelength by means of atom optics, is used for neutral atom microscopy imaging. Employing Fresnel zone plates as a lens for these beams is a well-established microscopy technique. To date, even for favorable beam source conditions, a minimal focal spot size of slightly below 1 μm has been reached. This limitation is essentially set by the intrinsic spectral purity of the beam in combination with the chromatic aberration of the diffraction-based zone plate. Therefore, it is important to enhance the monochromaticity of the beam, enabling a higher spatial resolution, preferably below 100 nm. We propose to increase the monochromaticity of a neutral atom beam by means of a so-called linear monochromator set-up - a Fresnel zone plate in combination with a pinhole aperture - in order to gain more than one order of magnitude in spatial resolution. This configuration is known in X-ray microscopy and has proven to be useful, but it has not been applied to neutral atom beams. The main result of this work is a set of model-based optimal design parameters for this linear monochromator set-up followed by a second zone plate for focusing. The optimization simultaneously minimizes the focal spot size and maximizes the centre-line intensity at the detector position for an atom beam. The results presented in this work are for, but not limited to, a neutral helium atom beam. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks.

    PubMed

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-07-29

    Interference alignment (IA) is a novel technique that can effectively eliminate interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and the interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high-SNR regimes; however, its complexity increases dramatically as the number of users and antennas increases, limiting its application in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm approximately points to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line-search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm suppresses the interference leakage more rapidly than the traditional AMIL algorithm and achieves the same sum rate as the AMIL algorithm with far fewer iterations and much less execution time.
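
    The analytic step-size computation mentioned above can be illustrated generically: if the leakage along the search direction is a quartic in the step size, its minimizer is found among the real roots of the cubic derivative. The quartic coefficients below are made up; in the paper they would be derived from the precoding and decoding matrices.

    ```python
    import numpy as np

    def optimal_quartic_step(coeffs):
        """Minimize f(a) = c4*a^4 + c3*a^3 + c2*a^2 + c1*a + c0 analytically:
        evaluate f at the real roots of f'(a) and keep the best one."""
        c4, c3, c2, c1, c0 = coeffs
        stationary = np.roots([4 * c4, 3 * c3, 2 * c2, c1])
        real = stationary[np.abs(stationary.imag) < 1e-9].real
        f = np.poly1d(coeffs)
        return real[np.argmin(f(real))]

    # Hypothetical leakage-versus-step-size polynomial:
    print(optimal_quartic_step([1.0, -2.0, -1.0, 3.0, 5.0]))
    ```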

  11. Polyelectrolyte assisted charge titration spectrometry: Applications to latex and oxide nanoparticles.

    PubMed

    Mousseau, F; Vitorazi, L; Herrmann, L; Mornet, S; Berret, J-F

    2016-08-01

    The electrostatic charge density of particles is of paramount importance for the control of dispersion stability. Conventional methods use potentiometric, conductometric or turbidity titration but require large amounts of sample. Here we report a simple and cost-effective method called polyelectrolyte assisted charge titration spectrometry, or PACTS. The technique takes advantage of the propensity of oppositely charged polymers and particles to assemble upon mixing, leading to aggregation or phase separation. The mixed dispersions exhibit a maximum in light scattering as a function of the volumetric ratio X, and the peak position XMax is linked to the particle charge density according to σ ∼ D0·XMax, where D0 is the particle diameter. PACTS is successfully applied to organic latex, aluminum and silicon oxide particles of positive or negative charge using poly(diallyldimethylammonium chloride) and poly(sodium 4-styrenesulfonate). The protocol is also optimized with respect to important parameters such as pH and concentration, and to the polyelectrolyte molecular weight. The advantages of the PACTS technique are that it requires minute amounts of sample and that it is suitable for a broad variety of charged nano-objects. Copyright © 2016 Elsevier Inc. All rights reserved.
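
    As a small numerical illustration of the stated relation σ ∼ D0·XMax, the sketch below locates the scattering peak XMax on a synthetic titration curve and forms the product D0·XMax. All numbers are invented, and the proportionality prefactor (not given in the abstract) is omitted, so only relative charge densities could be compared this way.

    ```python
    import numpy as np

    # Synthetic titration curve: scattered intensity versus volumetric ratio X
    # (made-up data standing in for a measured PACTS curve).
    X = np.linspace(0.01, 2.0, 200)
    intensity = np.exp(-((X - 0.6) ** 2) / 0.02)   # peak placed at X = 0.6

    X_max = X[np.argmax(intensity)]                # peak position XMax
    D0 = 80e-9                                     # assumed particle diameter, m
    sigma_rel = D0 * X_max                         # proportional to the charge
                                                   # density, prefactor omitted
    print(X_max, sigma_rel)
    ```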

  12. [Surgery using master-slave manipulators and telementoring].

    PubMed

    Furukawa, T; Wakabayashi, G; Ozawa, S; Watanabe, M; Ohgami, M; Kitagawa, Y; Ishii, S; Arisawa, Y; Ohmori, T; Nohga, K; Kitajima, M

    2000-03-01

    Master-slave manipulators enhance surgeons' dexterity and improve the precision of surgical techniques by filtering out surgeons' tremors and scaling the movements of surgical instruments. Among clinically available master-slave manipulators, the epoch-making system called "da Vinci", developed by Intuitive Surgical Inc. (Mountain View, CA, USA) and equipped with 2 articulated joints at the tip of the surgical instruments allowing 7 degrees of freedom, mimics the movements of surgeons' wrists and fingers in the abdominal or thoracic cavity. Today, advanced telecommunications technology provides us with excellent motion images using only 3 ISDN telephone lines. Experienced surgeons at primary surgical sites have been able to perform complex procedures successfully by consulting specialists at remote sites. Because telecommunications costs have become lower each year, telementoring will become a routine surgical practice in the near future. The usefulness of surgical telementoring has been greatly enhanced by the development of a technique for illustrating on video images from two directions. Moreover, remote advisory surgeons will be able to provide the optimal operative field to operating surgeons using robotic camera holders with voice-recognition systems. In the near future, when master-slave manipulators are also coupled with telementoring systems, remote experts will be able to perform complex surgical procedures themselves.

  13. A comparison of four streamflow record extension techniques

    USGS Publications Warehouse

    Hirsch, Robert M.

    1982-01-01

    One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., ‘line of organic correlation,’ ‘reduced major axis,’ ‘unique solution,’ and ‘equivalence line.’ The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.
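
    For reference, the sketch below implements the MOVE.1 estimator in its usual form (the line of organic correlation / reduced major axis): the extension values preserve the mean and variance of the short record's concurrent period rather than minimizing squared error, which is why REG-type extensions tend to understate variability. The synthetic flows are invented; in practice the method is typically applied to log-transformed flows.

    ```python
    import numpy as np

    def move1_extend(x_concurrent, y_concurrent, x_extension):
        """MOVE.1 (maintenance of variance extension, type 1): estimate the
        short-record station y from the long-record base station x using the
        line of organic correlation, preserving the mean and variance of y."""
        xbar, ybar = np.mean(x_concurrent), np.mean(y_concurrent)
        sx, sy = np.std(x_concurrent, ddof=1), np.std(y_concurrent, ddof=1)
        r = np.corrcoef(x_concurrent, y_concurrent)[0, 1]
        slope = np.sign(r) * sy / sx
        return ybar + slope * (np.asarray(x_extension) - xbar)

    # Toy usage with synthetic (e.g. log-transformed) monthly flows.
    rng = np.random.default_rng(1)
    base = rng.normal(5.0, 1.0, 120)                        # long base record
    short = 2.0 + 0.8 * base[:60] + rng.normal(0, 0.3, 60)  # concurrent record
    extended = move1_extend(base[:60], short, base[60:])
    print(extended[:5])
    ```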

  14. Scalloping minimization in deep Si etching on Unaxis DSE tools

    NASA Astrophysics Data System (ADS)

    Lai, Shouliang; Johnson, Dave J.; Westerman, Russ J.; Nolan, John J.; Purser, David; Devre, Mike

    2003-01-01

    Sidewall smoothness is often a critical requirement for many MEMS devices, such as microfluidic devices and chemical, biological, and optical transducers, while a fast silicon etch rate is another. For such applications, time division multiplex (TDM) etch processes, the so-called "Bosch" processes, are widely employed. However, in conventional TDM processes, rough sidewalls result from scallop formation. To date, the amplitude of the scalloping has been directly linked to the silicon etch rate. At Unaxis USA Inc., we have developed a proprietary fast gas switching technique that is effective for scalloping minimization in deep silicon etching processes. With this technique, process cycle times can be reduced from several seconds to as little as a fraction of a second. Scallop amplitudes can be reduced with shorter process cycles. More importantly, as the scallop amplitude is progressively reduced, the silicon etch rate can be maintained relatively constant at high values. An optimized experiment showed that, at an etch rate in excess of 7 μm/min, scallops with a length of 116 nm and a depth of 35 nm were obtained. The fast gas switching approach offers an ideal manufacturing solution for MEMS applications where an extremely smooth sidewall and a fast etch rate are crucial.

  15. Support vector machine and principal component analysis for microarray data classification

    NASA Astrophysics Data System (ADS)

    Astuti, Widi; Adiwijaya

    2018-03-01

    Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, a technology called the microarray has played an important role in the diagnosis of cancer. Using data mining techniques, microarray data classification can be performed to improve the accuracy of cancer diagnosis compared with traditional techniques. Microarray data are characterized by small sample sizes but very high dimensionality. This poses a challenge for researchers to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposes the use of Principal Component Analysis (PCA) as a dimension-reduction method together with a Support Vector Machine (SVM), optimized through its kernel functions, as the classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross validation, and evaluation and analysis were conducted in terms of both accuracy and running time. The results showed that the scheme can obtain 100% accuracy for the Ovarian and Lung Cancer data when linear and cubic kernel functions are used. In terms of running time, PCA greatly reduced the running time for every data set.
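
    A minimal sketch of the PCA-plus-SVM pipeline described above is shown below, using scikit-learn and 5-fold cross validation. The data here are random stand-ins with the "few samples, many genes" shape of microarray data; the paper's seven data sets, its chosen number of principal components, and its exact kernel settings are not reproduced.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a microarray data set: 60 samples x 2000 genes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2000))
    y = rng.integers(0, 2, size=60)          # binary class labels

    # PCA for dimension reduction, then an SVM with a cubic (degree-3
    # polynomial) kernel, evaluated with 5-fold cross validation.
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=20),
                          SVC(kernel="poly", degree=3))
    scores = cross_val_score(model, X, y, cv=5)
    print("accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
    ```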

  16. Molecular Imaging in the Era of Personalized Medicine

    PubMed Central

    Jung, Kyung-Ho; Lee, Kyung-Han

    2015-01-01

    Clinical imaging creates visual representations of the body interior for disease assessment. The role of clinical imaging significantly overlaps with that of pathology, and diagnostic workflows largely depend on both fields. The field of clinical imaging is presently undergoing a radical change through the emergence of a new field called molecular imaging. This new technology, which lies at the intersection between imaging and molecular biology, enables noninvasive visualization of biochemical processes at the molecular level within living bodies. Molecular imaging differs from traditional anatomical imaging in that biomarkers known as imaging probes are used to visualize target molecules-of-interest. This ability opens up exciting new possibilities for applications in oncologic, neurological and cardiovascular diseases. Molecular imaging is expected to make major contributions to personalized medicine by allowing earlier diagnosis and predicting treatment response. The technique is also making a huge impact on pharmaceutical development by optimizing preclinical and clinical tests for new drug candidates. This review will describe the basic principles of molecular imaging and will briefly touch on three examples (from an immense list of new techniques) that may contribute to personalized medicine: receptor imaging, angiogenesis imaging, and apoptosis imaging. PMID:25812652

  17. Air-coupled ultrasound: a novel technique for monitoring the curing of thermosetting matrices.

    PubMed

    Lionetto, Francesca; Tarzia, Antonella; Maffezzoli, Alfonso

    2007-07-01

    A custom-made, air-coupled ultrasonic device was applied to cure monitoring of thick samples (7-10 mm) of unsaturated polyester resin at room temperature. A key point was the optimization of the experimental setup in order to propagate compression waves during the overall curing reaction, achieved by suitable placement of the noncontact transducers on the same side of the test material in the so-called pitch-catch configuration. The progress of polymerization was monitored through the variation of the time of flight of the propagating longitudinal waves. The exothermic character of the polymerization was taken into account by correcting the measured time of flight with that in air, obtained by sampling the air velocity during the experiment. The air-coupled ultrasonic results were compared with those obtained from conventional contact ultrasonic measurements. The good agreement between the air-coupled ultrasonic results and those obtained by rheological analysis demonstrated the reliability of air-coupled ultrasound in monitoring the changes in viscoelastic properties at gelation and vitrification. The position of the transducers on the same side of the sample makes this technique suitable for on-line cure monitoring during several composite manufacturing technologies.
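
    The time-of-flight tracking at the heart of this technique can be illustrated with a generic estimator (an assumption - the authors' exact signal processing is not described in the abstract): the lag that maximizes the cross-correlation between the emitted burst and the received signal.

    ```python
    import numpy as np

    def time_of_flight(emitted, received, fs):
        """Estimate time of flight as the lag maximizing the cross-correlation
        between the emitted burst and the received signal."""
        corr = np.correlate(received, emitted, mode="full")
        lag = np.argmax(corr) - (len(emitted) - 1)
        return lag / fs

    # Toy usage: a 5-cycle, 400 kHz tone burst delayed by 20 microseconds.
    fs = 10e6
    t = np.arange(0.0, 12.5e-6, 1.0 / fs)
    burst = np.sin(2 * np.pi * 400e3 * t) * np.hanning(t.size)
    delay = int(20e-6 * fs)
    rx = np.concatenate([np.zeros(delay), burst, np.zeros(200)])
    rx += 0.02 * np.random.default_rng(0).normal(size=rx.size)
    print(time_of_flight(burst, rx, fs) * 1e6, "microseconds")   # ~20
    ```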

  18. Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumption on the airport surface. Currently, NASA and American Airlines are jointly developing a decision-support tool called the Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers in making gate pushback decisions and improving the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT), using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how the prediction accuracy can be affected by the operational complexity at this airport and how we can improve the fast-time simulation model before implementing it with an airport scheduling algorithm in a real-time environment.
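
    As a hedged sketch of the data-driven approach mentioned above, the snippet below trains a regression model on hypothetical surface-traffic features to predict taxi times. The features, their relationship to taxi time, and the choice of gradient boosting are all assumptions for illustration; the actual SARDA/LINOS data and models are not reproduced.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    # Hypothetical features: aircraft on the surface, runway queue length,
    # gate-to-runway distance (km), departure-fix congestion flag.
    rng = np.random.default_rng(42)
    n = 500
    X = np.column_stack([rng.integers(0, 30, n),
                         rng.integers(0, 10, n),
                         rng.uniform(1.0, 5.0, n),
                         rng.integers(0, 2, n)])
    taxi_min = (4.0 + 0.3 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2]
                + 2.0 * X[:, 3] + rng.normal(0.0, 1.0, n))   # synthetic target

    X_tr, X_te, y_tr, y_te = train_test_split(X, taxi_min, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    print("MAE (min):", mean_absolute_error(y_te, model.predict(X_te)))
    ```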

  19. International Perspectives on Quality Assurance and New Techniques in Radiation Medicine: Outcomes of an IAEA Conference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shortt, Ken; Davidsson, Lena; Hendry, Jolyon

    2008-05-01

    The International Atomic Energy Agency organized an international conference called 'Quality Assurance and New Techniques in Radiation Medicine' (QANTRM). It dealt with quality assurance (QA) in all aspects of radiation medicine (diagnostic radiology, nuclear medicine, and radiotherapy) at the international level. Participants discussed QA issues pertaining to the implementation of new technologies and the need for education and staff training. The advantage of developing a comprehensive and harmonized approach to QA, covering both the technical and the managerial issues, was emphasized to ensure the optimization of benefits to patient safety and effectiveness. The necessary coupling between medical radiation imaging and radiotherapy was stressed, particularly for advanced technologies. However, the need for a more systematic approach to the adoption of advanced technologies was underscored by a report on failures in intensity-modulated radiotherapy dosimetry auditing tests in the United States, which could imply inadequate implementation of QA for these new technologies. A plenary session addressed the socioeconomic impact of introducing advanced technologies in resource-limited settings. How shall the dual gaps, one in access to basic medical services and the other in access to high-quality modern technology, be addressed?

  20. Molecular imaging in the era of personalized medicine.

    PubMed

    Jung, Kyung-Ho; Lee, Kyung-Han

    2015-01-01

    Clinical imaging creates visual representations of the body interior for disease assessment. The role of clinical imaging significantly overlaps with that of pathology, and diagnostic workflows largely depend on both fields. The field of clinical imaging is presently undergoing a radical change through the emergence of a new field called molecular imaging. This new technology, which lies at the intersection between imaging and molecular biology, enables noninvasive visualization of biochemical processes at the molecular level within living bodies. Molecular imaging differs from traditional anatomical imaging in that biomarkers known as imaging probes are used to visualize target molecules-of-interest. This ability opens up exciting new possibilities for applications in oncologic, neurological and cardiovascular diseases. Molecular imaging is expected to make major contributions to personalized medicine by allowing earlier diagnosis and predicting treatment response. The technique is also making a huge impact on pharmaceutical development by optimizing preclinical and clinical tests for new drug candidates. This review will describe the basic principles of molecular imaging and will briefly touch on three examples (from an immense list of new techniques) that may contribute to personalized medicine: receptor imaging, angiogenesis imaging, and apoptosis imaging.
