Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations to satisfy their own preferences, which are defined by an evolutionary computation algorithm designer, rather than following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly, so that the desired, preset objective(s) are reliably achieved. As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that must consider strategy selection in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its foundations from this perspective. This paper is a first step towards that objective, implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
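The equilibrium concept invoked in this abstract can be made concrete with a toy example. The sketch below (Python, purely illustrative; the paper's actual framework, payoff definitions, and operators are not reproduced here) finds the pure-strategy Nash equilibria of a small two-player game — cells where neither player can gain by unilaterally switching strategies:

```python
def pure_nash(payoff_a, payoff_b):
    """All pure-strategy Nash equilibria of a 2-player bimatrix game:
    cells where neither player gains by unilaterally switching strategy."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            best_for_a = payoff_a[i][j] == max(payoff_a[r][j] for r in range(rows))
            best_for_b = payoff_b[i][j] == max(payoff_b[i][c] for c in range(cols))
            if best_for_a and best_for_b:
                equilibria.append((i, j))
    return equilibria

# Prisoner's-dilemma payoffs: mutual defection (1, 1) is the unique equilibrium
eq = pure_nash([[3, 0], [5, 1]], [[3, 5], [0, 1]])
```

In the paper's setting the "players" would be individuals (or groups) choosing evolutionary operators and parameters rather than game moves.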
Genetic Algorithm for the Design of Electro-Mechanical Sigma Delta Modulator MEMS Sensors
Wilcock, Reuben; Kraft, Michael
2011-01-01
This paper describes a novel design methodology, based on genetic algorithms and statistical variation analysis, that uses non-linear models for complex closed-loop electro-mechanical sigma-delta modulators (EMΣΔM). The proposed methodology can quickly and efficiently design high-performance, high-order, closed-loop, near-optimal systems that are robust to sensor fabrication tolerances and electronic component variation. The use of full non-linear system models allows significant higher-order non-ideal effects to be taken into account, improving accuracy and confidence in the results. To demonstrate the effectiveness of the approach, two design examples are presented: a 5th-order low-pass EMΣΔM for a MEMS accelerometer, and a 6th-order band-pass EMΣΔM for the sense mode of a MEMS gyroscope. Each example was designed using the system in less than one day, with very little manual intervention. The strength of the approach is verified by SNR performances of 109.2 dB and 92.4 dB for the low-pass and band-pass systems respectively, coupled with excellent immunity to fabrication tolerances and parameter mismatch. PMID:22163691
Shook, Richard; /Marquette U. /SLAC
2010-08-25
The particle beam of the SXR (soft x-ray) beam line at the LCLS (Linac Coherent Light Source) has a high intensity in order to penetrate samples at the atomic level. However, the intensity is so high that many experiments fail because of severe damage. To correct this issue, attenuators are placed in the beam line to reduce the intensity to a level suitable for experimentation. Attenuation is defined as 'the gradual loss in intensity of any flux through a medium' [1]. Beryllium and boron carbide are found to survive the intensity of the beam, and as very thin films both materials work well as filters for reducing the beam intensity. Using a total of 12 filters, the first 9 made of beryllium and the rest of boron carbide, photon energies between 800 eV and 9000 eV can be attenuated. The filter design allows the attenuation to be varied so that experiments can obtain different beam intensities if desired. The attenuation step varies with the thickness of the filter as a power of 2: f(n) = x_0 * 2^n, where n is the desired attenuation step and x_0 is the initial thickness of the material. To allow this variation, a mechanism must be designed within the test chamber; it is visualized using the 3D computer-aided design tool Solid Edge.
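The doubling relation for filter thickness can be sketched directly (Python; the 1.0 µm base thickness below is a made-up value for illustration, not a figure from the report):

```python
def filter_thickness(x0, n):
    """Thickness for attenuation step n, per the relation f(n) = x0 * 2**n."""
    return x0 * 2 ** n

# Hypothetical base thickness of 1.0 um across the 12 filter steps
thicknesses = [filter_thickness(1.0, n) for n in range(12)]
```

Each step doubles the thickness, so 12 filters span a factor of 2**11 = 2048 between the thinnest and thickest.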
NASA Technical Reports Server (NTRS)
1976-01-01
Design concepts for a 1000 MW thermal stationary power plant employing the UF6-fueled gas core breeder reactor are examined. Three design combinations were considered: a gaseous UF6 core with a solid matrix blanket, a gaseous UF6 core with a liquid blanket, and a gaseous UF6 core with a circulating blanket. Results show the gaseous UF6 core with a circulating blanket was best suited to the power plant concept.
Design of robust systolic algorithms
Varman, P.J.; Fussell, D.S.
1983-01-01
A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.
Fashion sketch design by interactive genetic algorithms
NASA Astrophysics Data System (ADS)
Mok, P. Y.; Wang, X. X.; Xu, J.; Kwok, Y. L.
2012-11-01
Computer-aided design is vitally important for modern industry, particularly the creative industries. The fashion industry faces intense pressure to shorten its product development process. In this paper, a methodology is proposed for sketch design based on interactive genetic algorithms. The sketch design system consists of a sketch design model, a database, and a multi-stage sketch design engine. First, a sketch design model is developed, based on fashion design knowledge, to describe fashion product characteristics using parameters. Second, a database is built on the proposed sketch design model to define general style elements. Third, a multi-stage sketch design engine is used to construct the design. Moreover, an interactive genetic algorithm (IGA) is used to accelerate the sketch design process. The experimental results demonstrate that the proposed method is effective in helping laypersons achieve satisfactory fashion design sketches.
NASA Astrophysics Data System (ADS)
Beets, Timothy A.; Beno, Joseph H.; Chun, Moo-Young; Lee, Sungho; Park, Chan; Rafal, Marc; Worthington, Michael S.; Yuk, In-Soo
2012-09-01
A near-infrared spectrograph (GMTNIRS) has been designed and proposed as a first-light instrument on the Giant Magellan Telescope (GMT). GMTNIRS includes modular JHK and LM spectrograph units mounted to two sides of a cryogenic optical bench. The optical bench and surrounding protective radiation (thermal) shield are containerized within a rigid cryostat vessel, which mounts to the GMT instrument platform. A support structure on the secondary side of the optical bench provides multi-dimensional stiffness to the optical bench, to prevent excessive displacements of the optical components during tracking of the telescope. Extensive mechanical simulation and optimization were used to arrive at synergistic designs of the optical bench, support structure, cryostat, and thermal isolation system. Additionally, detailed steady-state and transient thermal analyses were conducted to optimize and verify the mechanical designs, to maximize thermal efficiency, and to size cryogenic coolers and conductors. This paper explains the mechanical and thermal design points stemming from optical component placement and mounting, and the structural and thermal characteristics needed to achieve the instrument science requirements. The thermal and mechanical simulations are described and the data summarized. Sufficient details of the analyses and data are provided to validate the design decisions.
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan
2004-01-01
The field of mechanism design is concerned with setting (incentives superimposed on) the utility functions of a group of players so as to induce desirable joint behavior of those players. It arose in the context of traditional equilibrium game theory applied to games involving human players. This has led it to carry many implicit restrictions, which strongly limit its scope. In particular, it ignores many issues that are crucial for systems that are large (and therefore generally far off-equilibrium) and/or composed of non-human players (e.g., computer-based agents). It has also concentrated on issues that are often irrelevant in those broader domains (e.g., incentive compatibility). This paper illustrates these shortcomings by reviewing some of the recent theoretical work on the design of collectives, a body of work that constitutes a substantial broadening of mechanism design. It then presents computer experiments, based on a recently suggested nanotechnology testbed, that demonstrate the power of that extended version of mechanism design.
General lossless planar coupler design algorithms.
Vance, Rod
2015-08-01
This paper reviews and extends two classes of algorithms for the design of planar couplers with any unitary transfer matrix as design goals. Such couplers find use in optical sensing for fading-free interferometry, coherent optical network demodulation, and also for quantum state preparation in quantum optical experiments and technology. The two classes are (1) "atomic coupler algorithms" decomposing a unitary transfer matrix into a planar network of 2×2 couplers, and (2) "Lie theoretic algorithms" concatenating unit cell devices with variable phase delay sets that form canonical coordinates for neighborhoods in the Lie group U(N), so that the concatenations realize any transfer matrix in U(N). As well as review, this paper gives (1) a Lie theoretic existence proof showing that both classes of algorithms work and (2) direct proofs of the efficacy of the "atomic coupler" algorithms. The Lie theoretic proof strengthens former results. 5×5 couplers designed by both methods are compared by Monte Carlo analysis, which suggests that atomic rather than Lie theoretic methods yield designs more resilient to manufacturing imperfections. PMID:26367295
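A 2×2 atomic coupler is, up to phase conventions, a unitary rotation. The sketch below (Python; the parameterization is one common convention chosen for illustration, not necessarily the paper's) builds such a matrix and checks unitarity — the "lossless" property the design algorithms preserve:

```python
import cmath

def coupler(theta, phi):
    """Transfer matrix of a lossless 2x2 coupler in one common convention:
    a rotation by theta with a relative phase phi on the cross terms."""
    c, s = cmath.cos(theta), cmath.sin(theta)
    return [[c, -s * cmath.exp(1j * phi)],
            [s * cmath.exp(-1j * phi), c]]

def is_unitary(matrix, tol=1e-12):
    """Check M @ M^dagger == I for a 2x2 complex matrix."""
    for i in range(2):
        for j in range(2):
            entry = sum(matrix[i][k] * matrix[j][k].conjugate() for k in range(2))
            if abs(entry - (1.0 if i == j else 0.0)) > tol:
                return False
    return True
```

Concatenating matrices of this form (with interleaved phase delays) is exactly how a planar network realizes a larger target unitary.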
Fast Fourier Transform algorithm design and tradeoffs
NASA Technical Reports Server (NTRS)
Kamin, Ray A., III; Adams, George B., III
1988-01-01
The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. These programs are compared with the best current Cray-2 FFT program.
Automated Antenna Design with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Linden, Derek; Hornby, Greg; Lohn, Jason; Globus, Al; Krishnakumar, K.
2006-01-01
Current methods of designing and optimizing antennas by hand are time and labor intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
Fuzzy logic and guidance algorithm design
Leng, G.
1994-12-31
This paper explores the use of fuzzy logic in the design of a terminal guidance algorithm for an air-to-surface missile against a stationary target. The design objectives are (1) a smooth transition at lock-on, (2) large impact angles, and (3) self-limiting acceleration commands. The method of reverse kinematics is used in the design of the membership functions and the rule base. Simulation results for a Mach 0.8 missile with a 6g acceleration limit are compared with a traditional proportional navigation scheme.
Multidisciplinary design optimization using genetic algorithms
NASA Astrophysics Data System (ADS)
Unal, Resit
1994-12-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles, since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; consequently, design problems that include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GAs) uses a search procedure that is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than the single best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive because they use only objective function values in the search process, so gradient calculations are avoided; hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization, trajectory analysis, space structure design, and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
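The loop this abstract describes — evaluate a population, select by fitness, apply crossover and mutation, keep the fittest — can be sketched as follows (Python; a minimal real-coded GA minimizing a toy function, with operator choices that are illustrative assumptions rather than those of the study):

```python
import random

def genetic_search(fitness, bounds, pop_size=60, generations=80,
                   mutation_rate=0.1, seed=1):
    """Minimal real-coded GA (minimization): size-2 tournament selection,
    blend (averaging) crossover, uniform-reset mutation, 2-elite survival."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)          # best first

        def pick():                                # size-2 tournament
            a, b = rng.sample(scored, 2)
            return a if fitness(a) < fitness(b) else b

        nxt = [list(scored[0]), list(scored[1])]   # elitism: carry the 2 best
        while len(nxt) < pop_size:
            p, q = pick(), pick()
            child = [(x + y) / 2 for x, y in zip(p, q)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:   # occasional random reset
                    child[i] = rng.uniform(lo, hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Minimize a toy 2-variable objective over a box-bounded design space
best = genetic_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                      [(-10, 10), (-10, 10)])
```

Note that only objective values are used: no gradients appear anywhere in the loop, which is why discrete or discontinuous design variables pose no special difficulty.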
Fuzzy controller design by parallel genetic algorithms
NASA Astrophysics Data System (ADS)
Mondelli, G.; Castellano, G.; Attolico, Giovanni; Distante, Arcangelo
1998-03-01
Designing a fuzzy system involves defining membership functions and constructing rules. Carrying out these two steps manually often results in a poorly performing system. Genetic Algorithms (GAs) have proved to be a useful tool for designing optimal fuzzy controllers. To increase the efficiency and effectiveness of their application, parallel GAs (PGAs), evolving several populations synchronously with different balances between exploration and exploitation, have been implemented on a SIMD machine (APE100/Quadrics). The parameters to be identified are coded in such a way that the algorithm implicitly provides a compact fuzzy controller, finding only the necessary rules and removing useless inputs from them. Early results, using as a test case a fuzzy controller implementing the wall-following task for a real vehicle, provided better fitness values in fewer generations than previous experiments with a sequential GA implementation.
Material design using surrogate optimization algorithm
NASA Astrophysics Data System (ADS)
Khadke, Kunal R.
Nanocomposite ceramics have been widely studied in order to tailor desired properties at high temperatures. Methodologies for material design are still under development. While finite element modeling (FEM) provides significant insight into material behavior, few design researchers have addressed the design paradox that accompanies this rapid design space expansion. A surrogate optimization model management framework has been proposed to make this design process tractable. In the surrogate optimization material design tool, the analysis cost is reduced by performing simulations on the surrogate model instead of the high-density finite element model. The methodology is applied to find the optimal number of silicon carbide (SiC) particles in a silicon nitride (Si3N4) composite with maximum fracture energy [2]. Along with a deterministic optimization algorithm, model uncertainties have also been considered using the robust design optimization (RDO) method, ensuring a design of minimum sensitivity to changes in the parameters. Applied to nanocomposite design, these methodologies significantly reduce cost and design cycle time.
Designing conducting polymers using genetic algorithms
NASA Astrophysics Data System (ADS)
Giro, R.; Cyrillo, M.; Galvão, D. S.
2002-11-01
We have developed a new methodology to design conducting polymers with pre-specified properties. The methodology is based on the use of genetic algorithms (GAs) coupled to the Negative Factor Counting technique. We present the results of a case study of polyanilines, one of the most important families of conducting polymers. The methodology proved capable of automatically generating solutions to the problem of determining the optimum relative concentration for binary and ternary disordered polyaniline alloys exhibiting metallic properties. The methodology is completely general and can be used to design new classes of materials.
Predicting Resistance Mutations Using Protein Design Algorithms
Frey, K.; Georgiev, I; Donald, B; Anderson, A
2010-01-01
Drug resistance resulting from mutations to the target is an unfortunately common phenomenon that limits the lifetime of many of the most successful drugs. In contrast to investigating mutations after clinical exposure, it would be powerful to incorporate strategies early in the development process to predict and overcome the effects of possible resistance mutations. Here we present a unique prospective application of an ensemble-based protein design algorithm, K*, to predict potential resistance mutations in dihydrofolate reductase from Staphylococcus aureus, using positive design to maintain catalytic function and negative design to interfere with binding of a lead inhibitor. Enzyme inhibition assays show that three of the four highly ranked predicted mutants are active yet display lower affinity (18-, 9-, and 13-fold) for the inhibitor. A crystal structure of the top-ranked mutant enzyme validates the predicted conformations of the mutated residues and the structural basis of the loss of potency. The use of protein design algorithms to predict resistance mutations could be incorporated into a lead design strategy against any target that is susceptible to mutational resistance.
Fast search algorithms for computational protein design.
Traoré, Seydou; Roberts, Kyle E; Allouche, David; Donald, Bruce R; André, Isabelle; Schiex, Thomas; Barbe, Sophie
2016-05-01
One of the main challenges in computational protein design (CPD) is the huge size of the protein sequence and conformational space that has to be computationally explored. Recently, we showed that state-of-the-art combinatorial optimization technologies based on Cost Function Network (CFN) processing allow speeding up provable rigid backbone protein design methods by several orders of magnitude. Building on this, we improved and injected CFN technology into the well-established CPD package Osprey so that all Osprey CPD algorithms benefit from the associated speedups. Because Osprey fundamentally relies on the ability of A* to produce conformations in increasing order of energy, we defined new A* strategies combining CFN lower bounds with a new side-chain positioning-based branching scheme. Beyond the speedups obtained in the new A*-CFN combination, this novel branching scheme enables a much faster enumeration of suboptimal sequences, far beyond what is reachable without it. Together with the immediate and important speedups provided by CFN technology, these developments directly benefit all the algorithms that previously relied on the DEE/A* combination inside Osprey and make it possible to solve larger CPD problems with provable algorithms. PMID:26833706
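The role of A* here — emitting solutions in increasing order of energy, guided by an admissible lower bound — can be illustrated on a toy additive problem (Python; the sum-of-per-position-minima bound below is a deliberately weak stand-in for the much tighter CFN/DEE bounds the paper uses):

```python
import heapq

def astar_enumerate(choices):
    """Yield full assignments (one option per position, cost = sum of option
    costs) in increasing total cost, using A* with the admissible bound
    'cost so far + sum of minima over the remaining positions'."""
    n = len(choices)
    suffix_min = [0.0] * (n + 1)                   # min completion cost per depth
    for i in range(n - 1, -1, -1):
        suffix_min[i] = suffix_min[i + 1] + min(choices[i])
    heap = [(suffix_min[0], 0, 0.0, ())]           # (bound, depth, cost, picks)
    while heap:
        bound, depth, cost, picks = heapq.heappop(heap)
        if depth == n:
            yield cost, picks                      # bounds are admissible, so
            continue                               # leaves pop in cost order
        for j, c in enumerate(choices[depth]):
            g = cost + c
            heapq.heappush(heap, (g + suffix_min[depth + 1], depth + 1,
                                  g, picks + (j,)))

# Two positions with two options each: enumerated from cheapest to dearest
orders = list(astar_enumerate([[1.0, 3.0], [2.0, 2.5]]))
```

Because the bound never overestimates the best completion, assignments leave the heap in exactly increasing total cost — the property that makes ordered enumeration of suboptimal sequences possible.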
Problem Solving Techniques for the Design of Algorithms.
ERIC Educational Resources Information Center
Kant, Elaine; Newell, Allen
1984-01-01
Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
Discussed are: methods for developing logic design together with algorithms for failure testing; a method for designing logic for ultra-large-scale integration; the extension of quantum calculus to describe the functional behavior of a mechanism component by component and to compute tests for failures in the mechanism using the diagnosis algorithm; and the development of an algorithm for the multi-output two-level minimization problem.
Algorithm design of liquid lens inspection system
NASA Astrophysics Data System (ADS)
Hsieh, Lu-Lin; Wang, Chun-Chieh
2008-08-01
In the mobile lens domain, glass lenses are often applied where high resolution is required, but a glass zoom lens must be paired with movable machinery and a voice-coil motor, which imposes space limits on miniaturized designs. With the development of high-level molded-component technology, the liquid lens has become a focus of mobile phone and digital camera companies. A liquid lens set with a solid optical lens and driving circuit has replaced the original components; as a result, the volume requirement is decreased to merely 50% of the original design. Moreover, with its high focus-adjusting speed, low energy requirement, high durability, and low-cost manufacturing process, the liquid lens shows advantages in the competitive market. In the past, the authors only needed to inspect scratch defects made by external force on glass lenses. For the liquid lens, the authors need to inspect the state of four different structural layers owing to its different design and structure. In this paper, the authors apply machine vision and digital image processing technology to perform inspections on a particular layer according to the needs of users. According to our experiment results, the proposed algorithm can automatically delete the out-of-focus background, extract the region of interest, and find and analyze defects efficiently in the particular layer. In the future, the authors will combine this algorithm with automatic-focus technology to implement inside inspection based on product inspection demands.
Optimal Design of Geodetic Network Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vajedian, Sanaz; Bagheri, Hosein
2010-05-01
A geodetic network is a network measured precisely by terrestrial surveying techniques based on angle and distance measurements; it can monitor the stability of dams and towers and the surrounding land, and can monitor surface deformation. The main goals of an optimal geodetic network design process are to find the proper locations of control stations (first-order design) and the proper weights of observations (second-order design) so as to satisfy all the criteria for network quality, which is evaluated by the network's accuracy, reliability (internal and external), sensitivity, and cost. The first-order design problem can be treated as a numeric optimization problem. In this design, finding the unknown coordinates of the network stations is an important issue; to find these unknown values, the geodetic observations (angle and distance measurements) must be processed by an adjustment method. In this regard, inverse problem algorithms are needed. Inverse problem algorithms are methods for finding optimal solutions to given problems and include classical and evolutionary computations. The classical approaches are analytical methods and are useful for finding the optimum of a continuous and differentiable function. The least squares (LS) method is one classical technique that derives estimates of stochastic variables and their distribution parameters from observed samples. Evolutionary algorithms are adaptive optimization and search procedures, inspired by the mechanisms of natural evolution, that find solutions to problems. These methods generate new points in the search space by applying operators to current points, moving statistically toward more optimal places in the search space. The genetic algorithm (GA) is the evolutionary algorithm considered in this paper. The algorithm starts with the definition of an initial population, and then the operators of selection, replication, and variation are applied
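The least-squares idea can be shown on the simplest possible case (Python; a straight-line fit via the normal equations — a toy stand-in, since a real network adjustment linearizes angle/distance observation equations and solves a much larger normal system):

```python
def least_squares_fit(points):
    """Ordinary least-squares line fit y = a*x + b via the normal equations.
    points: iterable of (x, y) observation pairs; returns (a, b)."""
    pts = list(points)
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    det = n * sxx - sx * sx          # singular if all x are identical
    a = (n * sxy - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

# Exact recovery for noise-free points on y = 2x + 1
slope, intercept = least_squares_fit([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```

In the geodetic setting, the unknowns would be station coordinates rather than slope and intercept, and observations would carry the weights chosen in second-order design.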
Design of SPARC V8 superscalar pipeline applied Tomasulo's algorithm
NASA Astrophysics Data System (ADS)
Yang, Xue; Yu, Lixin; Feng, Yunkai
2014-04-01
A superscalar pipeline applying Tomasulo's algorithm is presented in this paper. The design begins with a dual-issue superscalar processor based on LEON2. Tomasulo's algorithm is adopted to implement out-of-order execution. Instructions are separated into three different classes and executed by three different function units so as to reduce area and increase execution speed. Results are written back to registers in program order, to ensure functional correctness. The mechanisms of the reservation station, common data bus, and reorder buffer are presented in detail. The structure can issue and execute up to three instructions at a time. Branch prediction can also be realized through the reorder buffer. Performance of the superscalar pipeline applying Tomasulo's algorithm is improved by 41.31% compared to a single-issue pipeline.
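As a hedged illustration of the in-order write-back that a reorder buffer enforces, here is a toy software model (the actual design is hardware in a SPARC V8 pipeline; the tags, sizes, and interface below are invented):

```python
from collections import OrderedDict

class ReorderBuffer:
    """Toy ROB: results may complete out of order, but commit in program order."""

    def __init__(self, size=8):
        self.entries = OrderedDict()   # program order: tag -> result or None
        self.size = size

    def issue(self, tag):
        assert len(self.entries) < self.size
        self.entries[tag] = None       # allocate an entry at issue time

    def complete(self, tag, value):
        self.entries[tag] = value      # may happen out of program order

    def commit(self):
        out = []
        # Retire only while the head-of-buffer entry has a result.
        while self.entries and next(iter(self.entries.values())) is not None:
            tag, value = self.entries.popitem(last=False)
            out.append((tag, value))
        return out

rob = ReorderBuffer()
for tag in ("i1", "i2", "i3"):
    rob.issue(tag)
rob.complete("i3", 30)        # i3 finishes first...
print(rob.commit())           # ...but nothing commits before i1: prints []
rob.complete("i1", 10)
rob.complete("i2", 20)
print(rob.commit())           # all three retire in program order
```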
Design of Protein-Protein Interactions with a Novel Ensemble-Based Scoring Algorithm
NASA Astrophysics Data System (ADS)
Roberts, Kyle E.; Cushing, Patrick R.; Boisguerin, Prisca; Madden, Dean R.; Donald, Bruce R.
Protein-protein interactions (PPIs) are vital for cell signaling, protein trafficking and localization, gene expression, and many other biological functions. Rational modification of PPI targets provides a mechanism to understand their function and importance. However, PPI systems often have many more degrees of freedom and flexibility than the small-molecule binding sites typically targeted by protein design algorithms. To handle these challenging design systems, we have built upon the computational protein design algorithm K* [8,19] to develop a new design algorithm to study protein-protein and protein-peptide interactions. We validated our algorithm through the design and experimental testing of novel peptide inhibitors.
UWB Tracking System Design with TDOA Algorithm
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan
2006-01-01
This presentation discusses an ultra-wideband (UWB) tracking system design effort using the Time Difference of Arrival (TDOA) tracking algorithm. UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least square method is chosen to solve the TDOA non-linear equations. Matlab simulations in both two-dimensional space and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low-noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by a prototype development project Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center for aid in surveillance around the International Space Station (ISS).
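The TDOA equations are non-linear, but with one anchor taken as reference they become linear in the unknown position and the reference range, which is the core idea behind two-stage weighted least-squares solvers such as the one used here. A minimal noiseless sketch (anchor layout and target position are invented, and the second weighting stage is omitted):

```python
import numpy as np

# With anchor 0 as reference and d_i = r_i - r_0 the TDOA range
# differences, the hyperbolic equations linearize to
#   2 (a_i - a_0) . p + 2 d_i r0 = |a_i|^2 - |a_0|^2 - d_i^2
# in the unknowns (x, y, r0).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p_true = np.array([3.0, 4.0])

ranges = np.linalg.norm(anchors - p_true, axis=1)
d = ranges[1:] - ranges[0]                  # noiseless TDOA range differences

a0 = anchors[0]
A = np.column_stack([2 * (anchors[1:] - a0), 2 * d])
b = np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2) - d ** 2
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(sol[:2], 6))   # estimated (x, y)
```

With noisy TDOA data the real two-stage method additionally weights the equations by the measurement covariance and refines the estimate in a second pass.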
Optimal brushless DC motor design using genetic algorithms
NASA Astrophysics Data System (ADS)
Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.
2010-11-01
This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.
Algorithmic Processes for Increasing Design Efficiency.
ERIC Educational Resources Information Center
Terrell, William R.
1983-01-01
Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)
In-Trail Procedure (ITP) Algorithm Design
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.
Optimization Algorithm for Designing Diffractive Optical Elements
NASA Astrophysics Data System (ADS)
Agudelo, Viviana A.; Orozco, Ricardo Amézquita
2008-04-01
Diffractive Optical Elements (DOEs) are commonly used in many applications such as laser beam shaping, recording of micro-reliefs, wavefront analysis, metrology and many others where they can replace single or multiple conventional optical elements (diffractive or refractive). One of the most versatile ways to produce them is to use computer-assisted techniques for their design and optimization, together with optical or electron-beam micro-lithography techniques for the final fabrication. The fundamental figures of merit involved in the optimization of such devices are the diffraction efficiency and the signal-to-noise ratio evaluated in the reconstructed wavefront at the image plane. A design and optimization algorithm based on the error-reduction method (Gerchberg and Saxton) is proposed to obtain binary, discrete, phase-only Fresnel DOEs that will be used to produce specific intensity patterns. Some experimental results were obtained using a spatial light modulator acting as a binary programmable diffractive phase element. Although the DOEs optimized here are discrete in phase, they present an acceptable signal-to-noise ratio and diffraction efficiency.
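The error-reduction (Gerchberg-Saxton) loop for a phase-only element can be sketched with FFTs. This toy version uses a continuous phase and an invented square target window; the paper's DOEs are additionally quantized to binary phase:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0                      # desired intensity window
target /= np.linalg.norm(target)

phase = rng.uniform(0, 2 * np.pi, (N, N))
for _ in range(200):
    field = np.exp(1j * phase)                  # unit-amplitude DOE plane
    far = np.fft.fft2(field)                    # far field modeled by FFT
    far = target * np.exp(1j * np.angle(far))   # impose target amplitude
    near = np.fft.ifft2(far)
    phase = np.angle(near)                      # keep phase only (phase-only DOE)

# Diffraction efficiency: fraction of power landing inside the target window.
intensity = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
efficiency = float(intensity[target > 0].sum() / intensity.sum())
print(round(efficiency, 3))
```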
Birefringent filter design by use of a modified genetic algorithm.
Wen, Mengtao; Yao, Jianping
2006-06-10
A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation angles and the element lengths are determined by the genetic algorithm to minimize the sidelobe levels of the filters. Unlike a standard genetic algorithm, the proposed algorithm reduces the problem space of the birefringent filter design to achieve faster speed and better performance. The design of 4-, 8-, and 14-section birefringent filters with improved sidelobe suppression ratios is realized. A 4-section birefringent filter designed with the algorithm is experimentally demonstrated. PMID:16761031
Organization mechanism and counting algorithm on vertex-cover solutions
NASA Astrophysics Data System (ADS)
Wei, Wei; Zhang, Renquan; Niu, Baolong; Guo, Binghui; Zheng, Zhiming
2015-04-01
Counting the number of solutions of combinatorial optimization problems is an important topic in the study of computational complexity; this paper is concerned with Vertex-Cover. First, we investigate the organization of Vertex-Cover solution spaces via the underlying connectivity of unfrozen vertices and provide facts on the global and local environment. Then, a Vertex-Cover solution-number counting algorithm is proposed and its complexity analysis is provided; the results fit very well with simulations and perform better than those by 1-RSB in the neighborhood of c = e for random graphs. Based on the algorithm, the variation and fluctuation statistics of the solution number are studied to reveal the evolution mechanism of the solution numbers. Furthermore, the marginal probability distributions on the solution space are investigated on both random graphs and scale-free graphs to illustrate the different evolution characteristics of their solution spaces. Thus, counting solutions based on the graph expression of the solution space should be an alternative and meaningful way to study the hardness of NP-complete and #P-complete problems, and appropriate algorithm design can help achieve better approximations for solving combinatorial optimization problems and the corresponding counting problems.
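As a baseline for what "solution-number counting" means here, a brute-force enumerator of minimum vertex covers on a tiny graph (exponential in the graph size, so only a sanity check against which cleverer counting algorithms can be tested):

```python
from itertools import combinations

def count_min_vertex_covers(n, edges):
    """Return (cover size, number of minimum vertex covers) by exhaustion."""
    for k in range(n + 1):
        covers = [S for S in combinations(range(n), k)
                  if all(u in S or v in S for u, v in edges)]
        if covers:
            return k, len(covers)

# 4-cycle 0-1-2-3-0: the minimum covers are {0, 2} and {1, 3}.
print(count_min_vertex_covers(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (2, 2)
```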
An assessment of 'shuffle algorithm' collision mechanics for particle simulations
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Boyd, Iain D.
1991-01-01
Among the algorithms for collision mechanics used at present, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization, but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, a simulation of flows in monatomic gases was performed and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics in cases where the goal of the calculations is mean profiles of density and temperature.
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
Mechanical Design Handbook for Elastomers
NASA Technical Reports Server (NTRS)
Darlow, M.; Zorzi, E.
1986-01-01
Mechanical Design Handbook for Elastomers reviews the state of the art in elastomer-damper technology, with particular emphasis on applications of high-speed rotor dampers. It is a self-contained reference, but includes some theoretical discussion to help the reader understand how and why dampers are used for rotating machines. The handbook presents a step-by-step procedure for the design of elastomer dampers and detailed examples of actual elastomer damper applications.
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
Parallel optimization algorithms and their implementation in VLSI design
NASA Technical Reports Server (NTRS)
Lee, G.; Feeley, J. J.
1991-01-01
Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for developing a logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics to minimize the computation of tests; and (2) a method of logic design for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
Optimal Pid Controller Design Using Adaptive Vurpso Algorithm
NASA Astrophysics Data System (ADS)
Zirkohi, Majid Moradi
2015-04-01
The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization algorithm (VURPSO). The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. An optimal design of a Proportional-Integral-Derivative (PID) controller is then obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and local exploration abilities of the proposed algorithm. This helps the system reach the optimal solution quickly and saves computation time. Comparisons on optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster and in less computation time to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.
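A generic PSO sketch showing the role of the momentum (inertia) factor that AVURPSO adapts; here it is simply decayed linearly, and a stand-in quadratic cost replaces a real PID/plant simulation:

```python
import random

random.seed(0)

def cost(x):
    # Stand-in for an ISE/ITAE-style controller performance index.
    return sum((xi - 1.0) ** 2 for xi in x)

def pso(dim=3, swarm=20, iters=100, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters            # decaying momentum (inertia) factor
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * random.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest

best = pso()
print(round(cost(best), 4))
```

AVURPSO's contribution is adapting this momentum factor on-line rather than decaying it on a fixed schedule.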
Aerodynamic optimum design of transonic turbine cascades using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Li, Jun; Feng, Zhenping; Chang, Jianzhong; Shen, Zuda
1997-06-01
This paper presents an aerodynamic optimum design method for transonic turbine cascades based on Genetic Algorithms coupled to an inviscid Euler flow solver and a boundary-layer calculation. The Genetic Algorithms control the evolution of a population of cascades towards an optimum design. The fitness value of each string is evaluated using the flow solver. The design procedure has been developed and the behavior of the genetic algorithms has been tested. The objective functions of the design examples are the minimum mean-square deviation between the target pressure and the computed pressure and the minimum amount of user expertise.
Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.
Omelyan, I P
2006-09-01
A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations. PMID:17025782
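The Störmer-Verlet scheme used as a baseline in these comparisons, applied to a unit harmonic oscillator (force F = -x); a minimal sketch of the velocity form, not the extrapolated gradient-like algorithms themselves:

```python
def verlet(x, v, dt, steps):
    """Velocity (Störmer-)Verlet for the unit harmonic oscillator, F = -x."""
    a = -x
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = -x                        # force at the new position
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
    return x, v

x, v = verlet(1.0, 0.0, 0.01, 100_000)    # integrate for 1000 time units
energy = 0.5 * v * v + 0.5 * x * x
print(round(energy, 6))                   # stays near the initial value 0.5
```

The bounded energy error over very long runs is the symplectic property the paper's higher-order schemes preserve while avoiding force-gradient evaluations.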
A parallel sparse algorithm targeting arterial fluid mechanics computations
NASA Astrophysics Data System (ADS)
Manguoglu, Murat; Takizawa, Kenji; Sameh, Ahmed H.; Tezduyar, Tayfun E.
2011-09-01
Iterative solution of large sparse nonsymmetric linear equation systems is one of the numerical challenges in arterial fluid-structure interaction computations. This is because the fluid mechanics parts of the fluid + structure block of the equation system that needs to be solved at every nonlinear iteration of each time step corresponds to incompressible flow, the computational domains include slender parts, and accurate wall shear stress calculations require boundary layer mesh refinement near the arterial walls. We propose a hybrid parallel sparse algorithm, domain-decomposing parallel solver (DDPS), to address this challenge. As the test case, we use a fluid mechanics equation system generated by starting with an arterial shape and flow field coming from an FSI computation and performing two time steps of fluid mechanics computation with a prescribed arterial shape change, also coming from the FSI computation. We show how the DDPS algorithm performs in solving the equation system and demonstrate the scalability of the algorithm.
A generalized algorithm to design finite field normal basis multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1986-01-01
Finite field arithmetic logic is central in the implementation of some error-correcting coders and some cryptographic devices. There is a need for good multiplication algorithms which can be easily realized. Massey and Omura recently developed a new multiplication algorithm for finite fields based on a normal basis representation. Using the normal basis representation, the design of the finite field multiplier is simple and regular. The fundamental design of the Massey-Omura multiplier is based on a design of a product function. In this article, a generalized algorithm to locate a normal basis in a field is first presented. Using this normal basis, an algorithm to construct the product function is then developed. This design does not depend on particular characteristics of the generator polynomial of the field.
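For context, finite-field multiplication in the more familiar polynomial basis (here GF(2^8) with the AES reduction polynomial); this is not the paper's normal-basis Massey-Omura construction, in which squaring reduces to a cyclic shift of coordinates, but the underlying field arithmetic is the same:

```python
def gf256_mul(a, b, poly=0x11B):
    """Multiply in GF(2^8), polynomial basis, modulo x^8 + x^4 + x^3 + x + 1."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) the current shifted multiplicand
        a <<= 1
        if a & 0x100:            # reduce modulo the field polynomial
            a ^= poly
        b >>= 1
    return result

# {57} * {83} = {C1} is the worked example in the AES specification (FIPS-197).
print(hex(gf256_mul(0x57, 0x83)))   # 0xc1
```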
A Parallel Genetic Algorithm for Automated Electronic Circuit Design
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)
2000-01-01
We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.
On the design, analysis, and implementation of efficient parallel algorithms
Sohn, S.M.
1989-01-01
There is considerable interest in developing algorithms for a variety of parallel computer architectures. This is not a trivial problem, although for certain models great progress has been made. Recently, general-purpose parallel machines have become available commercially. These machines possess widely varying interconnection topologies and data/instruction access schemes. It is important, therefore, to develop methodologies and design paradigms for not only synthesizing parallel algorithms from initial problem specifications, but also for mapping algorithms between different architectures. This work has considered both of these problems. A systolic array consists of a large collection of simple processors that are interconnected in a uniform pattern. The author has studied in detail the problem of mapping systolic algorithms onto more general-purpose parallel architectures such as the hypercube. The hypercube architecture is notable due to its symmetry and high connectivity, characteristics which are conducive to the efficient embedding of parallel algorithms. Although the parallel-to-parallel mapping techniques have yielded efficient target algorithms, it is not surprising that an algorithm designed directly for a particular parallel model would achieve superior performance. In this context, the author has developed hypercube algorithms for some important problems in speech and signal processing, text processing, language processing and artificial intelligence. These algorithms were implemented on a 64-node NCUBE/7 hypercube machine in order to evaluate their performance.
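Ring-to-hypercube embeddings of the kind used when mapping systolic algorithms onto a hypercube typically rely on the reflected Gray code, since consecutive codewords differ in exactly one bit, so ring neighbors land on adjacent hypercube nodes:

```python
def gray(i):
    """Reflected binary Gray code of i."""
    return i ^ (i >> 1)

n = 8                                  # embed an 8-node ring into a 3-cube
ring_to_cube = [gray(i) for i in range(n)]
for i in range(n):
    a, b = ring_to_cube[i], ring_to_cube[(i + 1) % n]
    assert bin(a ^ b).count("1") == 1  # neighbors differ in exactly one bit
print(ring_to_cube)                    # [0, 1, 3, 2, 6, 7, 5, 4]
```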
Mechanical flexible joint design document
NASA Technical Reports Server (NTRS)
Daily, Vic
1993-01-01
The purpose of this report is to document the status of the Mechanical Flexible Joint (MFJ) Design Subtask with the intent of halting work on the design. Recommendations for future work are included in case the task is resumed. The MFJ is designed to eliminate two failure points of the current flex joint configuration, the inner 'tripod configuration' and the outer containment jacket. The MFJ will also be designed to flex 13.5 degrees and have three degrees of freedom. By having three degrees of freedom, the MFJ will allow the Low Pressure Fuel Duct to twist and remove the necessity to angulate the full 11 degrees currently required. The current flex joints are very labor intensive and very costly, and a simpler alternative is being sought. The MFJ is designed with a greater angular displacement, with three degrees of freedom, to reside in the same overall envelope, to meet the weight constraints of the current bellows, to be compatible with cryogenic fuels and oxidizers, and to be man-rated.
Genetic algorithms for the construction of D-optimal designs
Heredia-Langner, Alejandro; Carlyle, W M.; Montgomery, D C.; Borror, Connie M.; Runger, George C.
2003-01-01
Computer-generated designs are useful for situations where standard factorial, fractional factorial or response surface designs cannot be easily employed. Alphabetically-optimal designs are the most widely used type of computer-generated designs, and of these, the D-optimal (or D-efficient) class of designs is extremely popular. D-optimal designs are usually constructed by algorithms that sequentially add and delete points from a potential design based on a candidate set of points spaced over the region of interest. We present a technique to generate D-efficient designs using genetic algorithms (GA). This approach eliminates the need to explicitly consider a candidate set of experimental points, and it can handle highly constrained regions while maintaining a level of performance comparable to more traditional design construction techniques.
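The D-criterion these algorithms maximize is |X^T X| for the model matrix X of the chosen runs. A minimal sketch scoring a 2^2 factorial against a degenerate design for a main-effects model with columns (1, x1, x2); the example designs are invented:

```python
import numpy as np

def d_criterion(runs):
    """|X^T X| for a main-effects model matrix (intercept + factors)."""
    X = np.column_stack([np.ones(len(runs)), runs])
    return float(np.linalg.det(X.T @ X))

factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
degenerate = np.array([[-1, -1], [-1, -1], [1, 1], [1, 1]], dtype=float)
print(d_criterion(factorial), d_criterion(degenerate))  # factorial wins
```

A GA or exchange algorithm then searches over run sets to maximize this determinant, which the factorial design does here (the degenerate design confounds the two factors, so its determinant is zero).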
Evaluation of Mechanical Losses in Piezoelectric Plates using Genetic algorithm
NASA Astrophysics Data System (ADS)
Arnold, F. J.; Gonçalves, M. S.; Massaro, F. R.; Martins, P. S.
Numerical methods are used for the characterization of piezoelectric ceramics. A procedure based on a genetic algorithm is applied to find the physical coefficients and mechanical losses. The coefficients are estimated by minimizing a cost function. Electric impedances are calculated from Mason's model, with mechanical losses either constant or depending linearly on frequency. The results show that the percentage error of the electric impedance over the investigated frequency interval decreases when frequency-dependent mechanical losses are included in the model. For a more accurate characterization of piezoelectric ceramics, the mechanical losses should be considered frequency dependent.
Intelligent optimization algorithm for a large-scale structural design
NASA Astrophysics Data System (ADS)
Dominique, Stephane
The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, particularly design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the solutions closest to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates among the known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialize the population of an island of the genetic algorithm. The algorithm optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem. Then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm specifically created during this thesis to solve optimisation problems in the field of mechanical device structural design. The algorithm is named GATE, and is essentially a real number
The study on gear transmission multi-objective optimum design based on SQP algorithm
NASA Astrophysics Data System (ADS)
Li, Quancai; Qiao, Xuetao; Wu, Cuirong; Wang, Xingxing
2011-12-01
Gear mechanisms are the most widely used transmission mechanisms; however, the traditional design method is complex and not accurate. Optimization design is an effective way to solve these problems when applied to gear design. Among optimization software, MATLAB has obvious advantages for engineering and numerical calculation. Taking a single-stage gear transmission as an example, a mathematical model of the gear transmission system is established through analysis of the objective function, choice of design variables, and confirmation of the constraint conditions. The results show that optimization design with the SQP algorithm in MATLAB is efficient, reliable, and simple.
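The paper works in MATLAB; an equivalent SQP solver in Python is SciPy's SLSQP. The toy stand-in below minimizes a "volume" x1*x2 under an invented strength-style constraint and bounds, not the paper's actual gear model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained design: minimize x1*x2 subject to x1*x2^2 >= 10
# and 0.5 <= x1, x2 <= 10 (hypothetical stand-ins for gear volume
# and a bending-strength condition).
res = minimize(
    fun=lambda x: x[0] * x[1],
    x0=[2.0, 3.0],                       # feasible starting point
    method="SLSQP",
    bounds=[(0.5, 10.0), (0.5, 10.0)],
    constraints=[{"type": "ineq", "fun": lambda x: x[0] * x[1] ** 2 - 10.0}],
)
print(res.success, np.round(res.x, 3))   # active: x1 at its lower bound
```

Here the optimum sits at x1 = 0.5 with the strength constraint active (x2 = sqrt(20)), the same active-set behavior an SQP run on the gear model would exhibit.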
NASA Astrophysics Data System (ADS)
Grill, M.; Radovan, M.; Melchiorri, R.; Slanger, T. G.
2009-12-01
The Compact Echelle Spectrograph for Aeronomical Research (CESAR) covers the wavelength range from 300 to 1000 nm with a spectral resolution of 20,000. It is being constructed at SRI International with funds from the National Science Foundation's Major Research Instrumentation Program. Our goal is to significantly expand the range of upper atmospheric science investigations (nightglow, aurora, and dayglow emissions) by providing to aeronomers a high-throughput, high-dispersion, large-passband spectrograph by scaling an astronomical grade echelle spectrograph into a portable version capable of siting at multiple geophysically significant stations, heretofore only available to astronomers at a handful of large observatories. We present major aspects of the ongoing opto-mechanical design. The design incorporates lessons learned from the construction of the High Resolution Echelle Spectrometer (HiRES) and the Automated Planet Finder (APF) spectrometer, amongst others. All major optical components are mounted on kinematically fully determined hexapod structures, giving unprecedented three-dimensional adjustment capability. CESAR is designed to operate in an outdoors environment in remote locations such as the Poker Flat Research Range (PFRR) in Alaska. We present an enclosure concept that will allow CESAR to withstand the weather conditions found at such sites while still giving CESAR's fore-optics full access to the sky.
Acoustic design of rotor blades using a genetic algorithm
NASA Technical Reports Server (NTRS)
Wells, V. L.; Han, A. Y.; Crossley, W. A.
1995-01-01
A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading and blade-vortex interaction noise estimates. The paper presents solutions for several variations of the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced, and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced, at some expense to the power requirements.
Optimal fractional order PID design via Tabu Search based algorithm.
Ateş, Abdullah; Yeroglu, Celaleddin
2016-01-01
This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All parameter computations of the FOPID employ random initial conditions, using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method. PMID:26652128
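A bare-bones Tabu Search in the spirit of the TSA tuner, with single-coordinate neighborhood moves and a short-term tabu list; the quadratic cost is a stand-in for a real FOPID performance index:

```python
import random

random.seed(3)

def cost(x):
    # Stand-in for a controller performance index; optimum at (5, 5, 5).
    return sum((xi - 5) ** 2 for xi in x)

def tabu_search(dim=3, iters=200, tabu_len=20):
    current = [random.randint(0, 10) for _ in range(dim)]
    best = current[:]
    tabu = [tuple(current)]                     # recently visited points
    for _ in range(iters):
        neighbors = []
        for d in range(dim):
            for step in (-1, 1):                # single-coordinate moves
                cand = current[:]
                cand[d] = min(10, max(0, cand[d] + step))
                if tuple(cand) not in tabu:
                    neighbors.append(cand)
        if not neighbors:
            break
        current = min(neighbors, key=cost)      # best non-tabu move, even uphill
        tabu.append(tuple(current))
        tabu = tabu[-tabu_len:]                 # short-term memory
        if cost(current) < cost(best):
            best = current[:]
    return best

print(tabu_search())   # [5, 5, 5]
```

The tabu list is what lets the search climb out of local minima instead of cycling, the property that motivates TSA for multi-parameter FOPID tuning.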
Generation of Compliant Mechanisms using Hybrid Genetic Algorithm
NASA Astrophysics Data System (ADS)
Sharma, D.; Deb, K.
2014-10-01
A compliant mechanism is a single-piece elastic structure that deforms to perform an assigned task. In this work, compliant mechanisms are evolved using a constraint-based bi-objective optimization formulation that requires one user-defined parameter (η). This parameter limits the gap between a desired path and the actual path traced by the compliant mechanism. The non-linear and discrete optimization problems are solved using a hybrid Genetic Algorithm (GA) in which domain-specific initialization, a two-dimensional crossover operator and repair techniques are adopted. A bit-wise local search method is used with the elitist non-dominated sorting genetic algorithm to further refine the compliant mechanisms. Parallel computations are performed on a master-slave architecture to reduce the computation time. A parametric study is carried out for the η value, which suggests a range for evolving topologically different compliant mechanisms. The applied loads and boundary conditions of the compliant mechanisms are treated as variables that are evolved by the hybrid GA. The post-analysis of results reveals that the compliant mechanisms are always supported at unique locations that can evolve the non-dominated solutions.
Superspreading: mechanisms and molecular design.
Theodorakis, Panagiotis E; Müller, Erich A; Craster, Richard V; Matar, Omar K
2015-03-01
The intriguing ability of certain surfactant molecules to drive the superspreading of liquids to complete wetting on hydrophobic substrates is central to numerous applications that range from coating flow technology to enhanced oil recovery. Despite significant experimental efforts, the precise mechanisms underlying superspreading remain unknown to date. Here, we isolate these mechanisms by analyzing coarse-grained molecular dynamics simulations of surfactant molecules of varying molecular architecture and substrate affinity. We observe that for superspreading to occur, two key conditions must be simultaneously satisfied: the adsorption of surfactants from the liquid-vapor surface onto the three-phase contact line augmented by local bilayer formation. Crucially, this must be coordinated with the rapid replenishment of liquid-vapor and solid-liquid interfaces with surfactants from the interior of the droplet. This article also highlights and explores the differences between superspreading and conventional surfactants, paving the way for the design of molecular architectures tailored specifically for applications that rely on the control of wetting. PMID:25658859
Microgel mechanics in biomaterial design.
Saxena, Shalini; Hansen, Caroline E; Lyon, L Andrew
2014-08-19
The field of polymeric biomaterials has received much attention in recent years due to its potential for enhancing the biocompatibility of systems and devices applied to drug delivery and tissue engineering. Such applications continually push the definition of biocompatibility from relatively straightforward issues such as cytotoxicity to significantly more complex processes such as reducing foreign body responses or even promoting/recapitulating natural body functions. Hydrogels and their colloidal analogues, microgels, have been and continue to be heavily investigated as viable materials for biological applications because they offer numerous, facile avenues in tailoring chemical and physical properties to approach biologically harmonious integration. Mechanical properties in particular are recently coming into focus as an important manner in which biological responses can be altered. In this Account, we trace how mechanical properties of microgels have moved into the spotlight of research efforts with the realization of their potential impact in biologically integrative systems. We discuss early experiments in our lab and in others focused on synthetic modulation of particle structure at a rudimentary level for fundamental drug delivery studies. These experiments elucidated that microgel mechanics are a consequence of polymer network distribution, which can be controlled by chemical composition or particle architecture. The degree of deformability designed into the microgel allows for a defined response to an imposed external force. We have studied deformation in packed colloidal phases and in translocation events through confined pores; in all circumstances, microgels exhibit impressive deformability in response to their environmental constraints. Microgels further translate their mechanical properties when assembled in films to the properties of the bulk material. In particular, microgel films have been a large focus in our lab as building blocks for self
An optimal structural design algorithm using optimality criteria
NASA Technical Reports Server (NTRS)
Taylor, J. E.; Rossow, M. P.
1976-01-01
An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss is predicted, given the joint locations and loads.
Understanding Mechanical Design with Respect to Manufacturability
NASA Technical Reports Server (NTRS)
Mondell, Skyler
2010-01-01
At the NASA Prototype Development Laboratory at Kennedy Space Center, FL, several projects concerning different areas of mechanical design were undertaken in order to better understand the relationship between mechanical design and manufacturability. The assigned projects pertained specifically to the NASA Space Shuttle, Constellation, and Expendable Launch Vehicle programs. During the work term, mechanical design practices relating to manufacturing processes were learned and utilized in order to obtain an understanding of mechanical design with respect to manufacturability.
A VLSI design concept for parallel iterative algorithms
NASA Astrophysics Data System (ADS)
Sun, C. C.; Götze, J.
2009-05-01
Modern VLSI manufacturing technology has been shrinking rapidly to the nanoscale level. Integration with advanced nano-technology now makes it possible to realize advanced parallel iterative algorithms directly in hardware, which was almost impossible 10 years ago. In this paper, we discuss the influence of evolving VLSI technologies on iterative algorithms and present design strategies from an algorithmic and architectural point of view. When implementing an iterative algorithm on a multiprocessor array, there is a trade-off between the performance/complexity of the processors and the load/throughput of the interconnects. This is due to the behavior of iterative algorithms. For example, we could simplify the parallel implementation of the iterative algorithm (i.e., the processor elements of the multiprocessor array) in any way as long as convergence is guaranteed. However, the modification of the algorithm (processors) usually increases the number of required iterations, which also means that the switch activity of the interconnects increases. As an example we show that a 25×25 full Jacobi EVD array can be realized in a single FPGA device with the simplified μ-rotation CORDIC architecture.
A robust Feasible Directions algorithm for design synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1983-01-01
A nonlinear optimization algorithm is developed which combines the best features of the Method of Feasible Directions and the Generalized Reduced Gradient Method. This algorithm utilizes the direction-finding sub-problem from the Method of Feasible Directions to find a search direction which is equivalent to that of the Generalized Reduced Gradient Method, but does not require the addition of a large number of slack variables associated with inequality constraints. This method provides a core-efficient algorithm for the solution of optimization problems with a large number of inequality constraints. Further optimization efficiency is derived by introducing the concept of infrequent gradient calculations. In addition, it is found that the sensitivity of the optimum design to changes in the problem parameters can be obtained using this method without the need for second derivatives or Lagrange multipliers. A numerical example is given in order to demonstrate the efficiency of the algorithm and the sensitivity analysis.
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms
Garro, Beatriz A.; Vázquez, Roberto A.
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
Design of synthetic biological logic circuits based on evolutionary algorithm.
Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei
2013-08-01
The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits by topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico computer-based modelling technology has been verified, showing its great advantages for this purpose. PMID:23919952
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms are not notably effective on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity and to design weights that optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates. PMID:26448738
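The TF-IDF core of the approach above can be sketched in a few lines. This is only the scoring step on a toy English corpus; the paper's full pipeline (Chinese word segmentation, POS tagging, VSM similarity weighting) is omitted, and the documents here are invented examples.

```python
import math

# Toy corpus: each document is a token list.
docs = [
    "pointer arithmetic and array indexing in c".split(),
    "for loop while loop and control flow".split(),
    "pointer declaration and pointer dereference".split(),
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)               # term frequency in this doc
    df = sum(1 for d in corpus if term in d)      # document frequency
    idf = math.log(len(corpus) / df)              # df > 0: term comes from a doc
    return tf * idf

# Score every distinct term of the first document; in the AECKP setting the
# highest-scoring terms become candidate knowledge points for that document.
scores = {t: tf_idf(t, docs[0], docs) for t in set(docs[0])}
```

Terms appearing in every document (like "and") get an IDF of zero and are ranked out automatically, which is exactly why TF-IDF suppresses function words.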
Distributed genetic algorithms for the floorplan design problem
NASA Technical Reports Server (NTRS)
Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.
1991-01-01
Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to the traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate the advantages of this method over other methods, such as simulated annealing. The method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.
OSPREY: Protein Design with Ensembles, Flexibility, and Provable Algorithms
Gainza, Pablo; Roberts, Kyle E.; Georgiev, Ivelin; Lilien, Ryan H.; Keedy, Daniel A.; Chen, Cheng-Yu; Reza, Faisal; Anderson, Amy C.; Richardson, David C.; Richardson, Jane S.; Donald, Bruce R.
2013-01-01
We have developed a suite of protein redesign algorithms that improves realistic in silico modeling of proteins. These algorithms are based on three characteristics that make them unique: (1) improved flexibility of the protein backbone, protein side chains, and ligand to accurately capture the conformational changes that are induced by mutations to the protein sequence; (2) modeling of proteins and ligands as ensembles of low-energy structures to better approximate binding affinity; and (3) a globally-optimal protein design search, guaranteeing that the computational predictions are optimal with respect to the input model. Here, we illustrate the importance of these three characteristics. We then describe OSPREY, a protein redesign suite that implements our protein design algorithms. OSPREY has been used prospectively, with experimental validation, in several biomedically-relevant settings. We show in detail how OSPREY has been used to predict resistance mutations and explain why improved flexibility, ensembles, and provability are essential for this application. PMID:23422427
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
Design of PID-type controllers using multiobjective genetic algorithms.
Herreros, Alberto; Baeyens, Enrique; Perán, José R
2002-10-01
The design of a PID controller is a multiobjective problem. A plant and a set of specifications to be satisfied are given. The designer has to adjust the parameters of the PID controller such that the feedback interconnection of the plant and the controller satisfies the specifications. These specifications are usually competitive and any acceptable solution requires a tradeoff among them. An approach for adjusting the parameters of a PID controller based on multiobjective optimization and genetic algorithms is presented in this paper. The MRCD (multiobjective robust control design) genetic algorithm has been employed. The approach can be easily generalized to design multivariable coupled and decentralized PID loops and has been successfully validated for a large number of experimental cases. PMID:12398277
USING GENETIC ALGORITHMS TO DESIGN ENVIRONMENTALLY FRIENDLY PROCESSES
Genetic algorithm calculations are applied to the design of chemical processes to achieve improvements in environmental and economic performance. By finding the set of Pareto (i.e., non-dominated) solutions one can see how different objectives, such as environmental and economic ...
Designing an Algorithm Animation System To Support Instructional Tasks.
ERIC Educational Resources Information Center
Hamilton-Taylor, Ashley George; Kraemer, Eileen
2002-01-01
The authors are conducting a study of instructors teaching data structure and algorithm topics, with a focus on the use of diagrams and tracing. The results of this study are being used to inform the design of the Support Kit for Animation (SKA). This article describes a preliminary version of SKA, and possible usage scenarios. (Author/AEF)
Optimal design of plasmonic waveguide using multiobjective genetic algorithm
NASA Astrophysics Data System (ADS)
Jung, Jaehoon
2016-01-01
An approach for multiobjective optimal design of a plasmonic waveguide is presented. We use a multiobjective extension of a genetic algorithm to find the Pareto-optimal geometries. The design variables are the geometrical parameters of the waveguide. The objective functions are chosen as the figure of merit defined as the ratio between the propagation distance and effective mode size and the normalized coupling length between adjacent waveguides at the telecom wavelength of 1550 nm.
Optical design with the aid of a genetic algorithm.
van Leijenhorst, D C; Lucasius, C B; Thijssen, J M
1996-01-01
Natural evolution is widely accepted as being the process underlying the design and optimization of the sensory functions of biological organisms. Using a genetic algorithm, this process is extended to the automatic optimization and design of optical systems, e.g. as used in astronomical telescopes. The results of this feasibility study indicate that various types of aberrations can be corrected quickly and simultaneously, even on small computers. PMID:8924643
A new collage steganographic algorithm using cartoon design
NASA Astrophysics Data System (ADS)
Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip
2014-02-01
Existing collage steganographic methods suffer from low payload of embedding messages. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm shows significantly higher capacity of embedding messages compared with existing collage steganographic methods.
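The LSB-embedding core mentioned above is easy to illustrate. This is a minimal embed/extract pair on a flat list of 8-bit pixel values; the paper's per-object permutations and cartoon-collage composition are omitted, and all names here are illustrative.

```python
# Embed message bits into the least significant bit of each pixel.
def embed(pixels, message_bits):
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit      # overwrite only the lowest bit
    return out

# Recover the first n_bits message bits from the stego pixels.
def extract(pixels, n_bits):
    return [p & 1 for p in pixels[:n_bits]]

cover = [120, 37, 200, 65, 18, 91, 244, 3]
bits = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed(cover, bits)
```

Because only the lowest bit changes, each pixel value moves by at most 1, which is why LSB embedding is visually imperceptible at low payloads.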
Martinez-Canales, Monica L.; Heaphy, Robert; Gramacy, Robert B.; Taddy, Matt; Chiesa, Michael L.; Thomas, Stephen W.; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Trucano, Timothy Guy; Gray, Genetha Anne
2006-11-01
This project focused on research and algorithmic development in optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking into account uncertainty, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates. We developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive with or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models. We developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB® and initial DAKOTA implementations of our algorithms.
Mechanical verification of a schematic Byzantine clock synchronization algorithm
NASA Technical Reports Server (NTRS)
Shankar, Natarajan
1991-01-01
Schneider generalizes a number of protocols for Byzantine fault tolerant clock synchronization and presents a uniform proof for their correctness. The authors present a machine checked proof of this schematic protocol that revises some of the details in Schneider's original analysis. The verification was carried out with the EHDM system developed at the SRI Computer Science Laboratory. The mechanically checked proofs include the verification that the egocentric mean function used in Lamport and Melliar-Smith's Interactive Convergence Algorithm satisfies the requirements of Schneider's protocol.
Design of transonic airfoils and wings using a hybrid design algorithm
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Smith, Leigh A.
1987-01-01
A method has been developed for designing airfoils and wings at transonic speeds. It utilizes a hybrid design algorithm in an iterative predictor/corrector approach, alternating between an analysis code and a design module. This method has been successfully applied to a variety of airfoil and wing design problems, including both transport and highly swept fighter wing configurations. An efficient approach to viscous airfoil design and the effect of including static aeroelastic deflections in the wing design process are also illustrated.
Full design of fuzzy controllers using genetic algorithms
NASA Technical Reports Server (NTRS)
Homaifar, Abdollah; Mccormick, ED
1992-01-01
This paper examines the applicability of genetic algorithms (GA) in the complete design of fuzzy logic controllers. While GA has been used before in the development of rule sets or high performance membership functions, the interdependence between these two components dictates that they should be designed together simultaneously. GA is fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. We show the application of this new method to the development of a cart controller.
A new algorithm for modeling friction in dynamic mechanical systems
NASA Technical Reports Server (NTRS)
Hill, R. E.
1988-01-01
A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods in which the friction effect is assumed a constant force, or torque, in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from a finite integration interval used in connection with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation in connection with an external integration process provides an accurate determination of both static and Coulomb friction forces and resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation with various programming languages such as FORTRAN or C, as well as with other simulation programs.
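The stick/slip logic described above can be sketched as a single explicit integration step: if the mass is at rest and the applied force cannot break static friction, the friction force exactly cancels it (no motion); otherwise kinetic (Coulomb) friction opposes the velocity, and the step is clamped so the integrator never coasts through zero velocity. The parameter values and function names are illustrative, not the paper's implementation.

```python
def step(v, f_applied, m=1.0, f_static=2.0, f_coulomb=1.5, dt=0.01):
    """Advance velocity v by one time step dt under applied force and friction."""
    if v == 0.0:
        if abs(f_applied) <= f_static:
            return 0.0                        # stiction: friction cancels the force
        f_fric = -f_coulomb if f_applied > 0 else f_coulomb
    else:
        f_fric = -f_coulomb if v > 0 else f_coulomb
    v_new = v + dt * (f_applied + f_fric) / m
    if v != 0.0 and v * v_new < 0:
        return 0.0                            # stop at zero instead of overshooting
    return v_new
```

The zero-crossing clamp is the key point of the abstract: a conventional constant-opposing-force model with a finite integration interval would oscillate around zero velocity instead of sticking.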
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
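The selection criterion in the abstract above (pick the experiment whose predicted outcomes have maximum Shannon entropy over the candidate models) can be sketched directly; the nested-sampling-style search itself is not reproduced here, and the models and experiments below are toy stand-ins.

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    # Entropy (bits) of the empirical distribution of predicted outcomes.
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Four candidate models of the system, distinguished by a slope parameter.
models = [lambda x, a=a: round(a * x) for a in (0.5, 1.0, 1.5, 2.0)]

def most_informative(experiments):
    # An experiment where the models disagree most is maximally informative.
    return max(experiments, key=lambda x: shannon_entropy([m(x) for m in models]))

# At x=0 all models predict the same value (entropy 0); larger x separates them.
best = most_informative([0, 1, 4])
```

This is the brute-force scoring that nested entropy sampling is designed to avoid in high-dimensional experiment spaces; the criterion, however, is the same.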
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
Algorithm Of Revitalization Programme Design For Housing Estates
NASA Astrophysics Data System (ADS)
Ostańska, Anna
2015-09-01
Demographic problems, the obsolescence of existing buildings, an unstable economy, and a misunderstanding of the mechanisms that turn city quarters into areas in need of intervention result in the implementation of improvement measures that prove inadequate. The paper puts forward an algorithm for designing revitalization programmes for housing estates and presents its implementation. It also shows the effects of three-way diagnostic tests run periodically over 10 years, in correlation with the concept of settlement management.
Designing a competent simple genetic algorithm for search and optimization
NASA Astrophysics Data System (ADS)
Reed, Patrick; Minsker, Barbara; Goldberg, David E.
2000-12-01
Simple genetic algorithms have been used to solve many water resources problems, but specifying the parameters that control how adaptive search is performed can be a difficult and time-consuming trial-and-error process. However, theoretical relationships for population sizing and timescale analysis have been developed that can provide pragmatic tools for vastly limiting the number of parameter combinations that must be considered. The purpose of this technical note is to summarize these relationships for the water resources community and to illustrate their practical utility in a long-term groundwater monitoring design application. These relationships, which model the effects of the primary operators of a simple genetic algorithm (selection, recombination, and mutation), provide a highly efficient method for ensuring convergence to near-optimal or optimal solutions. Application of the method to a monitoring design test case identified robust parameter values using only three trial runs.
Mechanical considerations and design skills.
Alvis, Robert L.
2008-03-01
The purpose of the report is to provide experienced-based insights into design processes that will benefit designers beginning their employment at Sandia National Laboratories or those assuming new design responsibilities. The main purpose of this document is to provide engineers with the practical aspects of system design. The material discussed here may not be new to some readers, but some of it was to me. Transforming an idea to a design to solve a problem is a skill, and skills are similar to history lessons. We gain these skills from experience, and many of us have not been fortunate enough to grow in an environment that provided the skills that we now need. I was fortunate to grow up on a farm where we had to learn how to maintain and operate several different kinds of engines and machines. If you are like me, my formal experience is partially based upon the two universities from which I graduated, where few practical applications of the technologies were taught. What was taught was mainly theoretical, and few instructors had practical experience to offer the students. I understand this, as students have their hands full just to learn the theoretical. The practical part was mainly left up to 'on the job experience'. However, I believe it is better to learn the practical applications early and apply them quickly 'on the job'. System design engineers need to know several technical things, both in and out of their field of expertise. An engineer is not expected to know everything, but he should know when to ask an expert for assistance. This 'expert' can be in any field, whether it is in analyses, drafting, machining, material properties, testing, etc. The best expert is a person who has practical experience in the area of needed information, and consulting with that individual can be the best and quickest way for one to learn. If the information provided here can improve your design skills and save one design from having a problem, save cost of development, or
Application of a genetic algorithm to wind turbine design
Selig, M.S.; Coverstone-Carroll, V.L.
1995-09-01
This paper presents an optimization procedure for stall-regulated horizontal-axis wind turbines. A hybrid approach is used that combines the advantages of a genetic algorithm and an inverse design method. This method is used to determine the optimum blade pitch and blade chord and twist distributions that maximize the annual energy production. To illustrate the method, a family of 25 wind turbines was designed to examine the sensitivity of annual energy production to changes in the rotor blade length and peak rotor power. Trends are revealed that should aid in the design of new rotors for existing turbines. In the second application, a series of five wind turbines was designed to determine the benefits of specifically tailoring wind turbine blades for the average wind speed at a particular site. The results have important practical implications related to rotors designed for the Midwest versus those for sites where the average wind speed may be greater.
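The hybrid procedure above couples a genetic algorithm with an inverse design method. As a hedged illustration of the genetic-algorithm half only, here is a minimal real-coded GA sketch; the population size, operators, and toy fitness are our own assumptions, not the paper's:

```python
import random

def genetic_algorithm(fitness, dim, pop_size=30, gens=60, pm=0.1, seed=0):
    """Minimal real-coded GA: binary tournament selection, uniform
    crossover, Gaussian mutation on [0, 1] genes. Maximizes `fitness`.
    All operator choices here are illustrative assumptions."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)  # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            # uniform crossover: each gene from either parent
            child = [p1[i] if rng.random() < 0.5 else p2[i] for i in range(dim)]
            # Gaussian mutation, clipped to the gene bounds
            child = [min(1.0, max(0.0, g + rng.gauss(0, 0.05)))
                     if rng.random() < pm else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In the paper's setting, the fitness evaluation would instead call the inverse design method and the annual-energy-production model rather than a closed-form function.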
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices (matrices with entries from a ring), Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
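The error-free approach described above replaces floating-point arithmetic with exact rational or symbolic arithmetic. As a hedged sketch of the elementary step underlying Euclidean elimination, here is exact polynomial division over the rationals using Python's `fractions.Fraction`; the coefficient-list representation and function name are ours, not the paper's:

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Exact quotient and remainder of two polynomials over Q.
    Polynomials are coefficient lists, lowest degree first.
    Assumes `den` is nonzero; a sketch, not a full CAS routine."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    while den and den[-1] == 0:          # normalize divisor degree
        den.pop()
    quot = [Fraction(0)] * max(len(num) - len(den) + 1, 1)
    rem = num[:]
    while len(rem) >= len(den) and any(rem):
        while rem and rem[-1] == 0:      # drop zero leading terms
            rem.pop()
        if len(rem) < len(den):
            break
        shift = len(rem) - len(den)
        factor = rem[-1] / den[-1]       # exact rational leading quotient
        quot[shift] = factor
        for i, c in enumerate(den):      # subtract factor * x^shift * den
            rem[i + shift] -= factor * c
    return quot, rem
```

Because every coefficient stays an exact rational, no rounding error accumulates; the price, as the abstract notes, is possible intermediate expression swell in coefficient size.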
Orthogonalizing EM: A design-based least squares algorithm
Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.
2016-01-01
We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558
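As a hedged sketch of the OEM idea for ordinary least squares: once the design is augmented to an orthogonal one, the EM update reduces to the fixed-point iteration beta <- beta + X^T(y - X beta)/d, where d is any upper bound on the largest eigenvalue of X^T X. The reduced form below, the use of the trace as a cheap bound, and all names are our reading of the abstract, not the authors' code:

```python
def oem_ols(X, y, iters=2000):
    """Orthogonalizing-EM iteration for ordinary least squares in its
    reduced fixed-point form. X is a list of rows, y a list of targets.
    d must dominate the top eigenvalue of X^T X; the trace (sum of
    squared entries of X) is a simple valid bound (our assumption)."""
    n, p = len(X), len(X[0])
    d = sum(X[i][j] ** 2 for i in range(n) for j in range(p))
    beta = [0.0] * p
    for _ in range(iters):
        # residual r = y - X beta
        resid = [y[i] - sum(X[i][j] * beta[j] for j in range(p))
                 for i in range(n)]
        # EM step: beta <- beta + X^T r / d
        beta = [beta[j] + sum(X[i][j] * resid[i] for i in range(n)) / d
                for j in range(p)]
    return beta
```

Each iteration costs only matrix-vector products, which is the source of the scalability the abstract claims for n much larger than p.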
Robust Optimization Design Algorithm for High-Frequency TWTs
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Chevalier, Christine T.
2010-01-01
Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers; thus, this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.
Thrust vector control algorithm design for the Cassini spacecraft
NASA Technical Reports Server (NTRS)
Enright, Paul J.
1993-01-01
This paper describes a preliminary design of the thrust vector control algorithm for the interplanetary spacecraft, Cassini. Topics of discussion include flight software architecture, modeling of sensors, actuators, and vehicle dynamics, and controller design and analysis via classical methods. Special attention is paid to potential interactions with structural flexibilities and propellant dynamics. Controller performance is evaluated in a simulation environment built around a multi-body dynamics model, which contains nonlinear models of the relevant hardware and preliminary versions of supporting attitude determination and control functions.
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and
An efficient parallel algorithm for accelerating computational protein design
Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang
2014-01-01
Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we make some efforts to address the memory-exceeding problem in A* search. As a result, our enhancements can achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm can be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/~compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
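The paper above parallelizes A* on a GPU; as a hedged baseline, the sequential A* it builds on can be sketched as follows. The admissible-heuristic requirement is the same one that makes the GMEC guarantee possible; the interface and names are ours, not the OSPREY code:

```python
import heapq

def astar(start, goal, neighbors, h):
    """Sequential A* search. `neighbors(node)` yields (next, cost)
    pairs; `h` must never overestimate the true remaining cost
    (admissibility), which guarantees an optimal result."""
    open_heap = [(h(start), 0, start)]   # entries: (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g                     # optimal cost to the goal
        if g > best_g.get(node, float("inf")):
            continue                     # stale heap entry, skip
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                          # goal unreachable
```

In the protein design setting, nodes would be partial rotamer assignments and `h` a per-residue lower bound on the remaining energy; the grid usage in the test below is purely illustrative.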
Algorithm development for the control design of flexible structures
NASA Technical Reports Server (NTRS)
Skelton, R. E.
1983-01-01
The critical problems associated with the control of highly damped flexible structures are outlined. The practical problems include: high performance; assembly in space; configuration changes; on-line controller software design; and lack of test data. Underlying all of these problems is the central problem of modeling errors. To justify the expense of a space structure, the performance requirements will necessarily be very severe. On the other hand, the absence of economical tests precludes the availability of reliable data before flight. A design algorithm is offered which: (1) provides damping for a larger number of modes than the optimal attitude controller controls; (2) coordinates the rate of feedback design with the attitude control design by use of a similar cost function; and (3) provides model reduction and controller reduction decisions which are systematically connected to the mathematical statement of the control objectives and the disturbance models.
Linear vs. function-based dose algorithm designs.
Stanford, N
2011-03-01
The performance requirements prescribed in IEC 62387-1, 2007 recommend linear, additive algorithms for external dosimetry [IEC. Radiation protection instrumentation--passive integrating dosimetry systems for environmental and personal monitoring--Part 1: General characteristics and performance requirements. IEC 62387-1 (2007)]. Neither of the two current standards for performance of external dosimetry in the USA address the additivity of dose results [American National Standards Institute, Inc. American National Standard for dosimetry personnel dosimetry performance criteria for testing. ANSI/HPS N13.11 (2009); Department of Energy. Department of Energy Standard for the performance testing of personnel dosimetry systems. DOE/EH-0027 (1986)]. While there are significant merits to adopting a purely linear solution to estimating doses from multi-element external dosemeters, differences in the standards result in technical as well as perception challenges in designing a single algorithm approach that will satisfy both IEC and USA external dosimetry performance requirements. The dosimetry performance testing standards in the USA do not incorporate type testing, but rely on biennial performance tests to demonstrate proficiency in a wide range of pure and mixed fields. The test results are used exclusively to judge the system proficiency, with no specific requirements on the algorithm design. Technical challenges include mixed beta/photon fields with a beta dose as low as 0.30 mSv mixed with 0.05 mSv of low-energy photons. Perception-based challenges, resulting from over 20 y of experience with this type of performance testing in the USA, include the common belief that the overall quality of the dosemeter performance can be judged from performance to pure fields. This paper presents synthetic testing results from currently accredited function-based algorithms and newly developed purely linear algorithms. A comparison of the performance data highlights the benefits of each
Conceptual Space Systems Design Using Meta-Heuristic Algorithms
NASA Astrophysics Data System (ADS)
Kim, Byoungsoo; Morgenthaler, George W.
2002-01-01
easily and explicitly by new design-to-cost philosophy, "faster, better, cheaper" (fast-track, innovative, lower-cost, small-sat). The objective of the Space Systems Design has moved from maximizing space mission performance under weak time and cost constraints (almost regardless of cost) but with technology risk constraints, to maximizing mission goals under cost and schedule constraints but with prudent technology risk constraints, or maximizing space mission performance per unit cost. Within this mindset, Conceptual Space Systems Design models were formulated as constrained combinatorial optimization problems with estimated Total Mission Cost (TMC) as the objective function to be minimized and subsystem trade-offs as decision variables in the design space, using parametric estimating relationships (PERs) and cost estimating relationships (CERs). Here a constrained combinatorial optimization "solution" is defined as achieving the most favorable alternative for the system on the basis of the decision-making design criteria. Two non-traditional meta-heuristic optimization algorithms, Genetic Algorithms (GAs) and Simulated Annealing (SA), were used to solve the formulated combinatorial optimization model for the Conceptual Space Systems Design. GAs and SA were demonstrated on SAMPEX. The model simulation statistics show that the estimated TMCs obtained by GAs and SA are statistically equal and consistent. These statistics also show that Conceptual Space Systems Design Model can be used as a guidance tool to evaluate and validate space research proposals. Also, the non-traditional meta-heuristic constrained optimization techniques, GAs and SA, can be applied to all manner of space, civil or commercial design problems.
Hernández-Ocaña, Betania; Pozos-Parra, Ma. Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara
2016-01-01
This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators is intended to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, with respect to those mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem. PMID:27057156
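A hedged sketch of the two-swim idea described above: each chemotaxis step tries one exploratory long jump and one fine local move, and the bacterium keeps the best of the three positions. The step sizes, the tumble construction, and all names are our own assumptions, not the paper's operators:

```python
import random

def two_swim_chemotaxis(f, x, rng, step_explore=0.8, step_exploit=0.05):
    """One chemotaxis step with an exploratory swim (long jump) and an
    exploitative swim (fine local move). Minimizes f; greedy keep-best."""
    def tumble(scale):
        # random unit direction scaled by the swim's step size
        d = [rng.uniform(-1.0, 1.0) for _ in x]
        norm = sum(v * v for v in d) ** 0.5 or 1.0
        return [xi + scale * v / norm for xi, v in zip(x, d)]
    return min((x, tumble(step_explore), tumble(step_exploit)), key=f)

def run_chemotaxis(f, x0, steps=200, seed=3):
    """Drive repeated two-swim steps from x0 (illustrative driver)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x = two_swim_chemotaxis(f, x, rng)
    return x
```

Because the bacterium never accepts a worse position here, the sketch is monotone; the full BFOA additionally includes reproduction and elimination-dispersal phases not shown.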
Neural-network-biased genetic algorithms for materials design
NASA Astrophysics Data System (ADS)
Patra, Tarak; Meenakshisundaram, Venkatesh; Simmons, David
Machine learning tools have been progressively adopted by the materials science community to accelerate design of materials with targeted properties. However, in the search for new materials exhibiting properties and performance beyond that previously achieved, machine learning approaches are frequently limited by two major shortcomings. First, they are intrinsically interpolative. They are therefore better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require the availability of large datasets, which in some fields are not available and would be prohibitively expensive to produce. Here we describe a new strategy for combining genetic algorithms, neural networks and other machine learning tools, and molecular simulation to discover materials with extremal properties in the absence of pre-existing data. Predictions from progressively constructed machine learning tools are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct molecular dynamics simulation. We survey several initial materials design problems we have addressed with this framework and compare its performance to that of standard genetic algorithm approaches. We acknowledge the W. M. Keck Foundation for support of this work.
The Mechanical Design of Nacre
NASA Astrophysics Data System (ADS)
Jackson, A. P.; Vincent, J. F. V.; Turner, R. M.
1988-09-01
Mother-of-pearl (nacre) is a platelet-reinforced composite, highly filled with calcium carbonate (aragonite). The Young modulus, determined from beams of a span-to-depth ratio of no less than 15 (a necessary precaution), is of the order of 70 GPa (dry) and 60 GPa (wet), much higher than previously recorded values. These values can be derived from 'shear-lag' models developed for platey composites, suggesting that nacre is a near-ideal material. The tensile strength of nacre is of the order of 170 MPa (dry) and 140 MPa (wet), values which are best modelled assuming that pull-out of the platelets is the main mode of failure. In three-point bending, depending on the span-to-depth ratio and degree of hydration, the work to fracture across the platelets varies from 350 to 1240 J m^-2. In general, the effect of water is to increase the ductility of nacre and increase the toughness almost tenfold by the associated introduction of plastic work. The pull-out model is sufficient to account for the toughness of dry nacre, but accounts for only a third of the toughness of wet nacre. The additional contribution probably comes from debonding within the thin layer of matrix material. Electron microscopy reveals that the ductility of wet nacre is caused by cohesive fracture along platelet lamellae at right angles to the main crack. The matrix appears to be well bonded to the lamellae, enabling the matrix to be stretched across the delamination cracks without breaking, thereby sustaining a force across a wider crack. Such a mechanism also explains why toughness is dependent on the span-to-depth ratio of the test piece. With this last observation as a possible exception, nacre does not employ any really novel mechanisms to achieve its mechanical properties. It is simply 'well made'. The importance of nacre to the mollusc depends both on the material and the size of the shell. Catastrophic failure will be very likely in whole, undamaged shells which behave like unnotched beams at
Direct simulation Monte Carlo method with a focal mechanism algorithm
NASA Astrophysics Data System (ADS)
Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung
2015-01-01
To simulate the observation of the radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). DSMC-2 shows results similar to or more reliable than those of DSMC-1 for events with 12 or more recording stations, when arrivals at hypocentral distances of less than 80 km are weighted twice. Not only the number of stations, but also other factors such as rough topography, magnitude of event, and the analysis method influence the reliability of DSMC-2. The most reliable result by DSMC-2 is obtained with the best azimuthal coverage by the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than those of DSMC-1 to capture a sufficient number of arrived particles in the small-sized receiver.
ARGOS laser system mechanical design
NASA Astrophysics Data System (ADS)
Deysenroth, M.; Honsberg, M.; Gemperlein, H.; Ziegleder, J.; Raab, W.; Rabien, S.; Barl, L.; Gässler, W.; Borelli, J. L.
2014-07-01
ARGOS, a multi-star adaptive optics system, is designed for the wide-field imager and multi-object spectrograph LUCI on the LBT (Large Binocular Telescope). Based on Rayleigh scattering, the laser constellation images 3 artificial stars (at 532 nm) for each of the 2 eyes of the LBT, focused at a height of 12 km (Ground Layer Adaptive Optics). The stars are nominally positioned on a circle 2' in radius, but each star can be moved by up to 0.5' in any direction. The following main subsystems are necessary to meet these needs: 1. A laser system with 3 lasers (Nd:YAG, ~18 W each) delivering the strong collimated light indispensable for laser guide stars (LGS). 2. The launch system, a 40 cm telescope that projects 3 beams per main mirror to the sky. 3. The wavefront sensor with a dichroic mirror. 4. The dichroic mirror unit to grab and interpret the data. 5. A calibration unit to adjust the system independently, also during daytime. 6. Racks and platforms for the WFS units. 7. Platforms and ladders for secure access. This paper mainly demonstrates how the ARGOS laser system is configured and designed to support all other subsystems.
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
Design principles and algorithms for automated air traffic management
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
1995-01-01
This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.
Conceptual space systems design using meta-heuristic algorithms
NASA Astrophysics Data System (ADS)
Kim, Byoungsoo
criteria. Two meta-heuristic optimization algorithms, Genetic Algorithms (GAs) and Simulated Annealing (SA), were used to optimize the formulated (simply bounded) Constrained Combinatorial Conceptual Space Systems Design Model. GAs and SA were demonstrated on the SAMPEX (Solar Anomalous & Magnetospheric Particle Explorer) Space System. The Conceptual Space Systems Design Model developed in this thesis can be used as an assessment tool to evaluate and validate Space System proposals.
Design Principles and Algorithms for Air Traffic Arrival Scheduling
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Itoh, Eri
2014-01-01
This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
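A hedged toy sketch of the FCFS protocol described above: aircraft are taken in order of estimated arrival time and each is assigned the runway offering the earliest feasible landing slot. The fixed separation time, the runway-choice rule, and all names are our simplifications; the TMA's real scheduler also handles aircraft-class-dependent separations and the delay allocation between airspace regions:

```python
def fcfs_schedule(etas, runways=2, sep=90.0):
    """FCFS arrival scheduling toy model. `etas` maps aircraft id to
    estimated time of arrival (seconds); `sep` is a single uniform
    landing-separation time (an illustrative assumption). Returns
    (aircraft, runway, scheduled_time) tuples in landing order."""
    free_at = [0.0] * runways            # earliest next slot per runway
    schedule = []
    for aid, eta in sorted(etas.items(), key=lambda kv: kv[1]):
        # pick the runway that yields the earliest feasible slot
        r = min(range(runways), key=lambda i: max(free_at[i], eta))
        slot = max(free_at[r], eta)      # cannot land before the ETA
        free_at[r] = slot + sep
        schedule.append((aid, r, slot))
    return schedule
```

The delay for each aircraft is simply `slot - eta`; in the TMA context that delay would then be split between high-altitude and terminal airspace as the report describes.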
Drought Adaptation Mechanisms Should Guide Experimental Design.
Gilbert, Matthew E; Medina, Viviana
2016-08-01
The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism. PMID:27090148
Algorithm To Design Finite-Field Normal-Basis Multipliers
NASA Technical Reports Server (NTRS)
Wang, Charles C.
1988-01-01
Way found to exploit Massey-Omura multiplication algorithm. Generalized algorithm locates normal basis in Galois field GF(2m) and enables development of another algorithm to construct product function.
Ishibuchi, Hisao; Sudo, Takahiko; Nojima, Yusuke
2016-01-01
In interactive evolutionary computation (IEC), each solution is evaluated by a human user. Usually the total number of examined solutions is very small. In some applications such as hearing aid design and music composition, only a single solution can be evaluated at a time by a human user. Moreover, accurate and precise numerical evaluation is difficult. Based on these considerations, we formulated an IEC model with the minimum requirement for fitness evaluation ability of human users under the following assumptions: They can evaluate only a single solution at a time, they can memorize only a single previous solution they have just evaluated, their evaluation result on the current solution is whether it is better than the previous one or not, and the best solution among the evaluated ones should be identified after a pre-specified number of evaluations. In this paper, we first explain our IEC model in detail. Next we propose a ([Formula: see text])ES-style algorithm for our IEC model. Then we propose an offline meta-level approach to automated algorithm design for our IEC model. The main feature of our approach is the use of a different mechanism (e.g., mutation, crossover, random initialization) to generate each solution to be evaluated. Through computational experiments on test problems, our approach is compared with the ([Formula: see text])ES-style algorithm where a solution generation mechanism is pre-specified and fixed throughout the execution of the algorithm. PMID:27026888
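The abstract leaves the exact ES variant in a formula placeholder, but the stated assumptions (one solution evaluated at a time, memory of only the previous solution, a single better/worse judgment per comparison) admit a (1+1)-ES-style sketch. The `is_better` oracle below stands in for the human user and is an assumption, as are the step size and dimensionality:

```python
import random

def one_plus_one_es(is_better, n_evals, dim=2, sigma=0.3, seed=0):
    """(1+1)-ES sketch under the IEC model's minimum-requirement assumptions:
    the 'user' (is_better) only compares the current solution with the
    previous one, and the accepted solution is always the best seen."""
    rng = random.Random(seed)
    prev = [rng.uniform(-1, 1) for _ in range(dim)]
    for _ in range(n_evals - 1):
        # Gaussian perturbation of the single remembered solution.
        cand = [v + rng.gauss(0, sigma) for v in prev]
        if is_better(cand, prev):  # one binary judgment per evaluation
            prev = cand
    return prev

# Stand-in for the human user: prefers points closer to the origin.
better = lambda a, b: sum(v * v for v in a) < sum(v * v for v in b)
sol = one_plus_one_es(better, n_evals=200)
```

Because candidates are accepted only when judged better, the remembered solution is also the best among all evaluated ones, satisfying the model's final requirement without extra memory.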
Application of Simulated Annealing and Related Algorithms to TWTA Design
NASA Technical Reports Server (NTRS)
Radke, Eric M.
2004-01-01
Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. The algorithm starts at a random point, where the objective function is evaluated. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function values (Delta)E is negative, the proposed new point is accepted. If (Delta)E is positive, the proposed new point is accepted according to the Metropolis criterion: rho((Delta)E) = exp(-(Delta)E/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is
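The Metropolis acceptance rule and per-temperature Markov chain described above can be sketched as follows; the test surface, chain length, step size, and cooling schedule are illustrative assumptions, not taken from the report:

```python
import math
import random

def simulated_annealing(f, x0, temps, step=0.5, chain_len=50, seed=1):
    """SA sketch: one Markov chain of points per temperature, accepting
    uphill moves with the Metropolis probability exp(-dE / T)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for T in temps:                       # monotonically decreasing schedule
        for _ in range(chain_len):        # Markov chain at this temperature
            y = x + rng.uniform(-step, step)   # stochastic perturbation
            dE = f(y) - fx
            if dE < 0 or rng.random() < math.exp(-dE / T):
                x, fx = y, fx + dE        # accept the proposed point
                if fx < fbest:
                    best, fbest = x, fx
    return best, fbest

# Multimodal 1-D test surface with local minima; global minimum near x = 0.
f = lambda x: x * x + 3 * math.sin(5 * x)
xbest, fxbest = simulated_annealing(f, x0=4.0, temps=[10 * 0.8**k for k in range(40)])
```

The high-temperature chains accept many uphill moves, which is exactly the escape-from-local-minima behavior the abstract emphasizes; as T shrinks, exp(-dE/T) vanishes for uphill proposals and the search becomes greedy.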
Evolutionary Design of Rule Changing Artificial Society Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Wu, Yun; Kanoh, Hitoshi
Socioeconomic phenomena, cultural progress and political organization have recently been studied by creating artificial societies consisting of simulated agents. In this paper we propose an efficient method, based on genetic algorithms (GAs), for designing the action rules of agents that will constitute an artificial society meeting a specified demand. In the proposed method, each chromosome in the GA population represents a candidate set of action rules and the number of rule iterations. Whereas a conventional method applies distinct rules in order of precedence, the present method applies a set of rules repeatedly for a certain period, aiming at both stable evolution of the agent population and continuous action arising from it. Experimental results using the artificial society show that the present method can generate, with high probability, an artificial society that meets a given demand.
Optimal Design of RF Energy Harvesting Device Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Mori, T.; Sato, Y.; Adriano, R.; Igarashi, H.
2015-11-01
This paper presents the optimal design of an RF energy harvesting device using a genetic algorithm (GA). In the present RF harvester, a planar spiral antenna (PSA) is loaded with matching and rectifying circuits. In the first stage of the optimal design, the shape parameters of the PSA are optimized using the GA. Then, the equivalent circuit of the optimized PSA is derived for optimization of the circuits. Finally, the parameters of the RF energy harvesting circuit are optimized to maximize the output power using the GA. It is shown that the present optimization increases the output power by a factor of five. The manufactured energy harvester starts working when the input electric field is greater than 0.5 V/m.
Computational tools and algorithms for designing customized synthetic genes.
Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris
2014-01-01
Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
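The "first generation" objective mentioned above, codon usage optimization, reduces to picking the highest-frequency synonymous codon for each residue. The sketch below illustrates that; the usage frequencies are illustrative placeholders, not a real organism's codon table, and `CODONS` covers only three amino acids for brevity:

```python
# Illustrative (made-up) codon usage frequencies for three amino acids.
CODONS = {
    "M": {"ATG": 1.00},
    "F": {"TTT": 0.45, "TTC": 0.55},
    "K": {"AAA": 0.74, "AAG": 0.26},
}

def optimize_codons(protein):
    """Most-frequent-codon optimization: for each residue, emit the
    synonymous codon with the highest usage frequency."""
    return "".join(max(CODONS[aa], key=CODONS[aa].get) for aa in protein)

seq = optimize_codons("MFK")
```

Multi-objective designs (e.g. also avoiding restriction sites) turn this greedy per-residue choice into the hard combinatorial search the review discusses, which is why the surveyed tools fall back on heuristics.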
A Parallel Genetic Algorithm for Automated Electronic Circuit Design
NASA Technical Reports Server (NTRS)
Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris
2000-01-01
Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
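The generational loop described above (random initial pool, probabilistic parent selection, recombination of bit strings, small random mutation) can be sketched on the OneMax toy problem. Population size, rates, and the tournament-style selection are illustrative assumptions:

```python
import random

def ga(fitness, n_bits=16, pop_size=30, gens=60, p_mut=0.02, seed=2):
    """Generational GA sketch over strings of 1s and 0s: tournament
    selection, one-point recombination, and per-bit random mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            # Probabilistic selection of a high-quality parent (tournament of 2).
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)           # one-point recombination
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = ga(sum)  # OneMax: fitness = number of 1 bits in the string
```

In the master-slave parallelization the paper describes, only the `fitness(...)` calls would be farmed out to slave nodes; the selection and breeding loop stays on the master, which is why the required communication bandwidth is low.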
Mechanical design of the Mars Pathfinder mission
NASA Technical Reports Server (NTRS)
Eisen, Howard Jay; Buck, Carl W.; Gillis-Smith, Greg R.; Umland, Jeffrey W.
1997-01-01
The Mars Pathfinder mission and the Sojourner rover is reported on, with emphasis on the various mission steps and the performance of the technologies involved. The mechanical design of mission hardware was critical to the success of the entry sequence and the landing operations. The various mechanisms employed are considered.
Mechanical design aspects of the HYVAX railgun
NASA Astrophysics Data System (ADS)
Fox, W. E.; Cummings, C. E.; Davidson, R. F.; Parker, J. V.
1984-03-01
The hypervelocity experiment (HYVAX) railgun is to produce projectile velocities greater than 15 km/s in a 13-m-long, round bore gun. The HYVAX railgun represents a sophisticated, state-of-the-art electromagnetic launcher (EML), which should provide significant information on the performance of these devices. The railgun consists of two kidney-shaped rails adjoining two insulators. Critical mechanical design considerations are related to the minimization of plasma penetration between the rails and insulators to prevent shorting, the minimization of bore deformation under magnetic loading, and the minimization of component stresses at expected loads. Attention is given to design criteria, mechanical design problems, and assembly and alignment.
Geometric methods for the design of mechanisms
NASA Astrophysics Data System (ADS)
Stokes, Ann Westagard
1993-01-01
Challenges posed by the process of designing robotic mechanisms have provided a new impetus to research in the classical subjects of kinematics, elastic analysis, and multibody dynamics. Historically, mechanism designers have considered these areas of analysis to be generally separate and distinct sciences. However, there are significant classes of problems which require a combination of these methods to arrive at a satisfactory solution. For example, both the compliance and the inertia distribution strongly influence the performance of a robotic manipulator. In this thesis, geometric methods are applied to the analysis of mechanisms where kinematics, elasticity, and dynamics play fundamental and interactive roles. Tools for the mathematical analysis, design, and optimization of a class of holonomic and nonholonomic mechanisms are developed. Specific contributions of this thesis include a network theory for elasto-kinematic systems. The applicability of the network theory is demonstrated by employing it to calculate the optimal distribution of joint compliance in a serial manipulator. In addition, the advantage of applying Lie group theoretic approaches to mechanisms requiring specific dynamic properties is demonstrated by extending Brockett's product of exponentials formula to the domain of dynamics. Conditions for the design of manipulators having inertia matrices which are constant in joint angle coordinates are developed. Finally, analysis and design techniques are developed for a class of mechanisms which rectify oscillations into secular motions. These techniques are applied to the analysis of free-floating chains that can reorient themselves in zero angular momentum processes and to the analysis of rattleback tops.
Mechanism design of continuous infrared lens
NASA Astrophysics Data System (ADS)
Su, Yan-qin; Zhang, Jing-xu; Lv, Tian-yu; Yang, Fei; Wang, Fu-guo
2013-09-01
With the development of infrared technology and materials, infrared zoom systems are playing an important role in the field of photoelectric observation, and demand for them is increasing rapidly. To satisfy the infrared tracking and imaging requirements of a vehicle-mounted optoelectronic device, several kinds of mechanical structure were considered; based on the character of the optical design, a cam mechanism was adopted for the zoom mechanism and a ball screw for the focusing mechanism. The cam is the key part of the zoom system, and its static, dynamic and thermal characteristics strongly affect system performance because of the vehicle's shaking and the wide range of temperature changes, so finite element analysis is necessary. The analysis shows satisfactory static performance: modal analysis gives a first-order natural frequency of 97.56 Hz for the cam, and thermal analysis shows that the deformation of the cam over a temperature difference of 80 °C is no more than 0.003 mm, indicating that its mechanical performance is good. Finally, the focusing mechanism was designed, and its precision and the encoder's theoretical resolving power were analyzed. The mechanism has the advantages of a simple transmission chain and low friction, reducing the transmission error; an absolute encoder is chosen to detect the displacement of the focusing mechanism. The focusing precision is 5 μm, and the encoder's theoretical resolving power is 0.015 μm. In addition, measures for suppressing stray radiation are put forward. Subsequent experiments showed that the infrared zoom system performs well, providing valuable experience in infrared zoom system design and adjustment.
Plant Stems: Functional Design and Mechanics
NASA Astrophysics Data System (ADS)
Speck, Thomas; Burgert, Ingo
2011-08-01
Plant stems are one of nature's most impressive mechanical constructs. Their sophisticated hierarchical structure and multifunctionality allow trees to grow more than 100 m tall. This review highlights the advanced mechanical design of plant stems from the integral level of stem structures down to the fiber-reinforced-composite character of the cell walls. Thereby we intend not only to provide insight into structure-function relationships at the individual levels of hierarchy but to further discuss how growth forms and habits of plant stems are closely interrelated with the peculiarities of their tissue and cell structure and mechanics. This concept is extended to a further key feature of plants, namely, adaptive growth as a reaction to mechanical perturbation and/or changing environmental conditions. These mechanical design principles of plant stems can serve as concept generators for advanced biomimetic materials and may inspire materials and engineering sciences research.
Algorithm design for a gun simulator based on image processing
NASA Astrophysics Data System (ADS)
Liu, Yu; Wei, Ping; Ke, Jun
2015-08-01
In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen: at the four corners and in the middle of the top and bottom edges. Three LEDs are lit in the odd frames, and the other three in the even frames. The simulator is furnished with one camera, which obtains an image of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera with a pinhole model, four equations can be found using the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the image of the LEDs is taken as the virtual shooting point, and the perspective transformation matrix is applied to its coordinate to obtain the virtual shooting point's coordinate in the world coordinate system. When a game player shoots a target about two meters away, the coordinate error of the method discussed in this paper is less than 10 mm. The system produces 65 coordinate results per second, which meets the requirement of a real-time system and shows that the algorithm is reliable and effective.
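The inter-frame difference and area-based spot extraction can be sketched as below. The image size, threshold, and flood-fill labeling are illustrative assumptions; the paper does not specify its exact segmentation routine:

```python
def frame_difference_spots(odd, even, thresh=50):
    """Sketch of the spot-extraction step: absolute inter-frame difference,
    thresholding, then centroid and area of each connected bright region."""
    h, w = len(odd), len(odd[0])
    diff = [[abs(odd[y][x] - even[y][x]) for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y in range(h):
        for x in range(w):
            if diff[y][x] > thresh and not seen[y][x]:
                stack, pix = [(y, x)], []
                seen[y][x] = True
                while stack:  # flood-fill one bright region (4-connectivity)
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx] and diff[ny][nx] > thresh):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pix) / len(pix)
                cx = sum(p[1] for p in pix) / len(pix)
                spots.append((cy, cx, len(pix)))  # centroid + area
    return spots

# Two synthetic 8x8 frames: one 2x2 LED blob lit in the odd frame only.
odd = [[0] * 8 for _ in range(8)]
for y in (3, 4):
    for x in (3, 4):
        odd[y][x] = 255
even = [[0] * 8 for _ in range(8)]
spots = frame_difference_spots(odd, even)
```

Differencing consecutive frames cancels the strong static background light, which is why only the alternating LEDs survive as bright spots.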
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.
Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Centre, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.
Controller design based on μ analysis and PSO algorithm.
Lari, Ali; Khosravi, Alireza; Rajabi, Farshad
2014-03-01
In this paper an evolutionary algorithm is employed to address the controller design problem based on μ analysis. Conventional solutions to the μ synthesis problem, such as the D-K iteration method, often lead to high-order, impractical controllers. In the proposed approach, a constrained optimization problem based on μ analysis is defined, and an evolutionary approach is employed to solve it; the goal is a more practical controller of lower order. A benchmark two-tank system is considered to evaluate the performance of the proposed approach. Simulation results show that the proposed controller performs more effectively than a high-order H(∞) controller and responds closely to the high-order D-K iteration controller, the common solution to the μ synthesis problem. PMID:24314832
Performance-based seismic design of steel frames utilizing colliding bodies algorithm.
Veladi, H
2014-01-01
A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to conventional design methods to show the strengths and weaknesses of the algorithm. PMID:25202717
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied in simulations to calculate the parameters of an unknown plant. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms, which are implemented in MATLAB. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm, so TLBO is suited to cases where accuracy is more essential than convergence speed.
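A minimal sketch of the standard TLBO scheme (teacher phase pulling learners toward the current best, learner phase of pairwise interaction, both with greedy acceptance) is given below. The toy least-squares objective stands in for the paper's IIR parameter identification; the population size, iteration count, and bounds are illustrative assumptions:

```python
import random

def tlbo(f, dim, bounds, pop_size=20, iters=100, seed=3):
    """TLBO sketch: note it has no algorithm-specific tuning parameters
    beyond the common population size and iteration count."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(hi, max(lo, v))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
        teacher = min(pop, key=f)                 # best learner is the teacher
        for i in range(pop_size):
            x = pop[i]
            tf = rng.choice((1, 2))               # teaching factor, 1 or 2
            cand = [clip(x[d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < f(x):                    # teacher phase, greedy accept
                pop[i] = x = cand
            other = pop[rng.randrange(pop_size)]  # learner phase, pairwise
            sign = 1 if f(x) < f(other) else -1
            cand = [clip(x[d] + sign * rng.random() * (x[d] - other[d]))
                    for d in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

# Toy stand-in for IIR identification: recover known coefficients by
# minimizing squared error to the "unknown plant" parameter vector.
target = [0.7, -0.2, 0.5]
err = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best = tlbo(err, dim=3, bounds=(-1, 1))
```

In the paper's setting, `err` would instead compare the output of the candidate IIR filter against the unknown plant's output over an excitation signal.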
The design of aerial camera focusing mechanism
NASA Astrophysics Data System (ADS)
Hu, Changchang; Yang, Hongtao; Niu, Haijun
2015-10-01
In order to maintain the imaging resolution of an aerial camera by compensating for defocus caused by changes in atmospheric temperature, pressure, oblique photographing distance and other environmental factors [1,2], and to meet the camera's overall design requirements for lower mass and smaller size, a linear focusing mechanism is designed. Through the focal-plane support, the focal plane assembly is connected to the focusing drive mechanism. Using a precision ball screw, the focusing mechanism transforms the rotary motion of the motor into linear motion of the focal plane assembly; a linear guide constrains the motion, a magnetic encoder detects the displacement response, and closed-loop control realizes accurate focusing. This paper presents the design scheme of the focusing mechanism and analyzes its error sources. The mechanism has the advantages of low friction and a simple transmission chain, reducing the transmission error effectively. The focal plane assembly is also analyzed by finite element analysis and given a lightweight design. The analysis shows that the precision of the focusing mechanism is better than 3 μm over a focusing range of ±2 mm.
Microfluidic serpentine antennas with designed mechanical tunability.
Huang, YongAn; Wang, Yezhou; Xiao, Lin; Liu, Huimin; Dong, Wentao; Yin, Zhouping
2014-11-01
This paper describes the design and characterization of microfluidic serpentine antennas with reversible stretchability and designed mechanical frequency modulation (FM). The microfluidic antennas are designed based on the Poisson's ratio of the elastomer in which the liquid alloy antenna is embedded, to controllably decrease, stabilize or increase its resonance frequency when being stretched. Finite element modelling was used in combination with experimental verification to investigate the effects of substrate dimensions and antenna aspect ratios on the FM sensitivity to uniaxial stretching. It could be designed within the range of -1.2 to 0.6 GHz per 100% stretch. When the aspect ratio of the serpentine antenna is between 1.0 and 1.5, the resonance frequency is stable under stretching, bending, and twisting. The presented microfluidic serpentine antenna design could be utilized in the field of wireless mobile communication for the design of wearable electronics, with a stable resonance frequency under dynamic applied strain up to 50%. PMID:25144304
Mars rover mechanisms designed for Rocky 4
NASA Technical Reports Server (NTRS)
Rivellini, Tommaso P.
1993-01-01
A Mars rover prototype vehicle named Rocky 4 was designed and built at JPL during the fall of 1991 and spring 1992. This vehicle is the fourth in a series of rovers designed to test vehicle mobility and navigation software. Rocky 4 was the first attempt to design a vehicle with 'flight-like' mass and functionality. It was consequently necessary to develop highly efficient mechanisms and structures to meet the vehicle's very tight mass limit of 3 kg for the entire mobility system (7 kg for the full system). This paper will discuss the key mechanisms developed for the rover's innovative drive and suspension system. These are the wheel drive and strut assembly, the rocker-bogie suspension mechanism and the differential pivot. The end-to-end design, analysis, fabrication and testing of these components will also be discussed, as will their performance during field testing. The lessons learned from Rocky 4 are already proving invaluable for the design of Rocky 6. Rocky 6 is currently being designed to fly on NASA's MESUR mission to Mars scheduled to launch in 1996.
Design of infrasound-detection system via adaptive LMSTDE algorithm
NASA Technical Reports Server (NTRS)
Khalaf, C. S.; Stoughton, J. W.
1984-01-01
A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasound. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. An LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
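The conventional TDE step reviewed above can be sketched as picking the lag that maximizes the cross-correlation between two sensor outputs. This is plain cross-correlation with unit weighting; the Roth prefiltering and the adaptive LMS variant the thesis develops are omitted, and the signals are synthetic:

```python
def estimate_delay(x, y):
    """Pick the lag maximizing the cross-correlation R(lag) = sum x[i]*y[i+lag].
    A positive result means y lags x (y is x delayed by that many samples)."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        val = sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# y is x delayed by 3 samples: the correlation peak should sit at lag 3.
x = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0, 1, 2, 3, 2, 1, 0, 0]
lag = estimate_delay(x, y)
```

With four sensors, pairwise lags like this one, combined with the known array geometry, yield the bearing of the infrasonic source.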
Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm
Svečko, Rajko
2014-01-01
This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749
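The genetic algorithm the paper employs is differential evolution. A minimal DE/rand/1/bin sketch is shown below; the paper's multiobjective criteria are collapsed here to a single scalar objective (a sphere function), and the population size, F, and CR values are illustrative assumptions:

```python
import random

def differential_evolution(f, dim, bounds, pop_size=20, gens=80,
                           F=0.7, CR=0.9, seed=4):
    """DE sketch: mutate with a scaled difference of two random members,
    binomial crossover with the target, then greedy one-to-one selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [a[d] + F * (b[d] - c[d])
                     if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(hi, max(lo, v)) for v in trial]
            if f(trial) <= f(pop[i]):    # greedy selection keeps the better
                pop[i] = trial
    return min(pop, key=f)

# Toy scalarized objective standing in for the paper's combined robustness,
# controller-stability, and time-performance criteria.
best = differential_evolution(lambda v: sum(x * x for x in v),
                              dim=4, bounds=(-5, 5))
```

In the paper's setting, the decision vector would hold the coefficients of the two RIC controllers and `f` would aggregate the polynomial-deviation and stability objectives.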
A homogeneous superconducting magnet design using a hybrid optimization algorithm
NASA Astrophysics Data System (ADS)
Ni, Zhipeng; Wang, Qiuliang; Liu, Feng; Yan, Luguang
2013-12-01
This paper employs a hybrid optimization algorithm with a combination of linear programming (LP) and nonlinear programming (NLP) to design the highly homogeneous superconducting magnets for magnetic resonance imaging (MRI). The whole work is divided into two stages. The first LP stage provides a global optimal current map with several non-zero current clusters, and the mathematical model for the LP was updated by taking into account the maximum axial and radial magnetic field strength limitations. In the second NLP stage, the non-zero current clusters were discretized into practical solenoids. The superconducting conductor consumption was set as the objective function both in the LP and NLP stages to minimize the construction cost. In addition, the peak-peak homogeneity over the volume of imaging (VOI), the scope of 5 Gauss fringe field, and maximum magnetic field strength within superconducting coils were set as constraints. The detailed design process for a dedicated 3.0 T animal MRI scanner was presented. The homogeneous magnet produces a magnetic field quality of 6.0 ppm peak-peak homogeneity over a 16 cm by 18 cm elliptical VOI, and the 5 Gauss fringe field was limited within a 1.5 m by 2.0 m elliptical region.
Orbit design and estimation for surveillance missions using genetic algorithms
NASA Astrophysics Data System (ADS)
Abdelkhalik, Osama Mohamed Omar
2005-11-01
The problem of observing a given set of Earth target sites within an assigned time frame is examined. Attention is given mainly to visiting these sites as sub-satellite nadir points. Solutions to this problem in the literature require thrusters to continuously maneuver the satellite from one site to another. A natural solution is proposed: a gravitational orbit that enables the spacecraft to satisfy the mission requirements without maneuvering. Optimization of a penalty function is performed to find natural solutions for satellite orbit configurations. This penalty function depends on the mission objectives. Two mission objectives are considered: maximum observation time and maximum resolution. The penalty function possesses multiple minima, and a genetic algorithm technique is used to solve this problem. If no single orbit satisfies the mission requirements, a multi-orbit solution is proposed, in which the set of target sites is split into two groups. The developed algorithm is then used to search for a natural solution for each group. The satellite has to be maneuvered between the two solution orbits. Genetic algorithms are used to find the optimal orbit transfer between the two orbits using impulsive thrusters. A new formulation for solving the orbit maneuver problem using genetic algorithms is developed. The developed formulation searches for a minimum-fuel-consumption maneuver and guarantees that the satellite will be transferred exactly to the final orbit even if the solution is non-optimal. The results obtained demonstrate the feasibility of finding natural solutions for many case studies. The problem of designing a suitable satellite constellation for Earth observing applications is also addressed. Two cases are considered. The first is remote sensing missions for a particular region with high frequency and small swath width. The second is interferometry radar Earth observation missions. In satellite
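The reason a genetic algorithm is chosen here — a penalty function with many minima — can be sketched with a toy stand-in. The fitness function below (the 2-D Rastrigin function), the population size, and the operators are illustrative assumptions, not the thesis' actual penalty or orbit encoding; what carries over is only the idea that population-based search can escape the local minima that trap gradient methods.

```python
import numpy as np

rng = np.random.default_rng(1)

def penalty(x):
    # Multimodal stand-in for the observation-time/resolution penalty:
    # the 2-D Rastrigin function (global minimum 0 at the origin, with a
    # lattice of local minima that defeat gradient-based search).
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

pop = rng.uniform(-5.12, 5.12, size=(60, 2))        # random initial designs
for gen in range(200):
    order = np.argsort(penalty(pop))
    parents = pop[order[:30]]                       # truncation selection
    moms = parents[rng.integers(0, 30, 60)]
    dads = parents[rng.integers(0, 30, 60)]
    alpha = rng.random((60, 1))
    children = alpha * moms + (1 - alpha) * dads    # blend crossover
    children += rng.normal(0.0, 0.1, children.shape)  # Gaussian mutation
    children[0] = parents[0]                        # elitism: keep the best
    pop = children
best = pop[np.argmin(penalty(pop))]
print(best, float(penalty(best)))
```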
Novel design solutions for fishing reel mechanisms
NASA Astrophysics Data System (ADS)
Lovasz, Erwin-Christian; Modler, Karl-Heinz; Neumann, Rudolf; Gruescu, Corina Mihaela; Perju, Dan; Ciupe, Valentin; Maniu, Inocentiu
2015-07-01
Currently, the various reels on the market differ in the type of mechanism that achieves the winding and unwinding of the line. Designers aim to obtain a linear transmission function by means of a simple and small-sized mechanism. However, the present solutions are not satisfactory because of large deviations from linearity of the transmission function and the complexity of the mechanical schema. A novel solution for the reel spool mechanism is proposed, and its kinematic schema and synthesis method are described. The kinematic schema of the chosen mechanism is based on a noncircular gear in series with a scotch-yoke mechanism. The yoke is driven by a stud fixed on the driving noncircular gear. The drawbacks of other models regarding the effects occurring at the ends of the spool are eliminated by achieving an appropriate transmission function of the spool. The linear function approximation, with curved end-arches appropriately computed to ensure mathematical continuity, is very good. Experimental results on the mechanism model validate the theoretical approach. The developed mechanism solution is recorded under a reel spool mechanism patent.
Combinatorial design of textured mechanical metamaterials.
Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin
2016-07-28
The structural complexity of metamaterials is limitless, but, in practice, most designs comprise periodic architectures that lead to materials with spatially homogeneous features. More advanced applications in soft robotics, prosthetics and wearable technology involve spatially textured mechanical functionality, which requires aperiodic architectures. However, a naive implementation of such structural complexity invariably leads to geometrical frustration (whereby local constraints cannot be satisfied everywhere), which prevents coherent operation and impedes functionality. Here we introduce a combinatorial strategy for the design of aperiodic, yet frustration-free, mechanical metamaterials that exhibit spatially textured functionalities. We implement this strategy using cubic building blocks (voxels) that deform anisotropically, a local stacking rule that allows cooperative shape changes by guaranteeing that deformed building blocks fit together as in a three-dimensional jigsaw puzzle, and three-dimensional printing. These aperiodic metamaterials exhibit long-range holographic order, whereby the two-dimensional pixelated surface texture dictates the three-dimensional interior voxel arrangement. They also act as programmable shape-shifters, morphing into spatially complex, but predictable and designable, shapes when uniaxially compressed. Finally, their mechanical response to compression by a textured surface reveals their ability to perform sensing and pattern analysis. Combinatorial design thus opens up a new avenue towards mechanical metamaterials with unusual order and machine-like functionalities. PMID:27466125
Design considerations for mechanical face seals
NASA Technical Reports Server (NTRS)
Ludwig, L. P.; Greiner, H. F.
1980-01-01
Two companion reports deal with design considerations for improving the performance of mechanical face seals, one of a family of devices used in the general area of fluid sealing of rotating shafts. One report deals with the basic seal configuration and the other with lubrication of the seal.
Augmented Lagrangian Particle Swarm Optimization in Mechanism Design
NASA Astrophysics Data System (ADS)
Sedlaczek, Kai; Eberhard, Peter
The problem of optimizing nonlinear multibody systems is in general nonlinear and nonconvex. This is especially true for the dimensional synthesis process of rigid body mechanisms, where often only local solutions might be found with gradient-based optimization methods. An attractive alternative for solving such multimodal optimization problems is the Particle Swarm Optimization (PSO) algorithm. This stochastic solution technique allows a derivative-free search for a global solution without the need for any initial design. In this work, we present an extension to the basic PSO algorithm in order to solve the problem of dimensional synthesis with nonlinear equality and inequality constraints. It utilizes the Augmented Lagrange Multiplier Method in combination with an advanced non-stationary penalty function approach that does not rely on excessively large penalty factors for sufficiently accurate solutions. Although the PSO method is even able to solve nonsmooth and discrete problems, this augmented algorithm can additionally calculate accurate Lagrange multiplier estimates for differentiable formulations, which are helpful in the analysis process of the optimization results. We demonstrate this method and show its very promising applicability to the constrained dimensional synthesis process of rigid body mechanisms.
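The combination of an augmented Lagrangian with PSO can be sketched on a toy equality-constrained problem. This is a minimal sketch of the basic idea only: the objective, the constraint, the PSO parameters, and the fixed penalty factor below are all illustrative assumptions, and the chapter's actual algorithm uses an advanced non-stationary penalty approach rather than this plain first-order multiplier update.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    return (x[..., 0] - 1.0) ** 2 + (x[..., 1] - 2.0) ** 2   # objective

def h(x):
    return x[..., 0] + x[..., 1] - 2.0   # equality constraint, h(x) = 0

lam, r = 0.0, 10.0   # Lagrange multiplier estimate and (moderate) penalty factor

def aug_lagrangian(x):
    # Augmented Lagrangian: accurate solutions without an excessively
    # large penalty factor, and lam converges to the true multiplier.
    return f(x) + lam * h(x) + 0.5 * r * h(x) ** 2

def pso(obj, n=30, iters=80):
    # Bare-bones global-best PSO: inertia, cognitive, and social terms.
    x = rng.uniform(-5.0, 5.0, (n, 2))
    v = np.zeros((n, 2))
    pbest, pval = x.copy(), obj(x)
    for _ in range(iters):
        g = pbest[np.argmin(pval)]
        v = (0.7 * v
             + 1.5 * rng.random((n, 1)) * (pbest - x)
             + 1.5 * rng.random((n, 1)) * (g - x))
        x = x + v
        val = obj(x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
    return pbest[np.argmin(pval)]

# Outer loop: solve the unconstrained subproblem with PSO, then apply the
# first-order multiplier update from the constraint residual.
for _ in range(5):
    xb = pso(aug_lagrangian)
    lam = lam + r * float(h(xb))
print(xb, lam)   # analytic optimum: x = (0.5, 1.5), multiplier = 1
```

The multiplier estimate lam that falls out of the outer loop is exactly the kind of Lagrange multiplier information the authors point to as useful for analyzing the optimization results.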
Design definition of a mechanical capacitor
NASA Technical Reports Server (NTRS)
Michaelis, T. D.; Schlieban, E. W.; Scott, R. D.
1977-01-01
A design study and analysis of a 10 kW-hr, 15 kW mechanical capacitor system was performed. It was determined that magnetically supported wheels constructed of advanced composites have the potential for high energy density and high power density. Structural concepts are analyzed that yield the highest energy density of any structural design yet reported. Particular attention was paid to the problem of 'friction' caused by magnetic and I²R losses in the suspension and motor-generator subsystems, and low design friction levels have been achieved. The potentially long shelf life of this system, and the absence of wearing parts, provide superior performance over conventional flywheels supported with mechanical bearings. Costs and economies of energy storage wheels were reviewed briefly.
Designing a Micro-Mechanical Transistor
Mainieri, R.
1999-06-03
This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Micro-mechanical electronic systems are chips with moving parts. They are fabricated with the same techniques that are used to manufacture electronic chips, sharing their low cost. Micro-mechanical chips can also contain electronic components. By combining mechanical parts with electronic parts, it becomes possible to process signals mechanically. To achieve designs comparable to those obtained with electronic components, it is necessary to have a mechanical device that can change its behavior in response to a small input: a mechanical transistor. The proposed work will develop design tools for these complex-shaped resonant structures using the geometrical ray technique. To overcome the limitations of geometrical ray chaos, the dynamics of the rays will be studied using methods developed for the study of nonlinear dynamical systems. This leads to numerical methods that execute well in parallel computer architectures, using a limited amount of memory and no inter-process communication.
Lapidoth, Gideon D.; Baran, Dror; Pszolla, Gabriele M.; Norn, Christoffer; Alon, Assaf; Tyka, Michael D.; Fleishman, Sarel J.
2016-01-01
Computational design of protein function has made substantial progress, generating new enzymes, binders, inhibitors, and nanomaterials not previously seen in nature. However, the ability to design new protein backbones for function – essential to exert control over all polypeptide degrees of freedom – remains a critical challenge. Most previous attempts to design new backbones computed the mainchain from scratch. Here, instead, we describe a combinatorial backbone and sequence optimization algorithm called AbDesign, which leverages the large number of sequences and experimentally determined molecular structures of antibodies to construct new antibody models, dock them against target surfaces and optimize their sequence and backbone conformation for high stability and binding affinity. We used the algorithm to produce antibody designs that target the same molecular surfaces as nine natural, high-affinity antibodies; in six the backbone conformation at the core of the antibody binding surface is similar to the natural antibody targets, and in several cases sequence and sidechain conformations recapitulate those seen in the natural antibodies. In the case of an anti-lysozyme antibody, designed antibody CDRs at the periphery of the interface, such as L1 and H2, show a greater backbone conformation diversity than the CDRs at the core of the interface, and increase the binding surface area compared to the natural antibody, which could enhance affinity and specificity. PMID:25670500
AHTR Mechanical, Structural, And Neutronic Preconceptual Design
Varma, Venugopal Koikal; Holcomb, David Eugene; Peretz, Fred J; Bradley, Eric Craig; Ilas, Dan; Qualls, A L; Zaharia, Nathaniel M
2012-10-01
This report provides an overview of the mechanical, structural, and neutronic aspects of the Advanced High Temperature Reactor (AHTR) design concept. The AHTR is a design concept for a large output Fluoride salt cooled High-temperature Reactor (FHR) that is being developed to enable evaluation of the technology hurdles remaining to be overcome prior to FHRs becoming a commercial reactor class. This report documents the incremental AHTR design maturation performed over the past year and is focused on advancing the design concept to a level of a functional, self-consistent system. The AHTR employs plate type coated particle fuel assemblies with rapid, off-line refueling. Neutronic analysis of the core has confirmed the viability of a 6-month 2-batch cycle with 9 weight-percent enriched uranium fuel. Refueling is intended to be performed automatically under visual guidance using dedicated robotic manipulators. The present design intent is for used fuel to be stored inside of containment for at least 6 months and then transferred to local dry wells for intermediate term, on-site storage. The mechanical and structural concept development effort has included an emphasis on transportation and constructability to minimize construction costs and schedule. The design intent is that all components be factory fabricated into rail transportable modules that are assembled into subsystems at an on-site workshop prior to being lifted into position using a heavy-lift crane in an open-top style construction. While detailed accident identification and response sequence analysis has yet to be performed, the design concept incorporates multiple levels of radioactive material containment including fully passive responses to all identified design basis or non-very-low frequency beyond design basis accidents. Key building design elements include: 1) below grade siting to minimize vulnerability to aircraft impact, 2) multiple natural circulation decay heat rejection chimneys, 3) seismic
NASA Astrophysics Data System (ADS)
Kanagaraj, G.; Ponnambalam, S. G.; Jawahar, N.; Mukund Nilakantan, J.
2014-10-01
This article presents an effective hybrid cuckoo search and genetic algorithm (HCSGA) for solving engineering design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables. The proposed algorithm, HCSGA, is first applied to 13 standard benchmark constrained optimization functions and subsequently used to solve three well-known design problems reported in the literature. The numerical results obtained by HCSGA show competitive performance with respect to recent algorithms for constrained design optimization problems.
Global and Local Optimization Algorithms for Optimal Signal Set Design
Kearsley, Anthony J.
2001-01-01
The problem of choosing an optimal signal set for non-Gaussian detection was reduced to a smooth inequality constrained mini-max nonlinear programming problem by Gockenbach and Kearsley. Here we consider the application of several optimization algorithms, both global and local, to this problem. The most promising results are obtained when special-purpose sequential quadratic programming (SQP) algorithms are embedded into stochastic global algorithms.
NASA Astrophysics Data System (ADS)
Strzałka, Dominik; Grabowski, Franciszek
Tsallis entropy, introduced in 1988, is considered to open new possibilities for constructing a generalized thermodynamic basis for statistical physics, extending classical Boltzmann-Gibbs thermodynamics to nonequilibrium states. During the last two decades this q-generalized theory has been successfully applied to a considerable number of physically interesting complex phenomena. The authors present a new view of the analysis of the computational complexity of algorithms, taking as an example a possible thermodynamic basis of the sorting process and its dynamical behavior. The classical approach to analysing the amount of resources needed for algorithmic computation assumes that the contact between the algorithm and the input data stream is a simple system, because only the worst-case time complexity is considered, in order to minimize the dependency on specific instances. This article shows, however, that the process can be governed by long-range dependencies with a thermodynamic basis expressed by the specific shapes of probability distributions. The classical approach cannot describe all properties of processes (especially the dynamical behavior of algorithms) that can appear during algorithmic processing on a computer, even if the average-case analysis of computational complexity is taken into account. The importance of this problem is still neglected, especially in view of two facts. First, computer systems nowadays also work in an interactive mode, and a proper thermodynamic basis is needed for a better understanding of their possible behavior. Second, computers are Turing machines from a mathematical point of view, but in reality they are physical implementations that need energy for processing, so the problem of entropy production appears. That is why a thermodynamic analysis of the possible behavior of the simple insertion sort algorithm is given here.
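The instance-dependence of insertion sort's cost, which motivates the authors' critique of pure worst-case analysis, is easy to see empirically. The sketch below is not from the article; it simply counts key comparisons on sorted, reverse-sorted, and random inputs of size n = 200.

```python
import random

def insertion_sort_comparisons(a):
    # Sorts a copy of `a` and returns the number of key comparisons, which
    # depends strongly on the instance: n - 1 on sorted input, n(n-1)/2 on
    # reverse-sorted input, roughly n^2/4 on a random permutation.
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]   # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 200
best = insertion_sort_comparisons(range(n))          # sorted: 199 comparisons
worst = insertion_sort_comparisons(range(n, 0, -1))  # reversed: 19900 comparisons
rng = random.Random(0)
avg = sum(insertion_sort_comparisons(rng.sample(range(n), n))
          for _ in range(50)) / 50
print(best, worst, avg)
```

The spread between the three numbers is the distribution of costs that the worst-case figure alone hides, and whose shape the article ties to a thermodynamic description.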
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro
2010-12-01
The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as the partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in this interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T ∈ (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument at the same level of mathematical strictness as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT that realizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
Computational Design of Animated Mechanical Characters
NASA Astrophysics Data System (ADS)
Coros, Stelian; Thomaszewski, Bernhard; DRZ Team Team
2014-03-01
A key factor in the appeal of modern CG movies and video games is that the virtual worlds they portray place no bounds on what can be imagined. Rapid manufacturing devices hold the promise of bringing this type of freedom to our own world, by enabling the fabrication of physical objects whose appearance, deformation behaviors, and motions can be precisely specified. In order to unleash the full potential of this technology, however, computational design methods that create digital content suitable for fabrication need to be developed. In recent work, we presented a computational design system that allows casual users to create animated mechanical characters. Given an articulated character as input, the user designs the animated character by sketching motion curves indicating how it should move. For each motion curve, our framework creates an optimized mechanism that reproduces it as closely as possible. The resulting mechanisms are attached to the character and then connected to each other using gear trains, which are created in a semi-automated fashion. The mechanical assemblies generated with our system can be driven with a single input driver, such as a hand-operated crank or an electric motor, and they can be fabricated using rapid prototyping devices.
Sampling design for classifying contaminant level using annealing search algorithms
NASA Astrophysics Data System (ADS)
Christakos, George; Killam, Bart R.
1993-12-01
A stochastic method for sampling spatially distributed contaminant level is presented. The purpose of sampling is to partition the contaminated region into zones of high and low pollutant concentration levels. In particular, given an initial set of observations of a contaminant within a site, it is desired to find a set of additional sampling locations in a way that takes into consideration the spatial variability characteristics of the site and optimizes certain objective functions emerging from the physical, regulatory and monetary considerations of the specific site cleanup process. Since the interest is in classifying the domain into zones above and below a pollutant threshold level, a natural criterion is the cost of misclassification. The resulting objective function is the expected value of a spatial loss function associated with sampling. Stochastic expectation involves the joint probability distribution of the pollutant level and its estimate, where the latter is calculated by means of spatial estimation techniques. Actual computation requires the discretization of the contaminated domain. As a consequence, any reasonably sized problem results in combinatorics precluding an exhaustive search. The use of an annealing algorithm, although suboptimal, can find a good set of future sampling locations quickly and efficiently. In order to obtain insight about the parameters and the computational requirements of the method, an example is discussed in detail. The implementation of spatial sampling design in practice will provide the model inputs necessary for waste site remediation, groundwater management, and environmental decision making.
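A toy version of the annealing search can illustrate the idea. The concentration field, the weighting, the move set, and the cooling schedule below are all illustrative assumptions, not the paper's site model or loss function; the loss merely mimics a misclassification-cost criterion by weighting each cell's distance to its nearest sample by the cell's proximity to the pollutant threshold.

```python
import math
import random

rng = random.Random(3)

# Hypothetical site: a 15x15 grid with a smooth concentration field and a
# classification threshold of 0.5.  Cells whose concentration lies near the
# threshold are the easiest to misclassify, so leaving them far from every
# sampling location is penalised most.
cells = [(i, j) for i in range(15) for j in range(15)]
conc = {c: 0.5 + 0.5 * math.sin(c[0] / 4.0) * math.cos(c[1] / 4.0) for c in cells}
weight = {c: math.exp(-20.0 * abs(conc[c] - 0.5)) for c in cells}

def loss(samples):
    # Misclassification-cost proxy: threshold-proximity weight times the
    # distance from each cell to its nearest sampling location.
    return sum(w * min(math.dist(c, s) for s in samples)
               for c, w in weight.items())

k = 10                                   # number of additional sampling sites
current = rng.sample(cells, k)
cur_loss = init_loss = loss(current)
best_set, best_loss = list(current), cur_loss
T = 5.0
for _ in range(1500):
    cand = list(current)
    cand[rng.randrange(k)] = rng.choice(cells)        # relocate one sampler
    cand_loss = loss(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse
    # configurations to escape local minima of the combinatorial landscape.
    if cand_loss < cur_loss or rng.random() < math.exp((cur_loss - cand_loss) / T):
        current, cur_loss = cand, cand_loss
        if cur_loss < best_loss:
            best_set, best_loss = list(current), cur_loss
    T *= 0.998                                        # geometric cooling
print(init_loss, best_loss)
```

As in the paper, the annealing result is suboptimal in general, but it avoids the exhaustive search over all C(225, 10) candidate sampling designs.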
NASA Astrophysics Data System (ADS)
Tancret, F.
2013-06-01
A new alloy design procedure is proposed, combining in a single computational tool several modelling and predictive techniques that have already been used and assessed in the field of materials science and alloy design: a genetic algorithm is used to optimize the alloy composition for target properties and performance on the basis of the prediction of mechanical properties (estimated by Gaussian process regression of data on existing alloys) and of microstructural constitution, stability and processability (evaluated by computational thermodynamics). These tools are integrated into a single Matlab programme. An example is given for the design of a new nickel-base superalloy for future power plant applications (such as the ultra-supercritical (USC) coal-fired plant or the high-temperature gas-cooled nuclear reactor (HTGCR or HTGR)), where the selection criteria include cost, oxidation and creep resistance around 750 °C, long-term stability at service temperature, forgeability, weldability, etc.
Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1991-01-01
The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.
Mechanical engineering capstone senior design textbook
NASA Astrophysics Data System (ADS)
Barrett, Rolin Farrar, Jr.
This textbook is intended to bridge the gap between mechanical engineering equations and mechanical engineering design. To that end, real-world examples are used throughout the book. Also, the material is presented in an order that follows the chronological sequence of coursework that must be performed by a student in the typical capstone senior design course in mechanical engineering. In the process of writing this book, the author surveyed the fifty largest engineering schools (as ranked by the American Society of Engineering Education, or ASEE) to determine what engineering instructors are looking for in a textbook. The survey results revealed a clear need for a textbook written expressly for the capstone senior design course as taught throughout the nation. This book is designed to meet that need. This text was written using an organizational method that the author calls the General Topics Format. The format gives the student reader rapid access to the information contained in the text. All manufacturing methods, and some other material presented in this text, have been presented using the General Topics Format. The text uses examples to explain the importance of understanding the environment in which the product will be used and to discuss product abuse. The safety content contained in this text is unique. The Safety chapter teaches engineering ethics and includes a step-by-step guide to resolving ethical conflicts. The chapter includes explanations of rules, recommendations, standards, consensus standards, key safety concepts, and the legal implications of product failure. Key design principles have been listed and explained. The text provides easy-to-follow design steps, helpful for both the student and new engineer. Prototyping is presented as consisting of three phases: organization, building, and refining. A chapter on common manufacturing methods is included for reference.
AHTR Mechanical, Structural, and Neutronic Preconceptual Design
Varma, V.K.; Holcomb, D.E.; Peretz, F.J.; Bradley, E.C.; Ilas, D.; Qualls, A.L.; Zaharia, N.M.
2012-09-15
This report provides an overview of the mechanical, structural, and neutronic aspects of the Advanced High Temperature Reactor (AHTR) design concept. The AHTR is a design concept for a large output Fluoride salt cooled High-temperature Reactor (FHR) that is being developed to enable evaluation of the technology hurdles remaining to be overcome prior to FHRs becoming an option for commercial reactor deployment. This report documents the incremental AHTR design maturation performed over the past year and is focused on advancing the design concept to a level of a functional, self-consistent system. The reactor concept development remains at a preconceptual level of maturity. While the overall appearance of an AHTR design is anticipated to be similar to the current concept, optimized dimensions will differ from those presented here. The AHTR employs plate type coated particle fuel assemblies with rapid, off-line refueling. Neutronic analysis of the core has confirmed the viability of a 6-month two-batch cycle with 9 wt. % enriched uranium fuel. Refueling is intended to be performed automatically under visual guidance using dedicated robotic manipulators. The report includes a preconceptual design of the manipulators, the fuel transfer system, and the used fuel storage system. The present design intent is for used fuel to be stored inside of containment for at least six months and then transferred to local dry wells for intermediate term, on-site storage. The mechanical and structural concept development effort has included an emphasis on transportation and constructability to minimize construction costs and schedule. The design intent is that all components be factory fabricated into rail transportable modules that are assembled into subsystems at an on-site workshop prior to being lifted into position using a heavy-lift crane in an open-top style construction. While detailed accident identification and response sequence analysis has yet to be performed, the design
Mechanical design of SERT 2 thruster system
NASA Technical Reports Server (NTRS)
Zavesky, R. J.; Hurst, E. B.
1972-01-01
The mechanical design of the mercury bombardment thruster that was tested on SERT 2 is described. The report shows how structural, thermal, electrical, material compatibility, and neutral mercury coating considerations affected the design and integration of the subsystems and components. The SERT 2 spacecraft with two thrusters was launched on February 3, 1970. One thruster operated for 3782 hours and the other for 2011 hours. A high-voltage short resulting from buildup of loose eroded material was believed to be the cause of failure.
Designing Stochastic Optimization Algorithms for Real-world Applications
NASA Astrophysics Data System (ADS)
Someya, Hiroshi; Handa, Hisashi; Koakutsu, Seiichi
This article presents a review of recent advances in stochastic optimization algorithms. Novel algorithms achieving highly adaptive and efficient searches, theoretical analyses to deepen our understanding of search behavior, successful implementation on parallel computers, attempts to build benchmark suites for industrial use, and techniques applied to real-world problems are included. A list of resources is provided.
Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design
Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. A case study of a multiphase, multiproduct reverse supply chain network illustrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057
Opto-mechanical design of PANIC
NASA Astrophysics Data System (ADS)
Fried, Josef W.; Baumeister, Harald; Huber, Armin; Laun, Werner; Rohloff, Ralf-Rainer; Concepción Cárdenas, M.
2010-07-01
PANIC, the Panoramic Near-Infrared Camera, is a new instrument for the Calar Alto Observatory. A 4k x 4k detector yields a field of view of 0.5 x 0.5 degrees at a pixel scale of 0.45 arcsec/pixel at the 2.2 m telescope. PANIC can also be used at the 3.5 m telescope with half the pixel scale. The optics consist of 9 lenses and 3 folding mirrors. Mechanical tolerances are as small as 50 microns for some elements. PANIC will have a low thermal background due to cold stops. Read-out is done with MPIA's own new electronics, which allow read-out of 132 channels in parallel. Weight and size limits lead to interesting design features. Here we describe the opto-mechanical design.
Degradation mechanisms and accelerated aging test design
Clough, R L; Gillen, K T
1985-01-01
The fundamental mechanisms underlying the chemical degradation of polymers can change as a function of environmental stress level. When this occurs, it greatly complicates any attempt to use accelerated tests for predicting long-term material degradation behaviors. Understanding how degradation mechanisms can change at different stress levels facilitates both the design and the interpretation of aging tests. Oxidative degradation is a predominant mechanism for many polymers exposed to a variety of different environments in the presence of air, and there are two mechanistic considerations which are widely applicable to material oxidation. One involves a physical process, oxygen diffusion, as a rate-limiting step. This mechanism can predominate at high stress levels. The second is a chemical process, the time-dependent decomposition of peroxide species. This leads to chain branching and can become a rate-controlling factor at lower stress levels involving time-scales applicable to use environments. The authors describe methods for identifying the operation of these mechanisms and illustrate the dramatic influence they can have on the degradation behaviors of a number of polymer types. Several commonly used approaches to accelerated aging tests are discussed in light of the behaviors which result from changes in degradation mechanisms. 9 references, 4 figures.
Copps, Kevin D.; Carnes, Brian R.
2008-04-01
We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.
Mechanism Design for Incentivizing Social Media Contributions
NASA Astrophysics Data System (ADS)
Singh, Vivek K.; Jain, Ramesh; Kankanhalli, Mohan
Despite recent advancements in user-driven social media platforms, tools for studying user behavior patterns and motivations remain primitive. We highlight the voluntary nature of user contributions: users can choose when (and when not) to contribute to the common media pool. A game-theoretic framework is proposed to study the dynamics of social media networks where contribution costs are individual but gains are common. We model users as rational selfish agents, and consider domain attributes like voluntary participation, virtual reward structure, network effect, and public sharing to model the dynamics of this interaction. The resulting model describes the most appropriate contribution strategy from each user's perspective and also highlights issues like the 'free-rider' problem and individual rationality leading to irrational (i.e., sub-optimal) group behavior. We also take the perspective of the system designer, who is interested in finding the best incentive mechanisms to influence the selfish end-users so that the overall system utility is maximized. We propose and compare multiple mechanisms (based on optimal bonus payment, social incentive leveraging, and second-price auction) to study how a system designer can exploit the selfishness of its users to design incentive mechanisms that improve the overall task-completion probability and system performance, while possibly still benefiting the individual users.
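As a rough illustration of the last of these mechanisms, a sealed-bid second-price (Vickrey) auction can be sketched in a few lines; the function name and bid format are illustrative, not taken from the paper:

```python
def second_price_auction(bids):
    """Highest bidder wins but pays the second-highest bid (Vickrey rule)."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

winner, price = second_price_auction({"alice": 5, "bob": 9, "carol": 7})
# bob wins and pays 7, the second-highest bid
```

Charging the winner the second-highest bid makes truthful bidding a dominant strategy, which is why this class of mechanism is attractive for eliciting honest valuations from selfish users.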
Mechanical Design of Carbon Ion Optics
NASA Technical Reports Server (NTRS)
Haag, Thomas
2005-01-01
Carbon Ion Optics are expected to provide much longer thruster life due to their resistance to sputter erosion. There are a number of different forms of carbon that have been used for fabricating ion thruster optics. The mechanical behavior of carbon is much different than that of most metals, and poses unique design challenges. In order to minimize mission risk, the behavior of carbon must be well understood, and components designed within material limitations. Thermal expansion of the thruster structure must be compatible with thermal expansion of the carbon ion optics. Specially designed interfaces may be needed so that grid gap and aperture alignment are not adversely affected by dissimilar material properties within the thruster. The assembled thruster must be robust and tolerant of launch vibration. The following paper lists some of the characteristics of various carbon materials. Several past ion optics designs are discussed, identifying strengths and weaknesses. Electrostatics and material science are not emphasized so much as the mechanical behavior and integration of grid electrodes into an ion thruster.
The potential of genetic algorithms for conceptual design of rotor systems
NASA Technical Reports Server (NTRS)
Crossley, William A.; Wells, Valana L.; Laananen, David H.
1993-01-01
The capabilities of genetic algorithms as a non-calculus based, global search method make them potentially useful in the conceptual design of rotor systems. Coupling reasonably simple analysis tools to the genetic algorithm was accomplished, and the resulting program was used to generate designs for rotor systems to match requirements similar to those of both an existing helicopter and a proposed helicopter design. This provides a comparison with the existing design and also provides insight into the potential of genetic algorithms in design of new rotors.
Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications
Technology Transfer Automated Retrieval System (TEKTRAN)
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...
Lansce Wire Scanning Diagnostics Device Mechanical Design
Rodriguez Esparza, Sergio; Batygin, Yuri K.; Gilpatrick, John D.; Gruchalla, Michael E.; Maestas, Alfred J.; Pillai, Chandra; Raybun, Joseph L.; Sattler, F. D.; Sedillo, James Daniel; Smith, Brian G.
2011-01-01
The Accelerator Operations & Technology Division at Los Alamos National Laboratory operates a linear particle accelerator which utilizes 110 wire scanning diagnostics devices to gain position and intensity information of the proton beam. In the upcoming LANSCE improvements, 51 of these wire scanners are to be replaced with a new design, up-to-date technology and off-the-shelf components. This document outlines the requirements for the mechanical design of the LANSCE wire scanner and presents the recently developed linac wire scanner prototype. Additionally, this document presents the design modifications that have been implemented into the fabrication and assembly of this first linac wire scanner prototype. Also, this document will present the design for the second, third, and fourth wire scanner prototypes being developed. Prototypes 2 and 3 belong to a different section of the particle accelerator and therefore have slightly different design specifications. Prototype 4 is a modification of a previously used wire scanner in our facility. Lastly, the paper concludes with a plan for future work on the wire scanner development.
NASA Technical Reports Server (NTRS)
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high-dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction methods, and to design algorithms for the evaluation of IUE high-dispersion spectra. It was concluded that use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
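The Voigt profile chosen above is the convolution of a Gaussian core with a Lorentzian wing. A minimal numerical sketch of that convolution (NumPy only; grid size and parameters are illustrative, not the paper's extraction code):

```python
import numpy as np

def voigt_grid(sigma, gamma, half_width=15.0, n=2001):
    """Sample a Voigt profile on a regular grid by numerically convolving
    a unit-area Gaussian (width sigma) with a unit-area Lorentzian (width gamma)."""
    x = np.linspace(-half_width, half_width, n)
    dx = x[1] - x[0]
    gauss = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    lorentz = (gamma / np.pi) / (x**2 + gamma**2)
    v = np.convolve(gauss, lorentz, mode="same") * dx  # discrete convolution
    return x, v

x, v = voigt_grid(1.0, 0.5)  # Gaussian core, Lorentzian wings
```

The result is symmetric and near unit area (the truncated Lorentzian wings lose a little mass); fitting sigma and gamma per detector region is what the masks described above would supply.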
Analysis and optimal design of an underactuated finger mechanism for LARM hand
NASA Astrophysics Data System (ADS)
Yao, Shuangji; Ceccarelli, Marco; Carbone, Giuseppe; Zhan, Qiang; Lu, Zhen
2011-09-01
This paper aims to present general design considerations and optimality criteria for underactuated mechanisms in finger designs. Design issues related to grasping task of robotic fingers are discussed. Performance characteristics are outlined as referring to several aspects of finger mechanisms. Optimality criteria of the finger performances are formulated after careful analysis. A general design algorithm is summarized and formulated as a suitable multi-objective optimization problem. A numerical case of an underactuated robot finger design for Laboratory of Robotics and Mechatronics (LARM) hand is illustrated with the aim to show the practical feasibility of the proposed concepts and computations.
FORTE antenna element and release mechanism design
NASA Technical Reports Server (NTRS)
Rohweller, David J.; Butler, Thomas A.
1995-01-01
The Fast On-Orbit Recording of Transient Events (FORTE) satellite being built by Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) has as its most prominent feature a large deployable (11 m by 5 m) log periodic antenna to monitor emissions from electrical storms on the Earth. This paper describes the antenna and the design for the long elements and explains the dynamics of their deployment and the damping system employed. It also describes the unique paraffin-actuated reusable tie-down and release mechanism employed in the system.
Therapeutic Protein Aggregation: Mechanisms, Design, and Control
Roberts, Christopher J.
2014-01-01
While it is well known that proteins are only marginally stable in their folded states, it is often less well appreciated that most proteins are inherently aggregation-prone in their unfolded or partially unfolded states, and the resulting aggregates can be extremely stable and long-lived. For therapeutic proteins, aggregates are a significant risk factor for deleterious immune responses in patients, and can form via a variety of mechanisms. Controlling aggregation using a mechanistic approach may allow improved design of therapeutic protein stability, as a complement to existing design strategies that target desired protein structures and function. Recent results highlight the importance of balancing protein environment with the inherent aggregation propensities of polypeptide chains. PMID:24908382
Mechanical design of the SNS MEBT
Oshatz, D.; DeMello, A.; Doolittle, L.; Luft, P.; Staples, J.; Zachoszcz, A.
2001-06-11
The Lawrence Berkeley National Laboratory (LBNL) is presently designing and building the 2.5 MeV front end for the Spallation Neutron Source (SNS). The front end includes a medium-energy beam transport (MEBT) that carries the 2.5 MeV, 38 mA peak current, H⁻ beam from the radio frequency quadrupole (RFQ) to the drift tube linac (DTL) through a series of 14 electromagnetic quadrupoles, four rebuncher cavities, and a fast traveling wave chopping system. The beamline contains numerous diagnostic devices, including stripline beam position and phase monitors (BPM), toroid beam current monitors (BCM), and beam profile monitors. Components are mounted on three rafts that are separately supported and aligned. The large number of beam transport and diagnostic components in the 3.6 meter-long beamline necessitates an unusually compact mechanical design.
Using a Genetic Algorithm to Design Nuclear Electric Spacecraft
NASA Technical Reports Server (NTRS)
Pannell, William P.
2003-01-01
The basic approach to designing nuclear electric spacecraft is to generate a group of candidate designs, evaluate how "fit" the designs are, and carry the best designs forward to the next generation. Some designs are eliminated; others are randomly modified and carried forward.
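The generate/evaluate/select loop described above can be sketched as a minimal generational GA; the fitness function, population size, and mutation rate below are illustrative placeholders, not the spacecraft design code:

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=30, generations=60,
                   mutation_rate=0.02, seed=0):
    """Minimal generational GA: rank by fitness, keep the fitter half,
    refill the population with randomly mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                 # the rest are eliminated
        children = [[1 - b if rng.random() < mutation_rate else b for b in p]
                    for p in survivors]                  # randomly modified copies
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_search(sum)   # toy fitness: number of 1-bits in the string
```

Keeping the survivors unmutated (elitism) guarantees the best design never degrades between generations.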
Design specification for the whole-body algorithm
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.
1974-01-01
The necessary requirements and guidelines for the construction of a computer program of the whole-body algorithm are presented. The minimum subsystem models required to effectively simulate the total body response to stresses of interest are (1) cardiovascular (exercise/LBNP/tilt); (2) respiratory (Grodins' model); (3) thermoregulatory (Stolwijk's model); and (4) long-term circulatory fluid and electrolyte (Guyton's model). The whole-body algorithm must be capable of simulating the response to stresses from CO2 inhalation, hypoxia, thermal environment, exercise (sitting and supine), LBNP, and tilt (changing body angles in gravity).
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.
1986-01-01
The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS program as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.
NASA Astrophysics Data System (ADS)
Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.
2012-09-01
Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.
DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM
The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...
3D-design exploration of CNN algorithms
NASA Astrophysics Data System (ADS)
Spaanenburg, Lambert; Malki, Suleyman
2011-05-01
Multi-dimensional algorithms are hard to implement on classical platforms. Pipelining may exploit instruction-level parallelism, but not in the presence of simultaneous data; threads optimize only within the given restrictions. Tiled architectures do add a dimension to the solution space. With locally a large register store, data parallelism is handled, but only to a dimension. 3-D technologies are meant to add a dimension in the realization. Applied on the device level, it makes each computational node smaller. The interconnections become shorter and hence the network will be condensed. Such advantages will be easily lost at higher implementation levels unless 3-D technologies as multi-cores or chip stacking are also introduced. 3-D technologies scale in space, where (partial) reconfiguration scales in time. The optimal selection over the various implementation levels is algorithm dependent. The paper discusses such principles while applied on the scaling of cellular neural networks (CNN). It illustrates how stacking of reconfigurable chips supports many algorithmic requirements in a defect-insensitive manner. Further the paper explores the potential of chip stacking for multi-modal implementations in a reconfigurable approach to heterogeneous architectures for algorithm domains.
High pressure humidification columns: Design equations, algorithm, and computer code
Enick, R.M.; Klara, S.M.; Marano, J.J.
1994-07-01
This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed tower, humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple-stage model of a packed column which incorporates mass and energy balances, mass- and heat-transfer rate expressions, the Lewis relation, and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties, and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air, and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.
Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces.
Dangi, Siddharth; Orsborn, Amy L; Moorman, Helene G; Carmena, Jose M
2013-07-01
Closed-loop decoder adaptation (CLDA) is an emerging paradigm for achieving rapid performance improvements in online brain-machine interface (BMI) operation. Designing an effective CLDA algorithm requires making multiple important decisions, including choosing the timescale of adaptation, selecting which decoder parameters to adapt, crafting the corresponding update rules, and designing CLDA parameters. These design choices, combined with the specific settings of CLDA parameters, will directly affect the algorithm's ability to make decoder parameters converge to values that optimize performance. In this article, we present a general framework for the design and analysis of CLDA algorithms and support our results with experimental data of two monkeys performing a BMI task. First, we analyze and compare existing CLDA algorithms to highlight the importance of four critical design elements: the adaptation timescale, selective parameter adaptation, smooth decoder updates, and intuitive CLDA parameters. Second, we introduce mathematical convergence analysis using measures such as mean-squared error and KL divergence as a useful paradigm for evaluating the convergence properties of a prototype CLDA algorithm before experimental testing. By applying these measures to an existing CLDA algorithm, we demonstrate that our convergence analysis is an effective analytical tool that can ultimately inform and improve the design of CLDA algorithms. PMID:23607558
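The convergence measures named above are standard quantities; for 1-D Gaussian estimates of a decoder parameter the KL divergence even has a closed form. A small sketch of both measures (not the authors' implementation):

```python
import math

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """Closed-form KL(p || q) for 1-D Gaussians p and q, e.g. between
    successive estimates of a decoder parameter's distribution."""
    return (0.5 * math.log(var_q / var_p)
            + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q) - 0.5)

def mse(a, b):
    """Mean-squared error between two parameter vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

drift = gaussian_kl(1.0, 1.0, 0.0, 1.0)   # 0.5: a one-sigma mean shift
```

Tracking such a divergence between successive decoder updates, as the abstract describes, gives a convergence criterion that can be analyzed before any experimental testing.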
Use of Algorithm of Changes for Optimal Design of Heat Exchanger
NASA Astrophysics Data System (ADS)
Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.
2010-05-01
For economic reasons, the optimal design of heat exchangers is required. Heat exchanger design is usually an iterative process in which the design conditions, equipment geometries, and the heat transfer and friction factor correlations are all involved. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost; the process is cumbersome, and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] to heat exchanger design, with results that outperformed the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of a shell-and-tube heat exchanger [3]. This new method, based on the I Ching, was developed originally by the authors. In the algorithm, the hexagram operations of the I Ching are generalized to the binary-string case, and an iterative procedure that imitates I Ching inference is defined. Following [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (optimized) variables, and the cost of the heat exchanger was taken as the objective function. The case study shows that the algorithm of changes is comparable to the GA method: both can find the optimal solution in a short time. However, because no information is interchanged between binary strings, the algorithm of changes has an advantage over GA in parallel computation.
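The hexagram-inspired operators themselves are not reproduced here, but the shared ingredient of such binary-string methods, mapping a bit string onto a bounded real design variable such as the baffle spacing, can be sketched as follows (bounds are illustrative, not from the paper):

```python
def decode(bits, lower, upper):
    """Map a binary string onto a real design variable in [lower, upper]."""
    value = int("".join(str(b) for b in bits), 2)
    return lower + (upper - lower) * value / (2 ** len(bits) - 1)

# e.g. an 8-bit gene for baffle spacing between 0.05 m and 0.50 m (illustrative bounds)
spacing = decode([1, 0, 0, 0, 0, 0, 0, 0], 0.05, 0.50)
```

Each candidate design concatenates one such gene per variable, and the cost objective is evaluated on the decoded values.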
Mechanical cloak design by direct lattice transformation.
Bückmann, Tiemo; Kadic, Muamer; Schittny, Robert; Wegener, Martin
2015-04-21
Spatial coordinate transformations have helped simplifying mathematical issues and solving complex boundary-value problems in physics for decades already. More recently, material-parameter transformations have also become an intuitive and powerful engineering tool for designing inhomogeneous and anisotropic material distributions that perform wanted functions, e.g., invisibility cloaking. A necessary mathematical prerequisite for this approach to work is that the underlying equations are form invariant with respect to general coordinate transformations. Unfortunately, this condition is not fulfilled in elastic-solid mechanics for materials that can be described by ordinary elasticity tensors. Here, we introduce a different and simpler approach. We directly transform the lattice points of a 2D discrete lattice composed of a single constituent material, while keeping the properties of the elements connecting the lattice points the same. After showing that the approach works in various areas, we focus on elastic-solid mechanics. As a demanding example, we cloak a void in an effective elastic material with respect to static uniaxial compression. Corresponding numerical calculations and experiments on polymer structures made by 3D printing are presented. The cloaking quality is quantified by comparing the average relative SD of the strain vectors outside of the cloaked void with respect to the homogeneous reference lattice. Theory and experiment agree and exhibit very good cloaking performance. PMID:25848021
LANSCE wire scanning diagnostics device mechanical design
Rodriguez Esparza, Sergio
2010-01-01
The Los Alamos Neutron Science Center (LANSCE) is one of the major experimental science facilities at the Los Alamos National Laboratory (LANL). The core of LANSCE's work lies in the operation of a powerful linear accelerator, which accelerates protons up to 84% of the speed of light. These protons are used for a variety of purposes, including materials testing, weapons research, and isotope production. To assist in guiding the proton beam, a series of over one hundred wire scanners is used to measure the beam profile at various locations along the half-mile length of the particle accelerator. A wire scanner is an electro-mechanical device that moves a set of wires through a particle beam and measures the secondary emissions from the resulting beam-wire interaction to obtain beam intensity information. When supplemented with data from a position sensor, this information is used to determine the cross-sectional profile of the beam. This measurement allows beam operators to adjust parameters such as acceleration, beam steering, and focus to ensure that the beam reaches its destination as effectively as possible. Some of the current wire scanners are nearly forty years old and are becoming obsolete. The problem with the current wire scanners lies in the difficulty of maintenance and their reliability: the designs vary, making it difficult to keep spare parts that work on all of them, and many of the components are custom built or use outdated technology no longer in production.
"Basic MR Relaxation Mechanisms & Contrast Agent Design"
De León-Rodríguez, Luis M.; Martins, André F.; Pinho, Marco; Rofsky, Neil; Sherry, A. Dean
2015-01-01
The diagnostic capabilities of magnetic resonance imaging (MRI) have undergone continuous and substantial evolution by virtue of hardware and software innovations and the development and implementation of exogenous contrast media. Thirty years since the first MRI contrast agent was approved for clinical use, a reliance on MR contrast media persists largely to improve image quality with higher contrast resolution and to provide additional functional characterization of normal and abnormal tissues. Further development of MR contrast media is an important component in the quest for continued augmentation of diagnostic capabilities. In this review we will detail the many important considerations when pursuing the design and use of MR contrast media. We will offer a perspective on the importance of chemical stability, particularly kinetic stability, and how this influences one's thinking about the safety of metal-ligand based contrast agents. We will discuss the mechanisms involved in magnetic resonance relaxation in the context of probe design strategies. A brief description of currently available contrast agents will be accompanied by an in-depth discussion that highlights promising MRI contrast agents in development for future clinical and research applications. Our intention is to give a diverse audience an improved understanding of the factors involved in developing new types of safe and highly efficient MR contrast agents and, at the same time, provide an appreciation of the insights into physiology and disease that newer types of responsive agents can provide. PMID:25975847
ERIC Educational Resources Information Center
Tran, Huu-Khoa; Chiou, Juing-Shian; Peng, Shou-Tao
2016-01-01
In this paper, the feasibility of a Genetic Algorithm Optimization (GAO) education-software-based Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is investigated. The generated flight trajectories integrate the Scaling Factor (SF) fuzzy controller gains optimized by the GAO algorithm. The…
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
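The fuzzy set theory penalty technique itself is not reproduced here; as a simpler stand-in, a static penalty that augments the objective with weighted constraint violations can be sketched as:

```python
def penalized(objective, constraints, weight=1000.0):
    """Augment an objective with a static penalty on violations of
    constraints expressed as g(x) <= 0."""
    def f(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + weight * violation
    return f

# minimise x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = penalized(lambda x: x * x, [lambda x: 1.0 - x])
```

Any of the meta-heuristics compared in the paper can then minimise the wrapped function `f` without handling constraints explicitly.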
Evaluation of a segmentation algorithm designed for an FPGA implementation
NASA Astrophysics Data System (ADS)
Schwenk, Kurt; Schönermark, Maria; Huber, Felix
2013-10-01
The present work is set in the context of real-time on-board image evaluation of optical satellite data. With on-board image evaluation, more useful data can be acquired, the time to obtain requested information can be decreased, and new real-time applications become possible. Because of its relatively high processing power combined with low power consumption, Field Programmable Gate Array (FPGA) technology was chosen as an adequate hardware platform for image processing tasks. One fundamental part of image evaluation is image segmentation: a basic tool to extract spatial image information, which is very important for many applications such as object detection. Therefore a special segmentation algorithm exploiting the advantages of FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation evaluation is a difficult task. The most common way to evaluate the performance of a segmentation method is still subjective evaluation, in which human experts determine the quality of a segmentation. This approach does not meet our needs: the evaluation process has to provide a reasonable quality assessment and should be objective, easy to interpret and simple to execute. To meet these requirements, a so-called Segmentation Accuracy Equality norm (SA EQ) was created, which compares the difference of two segmentation results. It can be shown that this norm is suitable as a first quality measure. Due to its objectivity and simplicity, the algorithm has been tested on a specially chosen synthetic test model. In this work the most important results of the quality assessment are presented.
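The abstract does not give the SA EQ norm's exact definition, so the following is only a toy segmentation-comparison measure in the same spirit: it scores the disagreement between two label images in a way that ignores the arbitrary label names, by comparing whether each pixel pair lands in the same segment in both results.

```python
from itertools import combinations

def segmentation_distance(a, b):
    """Toy segmentation-comparison measure (NOT the paper's SA EQ norm,
    whose exact definition the abstract does not give).

    Two label images agree on a pixel pair if both place the pair in the
    same segment or both place it in different segments; the returned value
    is the fraction of disagreeing pairs (0.0 = identical partitions,
    label names ignored).  O(n^2) in the pixel count, so for small images.
    """
    pa = [p for row in a for p in row]
    pb = [p for row in b for p in row]
    assert len(pa) == len(pb)
    pairs = list(combinations(range(len(pa)), 2))
    disagree = sum((pa[i] == pa[j]) != (pb[i] == pb[j]) for i, j in pairs)
    return disagree / len(pairs)

# Identical partitions under a label permutation: distance 0.
seg1 = [[1, 1], [2, 2]]
seg2 = [[7, 7], [3, 3]]
```

Label-permutation invariance is the essential property any objective segmentation-comparison norm needs, since segment IDs carry no meaning.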
Design of broadband omnidirectional antireflection coatings using ant colony algorithm.
Guo, X; Zhou, H Y; Guo, S; Luan, X X; Cui, W K; Ma, Y F; Shi, L
2014-06-30
An optimization method based on the ant colony algorithm (ACA) is described for designing an antireflection (AR) coating system with broadband omnidirectional characteristics for silicon solar cells, incorporating the solar spectrum (AM1.5 radiation). To our knowledge, this is the first use of the ACA method to optimize an AR coating system. For the wavelength range from 400 nm to 1100 nm, the optimized three-layer AR coating system provides an average reflectance of 2.98% for incident angles from 0° to 80° and 6.56% for incident angles from 0° to 90°. PMID:24978076
An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1999-01-01
The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.
A general theory known as the WAste Reduction (WASR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...
NASA Astrophysics Data System (ADS)
Bigdeli, Kasra; Hare, Warren; Tesfamariam, Solomon
2012-04-01
Passive dampers can be used to connect two adjacent structures in order to mitigate earthquake-induced pounding damage. Theoretical and experimental studies have confirmed the efficiency and applicability of various connecting devices, such as viscous dampers and MR dampers. However, few papers have employed optimization methods to find the optimal mechanical properties of the dampers, and in most papers the dampers are assumed to be uniform. In this study, we optimized the damping coefficients of viscous dampers for the general case of non-uniform damping coefficients. Since the derivatives of the objective function with respect to the damping coefficients are not known, a heuristic search method, the genetic algorithm, is employed to optimize the damping coefficients. Each structure is modeled as a multi-degree-of-freedom dynamic system consisting of lumped masses, linear springs and dampers. In order to examine the dynamic behavior of the structures, simulations in the frequency domain are carried out. A pseudo-excitation based on the Kanai-Tajimi spectrum is used as ground acceleration. The optimization results show that relaxing the uniform-damper-coefficient assumption generates significant improvement in coupling effectiveness. To investigate the efficiency of the genetic algorithm, its solution quality and solution time are compared with those of the Nelder-Mead algorithm.
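A derivative-free GA of the kind used above can be sketched as follows. This is a generic real-coded GA, not the authors' implementation: the structural frequency-domain model is replaced by a toy quadratic stand-in with a hypothetical optimum, and tournament selection, blend crossover and Gaussian mutation are one common operator set, not necessarily the paper's.

```python
import random

def genetic_minimize(f, dim, lo, hi, pop_size=30, gens=100,
                     cx_rate=0.9, mut_rate=0.1, seed=1):
    """Real-coded GA sketch for derivative-free problems such as choosing
    non-uniform damper coefficients.  Tournament selection, blend crossover
    and Gaussian mutation, with elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=f)]                      # elitism: keep the best
        while len(nxt) < pop_size:
            # Binary tournament selection of two parents.
            p1 = min(rng.sample(pop, 2), key=f)
            p2 = min(rng.sample(pop, 2), key=f)
            child = list(p1)
            if rng.random() < cx_rate:               # blend crossover
                a = rng.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            if rng.random() < mut_rate:              # Gaussian mutation
                i = rng.randrange(dim)
                child[i] = min(max(child[i] + rng.gauss(0, 0.1 * (hi - lo)),
                                   lo), hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

# Toy stand-in for peak structural response as a function of three damper
# coefficients (hypothetical optimum at c = [2, 3, 4]; illustrative only).
response = lambda c: sum((ci - t) ** 2 for ci, t in zip(c, [2.0, 3.0, 4.0]))
best = genetic_minimize(response, dim=3, lo=0.0, hi=10.0)
```

Because the GA only ever calls `f`, the same loop works whether `f` is this toy quadratic or a full frequency-domain structural simulation.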
A hybrid algorithm for transonic airfoil and wing design
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Smith, Leigh A.
1987-01-01
The present method for the design of transonic airfoils and wings employs a predictor/corrector approach in which an analysis code calculates the flowfield for an initial geometry, then modifies it on the basis of the difference between calculated and target pressures. This allows the design method to be straightforwardly coupled with any existing analysis code, as presently undertaken with several two- and three-dimensional potential flow codes. The results obtained indicate that the method is robust and accurate, even in the cases of airfoils with strongly supercritical flow and shocks. The design codes are noted to require computational resources typical of current pure-inverse methods.
The design of flux-corrected transport (FCT) algorithms on structured grids
NASA Astrophysics Data System (ADS)
Zalesak, Steven T.
2005-12-01
A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow field; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order algorithms, in flux form, in the various regions of the flow field. In this dissertation, we describe a set of design principles that significantly enhance the accuracy and robustness of FCT algorithms by enhancing the accuracy and robustness of each of the three components individually. These principles include the use of very high order spatial operators in the design of the high order fluxes, the use of non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. We show via standard test problems the kind of algorithm performance one can expect if these design principles are adhered to. We give examples of applications of these design principles in several areas of physics. Finally, we compare the performance of these enhanced algorithms with that of other recent front-capturing methods.
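The three FCT components named above can be shown in a minimal 1D periodic advection step. This is a sketch, not the paper's full design: upwind differencing stands in as the low-order scheme, Lax-Wendroff as the high-order scheme, and Zalesak's limiter caps the antidiffusive fluxes so no cell exceeds its local extrema.

```python
def fct_advect_step(q, c):
    """One periodic 1D advection step (speed > 0, Courant number 0 < c < 1)
    combining an upwind low-order flux, a Lax-Wendroff high-order flux, and
    Zalesak's flux limiter.  Fluxes are pre-multiplied by dt/dx."""
    n = len(q)
    ip = lambda i: (i + 1) % n
    f_low = [c * q[i] for i in range(n)]                        # upwind
    f_high = [c * (0.5 * (q[i] + q[ip(i)]) - 0.5 * c * (q[ip(i)] - q[i]))
              for i in range(n)]                                # Lax-Wendroff
    a = [f_high[i] - f_low[i] for i in range(n)]                # antidiffusive
    # Low-order ("transported-diffused") solution: monotone but smeared.
    qtd = [q[i] - (f_low[i] - f_low[i - 1]) for i in range(n)]
    # Zalesak limiter: bound each cell by local extrema of q and qtd.
    r_plus, r_minus = [0.0] * n, [0.0] * n
    for i in range(n):
        qmax = max(q[i - 1], q[i], q[ip(i)], qtd[i - 1], qtd[i], qtd[ip(i)])
        qmin = min(q[i - 1], q[i], q[ip(i)], qtd[i - 1], qtd[i], qtd[ip(i)])
        p_plus = max(0.0, a[i - 1]) - min(0.0, a[i])    # antidiffusion into i
        p_minus = max(0.0, a[i]) - min(0.0, a[i - 1])   # antidiffusion out of i
        r_plus[i] = min(1.0, (qmax - qtd[i]) / p_plus) if p_plus > 0 else 0.0
        r_minus[i] = min(1.0, (qtd[i] - qmin) / p_minus) if p_minus > 0 else 0.0
    lim = [min(r_plus[ip(i)], r_minus[i]) if a[i] >= 0
           else min(r_plus[i], r_minus[ip(i)]) for i in range(n)]
    return [qtd[i] - (lim[i] * a[i] - lim[i - 1] * a[i - 1]) for i in range(n)]

# Advect a square wave; FCT keeps it bounded in [0, 1] and conserves mass.
q = [1.0 if 8 <= i < 16 else 0.0 for i in range(32)]
for _ in range(20):
    q = fct_advect_step(q, 0.5)
```

The design principles in the abstract (higher-order spatial operators, non-clipping limiters, constraint-variable choice, failsafe limiting) all refine the limiter stage shown here without changing the overall three-component structure.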
General Structure Design for Fast Image Processing Algorithms Based upon FPGA DSP Slice
NASA Astrophysics Data System (ADS)
Wasfy, Wael; Zheng, Hong
Our aim in this paper is to increase the speed and accuracy of fast low-level 3x3 image processing algorithms that compute image intensity with different kernels but share the same parallel calculation method. The FPGA is one of the fastest embedded platforms for implementing fast image processing algorithms; by using the DSP slice module inside the FPGA, we exploit its speed, accuracy, higher number of bits in calculations, and flexibility in the equations it can compute. Using a higher number of bits during algorithm calculations yields higher accuracy than running the same calculations with fewer bits, and keeping FPGA resource usage as low as the algorithm's calculations allow is another important goal. The recommended design therefore uses as few DSP slices as possible, while benefiting from the DSP slice's calculation accuracy: 48-bit addition and 18 x 18 bit multiplication. To validate the design, the Gaussian filter and Sobel-x edge detector image processing algorithms were implemented. We also compare against another design, described later in this paper, that uses at most 12-bit accuracy in addition and multiplication, to demonstrate the improvements in calculation accuracy and speed.
Vision-based vehicle detection and tracking algorithm design
NASA Astrophysics Data System (ADS)
Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi
2009-12-01
The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
Internal circulating fluidized bed incineration system and design algorithm.
Tian, W D; Wei, X L; Li, J; Sheng, H Z
2001-04-01
The internal circulating fluidized bed (ICFB) system is characterized by fast combustion, low emissions, uniform bed temperature and a controllable combustion process. It is a novel clean combustion system, especially for low-grade fuels such as municipal solid waste (MSW). Experimental ICFB systems with and without combustion were designed and set up in this work. A series of experiments was carried out to further understand the combustion process and the characteristics of several design parameters for MSW. Based on the results, a design routine for the ICFB system was suggested for the calculation of energy balance, airflow rate, heat transfer rate, and geometry arrangement. A test system with an ICFB combustor has been set up, and the test results show that the design of the ICFB system is successful. PMID:11590739
Designer spin systems via inverse statistical mechanics
NASA Astrophysics Data System (ADS)
DiStasio, Robert A., Jr.; Marcotte, Étienne; Car, Roberto; Stillinger, Frank H.; Torquato, Salvatore
2013-10-01
nature of the target radial spin-spin correlation function. In the future, it will be interesting to explore whether such inverse statistical-mechanical techniques could be employed to design materials with desired spin properties.
Sequence-Specific Copolymer Compatibilizers designed via a Genetic Algorithm
NASA Astrophysics Data System (ADS)
Meenakshisundaram, Venkatesh; Patra, Tarak; Hung, Jui-Hsiang; Simmons, David
For several decades, block copolymers have been employed as surfactants to reduce interfacial energy for applications from emulsification to surface adhesion. While the simplest approach employs symmetric diblocks, studies have examined asymmetric diblocks, multiblock copolymers, gradient copolymers, and copolymer-grafted nanoparticles. However, there exists no established approach to determining the optimal copolymer compatibilizer sequence for a given application. Here we employ molecular dynamics simulations within a genetic algorithm to identify copolymer surfactant sequences yielding maximum reductions in the interfacial energy of model immiscible polymers. The optimal copolymer sequence depends significantly on surfactant concentration. Most surprisingly, at high surface concentrations, where the surfactant achieves the greatest interfacial energy reduction, specific non-periodic sequences are found to significantly outperform any regularly blocky sequence. This emergence of polymer sequence-specificity within a non-sequenced environment adds to a recent body of work suggesting that specific sequence may have the potential to play a greater role in polymer properties than previously understood. We acknowledge the W. M. Keck Foundation for financial support of this research.
Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Irwin, Ryan W.; Tinker, Michael L.
2005-02-01
Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.
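The penalty functions with scaling ratios mentioned above are a standard way to let a GA handle constraints. The sketch below is a generic exterior penalty wrapper, not the paper's exact scheme (which the abstract does not specify); the `scale` parameter plays the role of a scaling ratio, and the example constraint is illustrative.

```python
def penalized(objective, constraints, scale=10.0):
    """Exterior penalty-function sketch: each violated inequality
    g_i(x) <= 0 adds scale * violation**2 to the objective, so a GA can
    treat the constrained problem as unconstrained.  Feasible points are
    left untouched."""
    def f(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return objective(x) + scale * violation
    return f

# Minimise x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
f = penalized(lambda x: x * x, [lambda x: 1.0 - x], scale=100.0)
```

Tuning `scale` trades constraint enforcement against distortion of the fitness landscape, which is why the paper investigates scaling ratios for computational efficiency.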
Overlay measurement accuracy enhancement by design and algorithm
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Lee, Byongseog; Han, Sangjun; Kim, Myoungsoo; Kwon, Wontaik; Park, Sungki; Choi, DongSub; Lee, Dohwa; Jeon, Sanghuck; Lee, Kangsan; Itzkovich, Tal; Amir, Nuriel; Volkovich, Roie; Herzel, Eitan; Wagner, Mark; El Kodadi, Mohamed
2015-03-01
Advanced design nodes require more complex lithography techniques, such as double patterning, as well as advanced materials like hard masks. This poses new challenges for overlay metrology and process control. In this publication several steps are taken to face these challenges. Accurate overlay metrology solutions are demonstrated for advanced memory devices.
A sequential implicit algorithm of chemo-thermo-poro-mechanics for fractured geothermal reservoirs
NASA Astrophysics Data System (ADS)
Kim, Jihoon; Sonnenthal, Eric; Rutqvist, Jonny
2015-03-01
We describe the development of a sequential implicit formulation and algorithm for coupling fluid-heat flow, reactive transport, and geomechanics. We consider changes in pore volume from dissolution caused by chemical reactions, in addition to coupled flow and geomechanics. Moreover, we use the constitutive equations of the multiple-porosity model for fractured geothermal reservoirs, employing failure-dependent permeability dynamically and updating it every time step. The proposed sequential algorithm is an extension of the fixed-stress split method to chemo-thermo-poro-mechanics, facilitating the use of existing flow-reactive transport and geomechanics simulators. We first validate a simulator that employs the proposed sequential algorithm, matching the numerical solutions with analytical solutions such as Terzaghi's and Mandel's problems for poro-mechanics and with reference solutions of chemo-poro-mechanics and chemo-thermo-poro-mechanics for 1D elastic problems. We also perform a convergence test; the proposed algorithm shows fast convergence when full iteration is taken, and first-order accuracy in time for the staggered approach. We then investigate two test cases, a 2D multiple-porosity elastic problem and a 3D single-porosity elastoplastic problem, and explore the differences in coupled flow and geomechanics with and without reactive transport. We find that the change in pore volume induced by mineral dissolution can impact fluid pressure and failure status, followed by significant changes in permeability and flow variables, showing strong interrelations between flow-reactive transport and geomechanics.
On Polymorphic Circuits and Their Design Using Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper introduces the concept of polymorphic electronics (polytronics), referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead, the change comes from modifications in the characteristics of the devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality, thus polytronics may find uses in intelligence/security applications.
A Computer Environment for Beginners' Learning of Sorting Algorithms: Design and Pilot Evaluation
ERIC Educational Resources Information Center
Kordaki, M.; Miatidis, M.; Kapsampelis, G.
2008-01-01
This paper presents the design, features and pilot evaluation study of a web-based environment--the SORTING environment--for the learning of sorting algorithms by secondary level education students. The design of this environment is based on modeling methodology, taking into account modern constructivist and social theories of learning while at…
Algorithm Design on Network Game of Chinese Chess
NASA Astrophysics Data System (ADS)
Xianmei, Fang
This paper describes the current state of domestic network games and, against that background, investigates a multithreaded TCP client/server design for an online Chinese chess game. Building on basic Java knowledge, it studies object-oriented programming with Java Swing and methods for writing network programs, including the use of sockets under Java Swing, and covers the basic process of writing a Java program and of building a networked application. The central issue is how the correspondence between a pair of machines, the client/server (C/S) system, is carried out. From this starting point, we present the data structures and basic algorithms of the network game of Chinese chess, and how to design and implement the server and client of the program. The online chess game design can be divided into the following modules: a server module, a client module, and a control module.
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
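The Hockney-style model generalized in the paper has a simple closed form: transferring an n-byte message costs a fixed startup latency plus a bandwidth-proportional term. The sketch below illustrates that form with illustrative parameter values, not figures from the paper.

```python
def hockney_time(n_bytes, latency_s, bandwidth_Bps):
    """Hockney-style linear communication-cost model: time to move an
    n-byte message is startup latency plus n / bandwidth."""
    return latency_s + n_bytes / bandwidth_Bps

def n_half(latency_s, bandwidth_Bps):
    """Message size at which half the asymptotic bandwidth is achieved:
    the size where startup time equals transfer time."""
    return latency_s * bandwidth_Bps

# Illustrative: 1 MB message, 1 us latency, 10 GB/s bandwidth.
t = hockney_time(1_000_000, latency_s=1e-6, bandwidth_Bps=1e10)
```

The n-half size is the useful design quantity: algorithms whose messages are much smaller than it are latency-bound, which is exactly the regime where communication complexity dominates parallel numerical algorithms.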
A novel method to design S-box based on chaotic map and genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, Yong; Wong, Kwok-Wo; Li, Changbing; Li, Yang
2012-01-01
The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing an S-box is transformed into a Traveling Salesman Problem, and a method for designing S-boxes based on chaos and a genetic algorithm is proposed. Since the proposed method makes full use of the traits of the chaotic map and the evolution process, a stronger S-box is obtained. The results of performance tests show that the presented S-box has good cryptographic properties, which justifies that the proposed algorithm is effective in generating strong S-boxes.
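One common chaos-based construction step (not necessarily the paper's exact one, which additionally refines candidates with a genetic algorithm via the TSP formulation) is to iterate a chaotic map and rank the trajectory values, yielding a bijective S-box. A minimal sketch with the logistic map:

```python
def chaotic_sbox(x0=0.41, r=3.99, size=256):
    """Generate a candidate S-box by iterating the logistic map
    x -> r*x*(1-x) and ranking the trajectory: the index order of the
    sorted trajectory is a permutation of 0..size-1, so the S-box is
    bijective by construction.  Only the chaos-based seeding step is
    sketched; the GA refinement is not shown."""
    x, traj = x0, []
    for _ in range(size):
        x = r * x * (1.0 - x)
        traj.append(x)
    # argsort: rank positions of the chaotic trajectory form the S-box.
    return sorted(range(size), key=lambda i: traj[i])

sbox = chaotic_sbox()
```

Bijectivity is the first cryptographic requirement on an S-box; the remaining properties (nonlinearity, differential uniformity) are what the GA search in the paper optimizes.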
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm
Chang, Wei-Der
2015-01-01
This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm, here called the modified PSO (MPSO), is used to solve this kind of filter design problem: an additional adjusting factor is introduced into the velocity-updating formula of the algorithm in order to improve its searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, which is regarded as a particle of the algorithm. The MPSO, with its modified velocity formula, drives all particles toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated and compared against the general PSO algorithm. The obtained results show that the MPSO is superior to the general PSO for the phase response design of recursive all-pass digital filters. PMID:26366168
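A PSO with an extra adjusting term in the velocity update can be sketched as follows. The placement of the extra factor is a hypothetical choice (the abstract says a factor is added but not where); here `beta` nudges each particle toward the swarm mean, and the quadratic test function stands in for the filter-design objective.

```python
import random

def mpso(f, dim, lo, hi, swarm=20, iters=150, w=0.7, c1=1.5, c2=1.5,
         beta=0.1, seed=2):
    """PSO with an extra adjusting factor `beta` in the velocity update.
    The beta term's placement is an illustrative assumption, not the
    paper's exact formula."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [list(p) for p in pos]                  # personal bests
    gbest = min(pbest, key=f)                       # global best
    for _ in range(iters):
        mean = [sum(p[d] for p in pos) / swarm for d in range(dim)]
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d])
                             + beta * rng.random() * (mean[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = list(pos[i])
        gbest = min(pbest, key=f)
    return gbest

best = mpso(lambda x: sum(v * v for v in x), dim=2, lo=-5.0, hi=5.0)
```

For the actual filter problem, the particle vector would hold the all-pass coefficients and `f` would measure the deviation of the resulting phase response from the target.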
Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai
2016-01-01
Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua
2011-01-01
A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental-angle samples and the number of accelerometer incremental-velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental-angle and accelerometer incremental-velocity samples, improving system accuracy. Then, in order to implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. The parallel strapdown algorithm on the FPGA platform greatly decreases the execution time of the algorithm, meeting the system's real-time and high-precision requirements in highly dynamic environments, relative to the existing implementation on a DSP platform. PMID:22164058
New bionic navigation algorithm based on the visual navigation mechanism of bees
NASA Astrophysics Data System (ADS)
Huang, Yufeng; Liu, Yi; Liu, Jianguo
2015-04-01
Based on research into the visual navigation mechanisms of flying insects, especially honeybees, a novel navigation algorithm integrating entropy flow with a Kalman filter is introduced in this paper. The concepts of the entropy image and entropy flow are also introduced; they characterize topographic features and measure changes of the image, respectively. To characterize the texture features and spatial distribution of an image, a new concept of the contrast entropy image is presented. Applying the contrast entropy image to the navigation algorithm and comparing its performance with simulation results for the intensity entropy image leads to the conclusion that the contrast entropy image performs better and is more robust in navigation.
An Annealing Algorithm for Designing Ligands from Receptor Structures.
NASA Astrophysics Data System (ADS)
Zielinski, Peter J.
DE NOVO, a simulated annealing method for designing ligands, is described. At a given temperature, ligand fragments are randomly selected and randomly placed within the given receptor cavity, often replacing existing ligand fragments. For each new ligand fragment combination, the bonded, nonbonded, polarization and solvation energies of the new ligand-receptor system are compared to those of the previous system. Acceptance or rejection of the new system is decided using the Boltzmann distribution. Thus, energetically unfavorable fragment switches are sometimes accepted, sacrificing immediate energy gains in the interest of finding the system with the globally minimum energy. As the temperature is lowered, the rate of unfavorable switches decreases and energetically favorable combinations become difficult to change. The process is halted when the frequency of switches becomes too small. As a test of the method, DE NOVO predicted positions of important ligand fragments for neuraminidase that are in accord with the natural ligand, sialic acid.
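The annealing loop described in the abstract (random fragment swaps, Boltzmann acceptance, cooling) can be sketched generically. This is a skeleton in the spirit of the method, not the authors' program: the real scoring uses bonded, nonbonded, polarization and solvation energies, whereas here `energy` is a toy function over hypothetical fragment names.

```python
import math, random

def anneal(energy, fragments, n_sites, t0=5.0, cooling=0.95,
           steps=2000, seed=3):
    """Simulated-annealing skeleton: a ligand is a tuple of fragment
    choices, a random site's fragment is swapped each step, and uphill
    moves are accepted with the Boltzmann probability exp(-dE/T)."""
    rng = random.Random(seed)
    state = [rng.choice(fragments) for _ in range(n_sites)]
    e, t = energy(state), t0
    for _ in range(steps):
        cand = list(state)
        cand[rng.randrange(n_sites)] = rng.choice(fragments)  # fragment swap
        de = energy(cand) - e
        # Downhill always accepted; uphill with Boltzmann probability.
        if de <= 0 or rng.random() < math.exp(-de / t):
            state, e = cand, e + de
        t *= cooling  # lower the temperature; unfavourable swaps become rare
    return state, e

# Toy energy (illustrative fragment names): prefers "OH" at site 0
# and "CH3" elsewhere; zero energy at the target combination.
target = ["OH", "CH3", "CH3"]
e_toy = lambda s: sum(a != b for a, b in zip(s, target))
best, e_best = anneal(e_toy, ["OH", "CH3", "NH2"], n_sites=3)
```

Halting when the swap frequency becomes small, as the abstract describes, corresponds to stopping once the cooled chain no longer accepts moves.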
Evolutionary algorithm for the neutrino factory front end design
Poklonskiy, Alexey A.; Neuffer, David (Fermilab)
2009-01-01
The Neutrino Factory is an important tool in the long-term neutrino physics program. Substantial international effort is put into designing this facility in order to achieve the desired performance within the allotted budget. This accelerator is a secondary-beam machine: neutrinos are produced by the decay of muons. Muons, in turn, are produced by the decay of pions, which are produced by striking the target with a beam of accelerated protons. Due to the physics of this process, extra conditioning of the pion beam coming from the target is needed in order to effectively perform subsequent acceleration. The subsystem of the Neutrino Factory that performs this conditioning is called the Front End; its main performance characteristic is the number of muons produced.
NASA Astrophysics Data System (ADS)
Lin, Jeng-Wen; Shen, Pu Fun; Wen, Hao-Ping
2015-10-01
The application of a repetitive control mechanism in a mechanical control system has been a topic of investigation. The fundamental purpose of repetitive control is to eliminate disturbances in a mechanical control system. This paper presents two different repetitive control laws using individual types of basis-function feedback and their combinations. These laws adjust the command given to a feedback control system to eliminate tracking errors, which generally result from periodic disturbance. Periodic errors can be reduced through linear basis functions using regression and a genetic algorithm. The results illustrate that repetitive control is the most effective method for eliminating disturbances. When the data are stabilized, the tracking error converges to approximately 10^-14, effectively the optimal solution, verifying that the proposed regression and genetic algorithm can satisfactorily reduce periodic errors.
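The regression step over a linear basis can be sketched in miniature. This assumes a sinusoidal basis and a known disturbance period, which are illustrative choices, not the paper's specific basis functions; the fitted periodic component is what a repetitive controller would subtract from the command.

```python
import numpy as np

def fit_periodic_error(t, err, period, n_harmonics=3):
    """Least-squares fit of a tracking-error signal to a small sinusoidal
    basis; returns the predicted periodic component."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols += [np.sin(w * t), np.cos(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, err, rcond=None)
    return A @ coef

# Synthetic periodic disturbance with first and second harmonics:
t = np.linspace(0.0, 2.0, 400)
err = 0.5 * np.sin(2 * np.pi * t) + 0.2 * np.cos(4 * np.pi * t)
residual = err - fit_periodic_error(t, err, period=1.0)
```

Because the synthetic error lies inside the basis, the residual drops to machine precision; a real tracking error would leave a nonperiodic remainder.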
The Design of Flux-Corrected Transport (FCT) Algorithms for Structured Grids
NASA Astrophysics Data System (ADS)
Zalesak, Steven T.
A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. This chapter confines itself to the design of FCT algorithms for structured grids, using a finite volume formalism, for this is the area with which the present author is most familiar. The reader will find excellent material on the design of FCT algorithms for unstructured grids, using both finite volume and finite element formalisms, in the chapters by Professors Löhner, Baum, Kuzmin, Turek, and Möller in the present volume.
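The three components enumerated above can be made concrete with a generic textbook sketch for 1D linear advection on a periodic grid: donor-cell as the low-order scheme, Lax-Wendroff as the high-order scheme, and a Zalesak-style limiter weighting the antidiffusive fluxes. This is a baseline illustration, not the chapter's optimized operators.

```python
import numpy as np

def fct_step(u, c):
    """One FCT step for u_t + v u_x = 0 with Courant number c = v*dt/dx
    (0 < c <= 1), periodic boundaries."""
    up1 = np.roll(u, -1)                             # u[i+1]
    f_low = c * u                                    # donor-cell flux at i+1/2
    f_high = c * (u + 0.5 * (1 - c) * (up1 - u))     # Lax-Wendroff flux
    a = f_high - f_low                               # antidiffusive flux
    u_td = u - (f_low - np.roll(f_low, 1))           # low-order transported sol.

    # Component 3, the flux limiter: local bounds from both u and u_td.
    ub = np.maximum(u, u_td)
    u_max = np.maximum(np.maximum(np.roll(ub, 1), ub), np.roll(ub, -1))
    lb = np.minimum(u, u_td)
    u_min = np.minimum(np.minimum(np.roll(lb, 1), lb), np.roll(lb, -1))

    am1 = np.roll(a, 1)                              # a at i-1/2
    p_plus = np.maximum(0, am1) - np.minimum(0, a)   # total inflow into cell i
    p_minus = np.maximum(0, a) - np.minimum(0, am1)  # total outflow from cell i
    r_plus = np.where(p_plus > 0,
                      np.minimum(1, (u_max - u_td) / np.maximum(p_plus, 1e-300)), 0)
    r_minus = np.where(p_minus > 0,
                       np.minimum(1, (u_td - u_min) / np.maximum(p_minus, 1e-300)), 0)
    coef = np.where(a >= 0,
                    np.minimum(np.roll(r_plus, -1), r_minus),
                    np.minimum(r_plus, np.roll(r_minus, -1)))
    ac = coef * a                                    # limited antidiffusive flux
    return u_td - (ac - np.roll(ac, 1))

# Advect a square wave: FCT keeps it conservative and free of new extrema.
u = np.where((np.arange(100) > 20) & (np.arange(100) < 40), 1.0, 0.0)
for _ in range(50):
    u = fct_step(u, 0.5)
```

The limited result stays within the initial bounds [0, 1] and conserves the integral, which is exactly the property the limiter component exists to enforce.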
Small, high pressure ratio compressor: Aerodynamic and mechanical design
NASA Technical Reports Server (NTRS)
Bryce, C. A.; Erwin, J. R.; Perrone, G. L.; Nelson, E. L.; Tu, R. K.; Bosco, A.
1973-01-01
The Small, High-Pressure-Ratio Compressor Program was directed toward the analysis, design, and fabrication of a centrifugal compressor providing a 6:1 pressure ratio and an airflow rate of 2.0 pounds per second. The program consists of preliminary design, detailed aerodynamic design, mechanical design, and mechanical acceptance tests. The preliminary design evaluated radial- and backward-curved blades, tandem-bladed impellers, impeller- and diffuser-passage boundary-layer control, and vane, pipe, and multiple-stage diffusers. Based on this evaluation, a configuration was selected for detailed aerodynamic and mechanical design. A mechanical acceptance test was performed to demonstrate that the mechanical design objectives of the research package were met.
Epitope prediction algorithms for peptide-based vaccine design.
Florea, Liliana; Halldórsson, Bjarni; Kohlbacher, Oliver; Schwartz, Russell; Hoffman, Stephen; Istrail, Sorin
2003-01-01
Peptide-based vaccines, in which small peptides derived from target proteins (epitopes) are used to provoke an immune reaction, have attracted considerable attention recently as a potential means both of treating infectious diseases and of promoting the destruction of cancerous cells by a patient's own immune system. With the availability of large sequence databases and computers fast enough for rapid processing of large numbers of peptides, computer-aided design of peptide-based vaccines has emerged as a promising approach to screening among billions of possible immune-active peptides to find those likely to provoke an immune response to a particular cell type. In this paper, we describe the development of three novel classes of methods for the prediction problem. We present a quadratic programming approach that can be trained on quantitative as well as qualitative data. The second method uses linear programming to counteract the fact that our training data contains mostly positive examples. The third class of methods uses sequence profiles obtained by clustering known epitopes to score candidate peptides. By integrating these methods, using a simple voting heuristic, we achieve improved accuracy over the state of the art. PMID:16826643
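The third class of methods, profile scoring, can be sketched concretely: an alignment of known epitopes is turned into a position-specific scoring matrix (PSSM), and candidate peptides are ranked by their log-odds score. The epitope strings and uniform background below are toy data, not the paper's clustered profiles.

```python
import math
from collections import Counter

def build_pssm(epitopes, alphabet="ACDEFGHIKLMNPQRSTVWY", pseudocount=1.0):
    """Per-position log-odds scores from an aligned set of equal-length
    epitopes, with pseudocounts and a uniform background."""
    length = len(epitopes[0])
    background = 1.0 / len(alphabet)
    pssm = []
    for pos in range(length):
        counts = Counter(p[pos] for p in epitopes)
        total = len(epitopes) + pseudocount * len(alphabet)
        pssm.append({aa: math.log(((counts[aa] + pseudocount) / total) / background)
                     for aa in alphabet})
    return pssm

def score(peptide, pssm):
    """Sum of per-position log-odds: higher means more epitope-like."""
    return sum(col[aa] for col, aa in zip(pssm, peptide))

epitopes = ["KLADQWERY", "KLIDQWTRY", "KMADQWERY"]   # toy 9-mers
pssm = build_pssm(epitopes)
```

A candidate resembling the profile then outscores an unrelated peptide, which is the basis for ranking billions of candidates cheaply.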
Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft.
Ning, Xin; Yuan, Jianping; Yue, Xiaokui
2016-01-01
A fractionated spacecraft is an innovative application of a distributed space system. To fully understand the impact of various uncertainties on its development, launch and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe the concept, review the evaluation and optimal-design methods developed in recent years, and propose the stochastic mission-cycle cost for comprehensive evaluation. We also establish models of the costs, such as module development, launch and deployment, and of the impacts of their respective uncertainties. Finally, we carry out Monte Carlo simulations of the complete mission-cycle costs of various configurations of the fractionated spacecraft under various uncertainties, and give and compare the probability density distributions and statistical characteristics of the stochastic mission-cycle cost using two strategies: timed module replacement and non-timed module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that it can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions. PMID:26964755
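The Monte Carlo evaluation described can be sketched in miniature: draw each cost component from an assumed distribution, sum over many trials, and compare configurations by the resulting cost statistics. The distributions, parameters, and failure probability below are invented placeholders, not the paper's cost models.

```python
import random

def mission_cycle_cost(rng):
    """One sampled mission-cycle cost (all figures illustrative)."""
    development = rng.gauss(100.0, 10.0)               # module development
    launch = rng.gauss(50.0, 8.0)                      # launch and deployment
    replacement = 20.0 if rng.random() < 0.3 else 0.0  # possible module failure
    return development + launch + replacement

def simulate(n_trials=10000, seed=42):
    """Monte Carlo estimate of the mean and spread of the cycle cost."""
    rng = random.Random(seed)
    costs = [mission_cycle_cost(rng) for _ in range(n_trials)]
    mean = sum(costs) / n_trials
    var = sum((c - mean) ** 2 for c in costs) / n_trials
    return mean, var ** 0.5

mean, std = simulate()   # mean near 156, std near 16 under these assumptions
```

Repeating `simulate` for each module-division strategy yields the comparable probability distributions the abstract refers to.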
Integrated Turbopump Thermo-Mechanical Design and Analysis Tools
NASA Astrophysics Data System (ADS)
Platt, Mike
2002-07-01
This viewgraph presentation provides information on the thermo-mechanical design and analysis tools used to control the steady and transient thermo-mechanical effects which drive life, reliability, and cost. The thermo-mechanical analysis tools provide upfront design capability by effectively leveraging existing component design tools to analyze and control: fits, clearance, preload; cooling requirements; stress levels, LCF (low cycle fatigue) limits, and HCF (high cycle fatigue) margin.
Design of a blade stiffened composite panel by a genetic algorithm
NASA Technical Reports Server (NTRS)
Nagendra, S.; Haftka, R. T.; Gurdal, Z.
1993-01-01
Genetic algorithms (GAs) readily handle discrete problems, and can be made to generate many optima, as is presently illustrated for the case of design for minimum-weight stiffened panels with buckling constraints. The GA discrete design procedure proved superior to extant alternatives for both stiffened panels with cutouts and without cutouts. High computational costs are, however, associated with this discrete design approach at the current level of its development.
Analysis and design of algorithm-based fault-tolerant systems
NASA Technical Reports Server (NTRS)
Nair, V. S. Sukumaran
1990-01-01
An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
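The checksum-encoded matrix multiply is the classic instance of the ABFT scheme described. This generic sketch shows how a single corrupted element of the product is both detected and located; it is an illustration of the idea, not the paper's matrix-based analysis model.

```python
import numpy as np

def abft_matmul(A, B):
    """Multiply with checksum encoding: append a column-checksum row to A
    and a row-checksum column to B, producing a fully checksummed product."""
    Ac = np.vstack([A, A.sum(axis=0)])                  # column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum column
    return Ac @ Br

def check(C, tol=1e-9):
    """Re-verify the checksums; returns (bad_rows, bad_cols), empty when
    the product is consistent. A single error shows up in one of each."""
    data = C[:-1, :-1]
    bad_rows = np.where(np.abs(data.sum(axis=1) - C[:-1, -1]) > tol)[0]
    bad_cols = np.where(np.abs(data.sum(axis=0) - C[-1, :-1]) > tol)[0]
    return bad_rows, bad_cols

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
C = abft_matmul(A, B)
C[1, 2] += 5.0                       # inject a transient fault
rows, cols = check(C)                # fault located at row 1, column 2
```

Because detection is concurrent with the computation itself, the scheme adds only one extra row and column of work rather than full duplication.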
Optimization of experimental design in fMRI: a general framework using a genetic algorithm.
Wager, Tor D; Nichols, Thomas E
2003-02-01
This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously. PMID:12595184
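A toy GA of the kind described can be sketched as follows. Here the fitness covers only the counterbalancing criterion (all first-order condition transitions occurring equally often); the efficiency terms, population size, and mutation scheme are illustrative assumptions, not the authors' parameterization.

```python
import random
from collections import Counter
from itertools import product

def fitness(seq, n_cond):
    """Counterbalancing score: negative squared deviation of each
    condition-to-condition transition count from its ideal value."""
    counts = Counter(zip(seq, seq[1:]))
    target = (len(seq) - 1) / n_cond ** 2
    return -sum((counts[p] - target) ** 2
                for p in product(range(n_cond), repeat=2))

def evolve(n_cond=3, length=60, pop_size=40, generations=200, seed=1):
    """Elitist GA over event sequences with one-point crossover and
    single-gene mutation."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_cond) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, n_cond), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]        # one-point crossover
            i = rng.randrange(length)        # single-gene mutation
            child[i] = rng.randrange(n_cond)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda s: fitness(s, n_cond))

best = evolve()   # fitness close to 0 means well counterbalanced
```

Swapping in contrast-efficiency or HRF-estimation terms, or a weighted sum of several, changes only the `fitness` function, which is exactly the flexibility the framework exploits.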
NASA Technical Reports Server (NTRS)
Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg
2003-01-01
The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
MSFC Three Point Docking Mechanism design review
NASA Technical Reports Server (NTRS)
Schaefer, Otto; Ambrosio, Anthony
1992-01-01
In the next few decades, we will be launching expensive satellites and space platforms that will require recovery, whether for economic reasons, initial malfunction, servicing and repairs, or post-lifetime debris removal. The planned availability of a Three Point Docking Mechanism (TPDM) is a positive step towards an operational satellite retrieval infrastructure. This study effort supports NASA/MSFC engineering work in developing an automated docking capability. The work was performed by the Grumman Space & Electronics Group as a concept evaluation/test for the Tumbling Satellite Retrieval Kit. Simulation of a TPDM capture was performed in Grumman's Large Amplitude Space Simulator (LASS) using mockups of both parts (the mechanism and payload). Similar TPDM simulation activities and more extensive hardware testing were performed at NASA/MSFC in the Flight Robotics Laboratory and Space Station/Space Operations Mechanism Test Bed (6-DOF Facility).
A new second-order integration algorithm for simulating mechanical dynamic systems
NASA Technical Reports Server (NTRS)
Howe, R. M.
1989-01-01
A new integration algorithm which has the simplicity of Euler integration but exhibits second-order accuracy is described. In fixed-step numerical integration of differential equations for mechanical dynamic systems the method represents displacement and acceleration variables at integer step times and velocity variables at half-integer step times. Asymptotic accuracy of the algorithm is twice that of trapezoidal integration and ten times that of second-order Adams-Bashforth integration. The algorithm is also compatible with real-time inputs when used for a real-time simulation. It can be used to produce simulation outputs at double the integration frame rate, i.e., at both half-integer and integer frame times, even though it requires only one evaluation of state-variable derivatives per integration step. The new algorithm is shown to be especially effective in the simulation of lightly-damped structural modes. Both time-domain and frequency-domain accuracy comparisons with traditional integration methods are presented. Stability of the new algorithm is also examined.
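The half-integer-step arrangement described (positions and accelerations at integer steps, velocities at half steps) resembles the standard leapfrog scheme, sketched below on an undamped oscillator x'' = -w^2 x. The paper's exact update formulas are not reproduced here, so treat this as an analogous illustration of the second-order behaviour.

```python
import math

def leapfrog(accel, x0, v0, dt, n_steps):
    """Fixed-step leapfrog: velocity is advanced to t = dt/2 once, then
    position (integer steps) and velocity (half steps) alternate.
    One acceleration evaluation per step, like Euler."""
    x = x0
    v = v0 + 0.5 * dt * accel(x0)    # velocity at the first half step
    xs = [x0]
    for _ in range(n_steps):
        x = x + dt * v               # position at the next integer step
        v = v + dt * accel(x)        # velocity at the next half step
        xs.append(x)
    return xs

w = 2.0
xs = leapfrog(lambda x: -w * w * x, x0=1.0, v0=0.0, dt=0.01, n_steps=100)
exact = math.cos(w * 1.0)            # analytic solution x(t) = cos(w t) at t = 1
err = abs(xs[-1] - exact)            # small, and O(dt^2)
```

Halving the step size reduces the error by roughly a factor of four, confirming second-order accuracy at the cost of a single derivative evaluation per step.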
Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley
2009-01-01
Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.
Wang, Kung-Jeng; Adrian, Angelia Melani; Chen, Kun-Huang; Wang, Kung-Min
2015-04-01
Recently, the use of artificial intelligence-based data mining techniques for massive medical data classification and diagnosis has gained popularity, and the effectiveness and efficiency gains available through feature selection are worth further investigation. In this paper, we present a novel method for feature selection that uses an opposite sign test (OST) as a local search for the electromagnetism-like mechanism (EM) algorithm, denoted the improved electromagnetism-like mechanism (IEM) algorithm. A nearest neighbor algorithm serves as the classifier for the wrapper method. The proposed IEM algorithm is compared with nine popular feature selection and classification methods. Forty-six datasets from the UCI repository and eight gene expression microarray datasets are collected for comprehensive evaluation. Non-parametric statistical tests are conducted to justify the performance of the methods in terms of classification accuracy and Kappa index. The results confirm that the proposed IEM method is superior to common state-of-the-art methods. Furthermore, we apply IEM to predict the occurrence of Type 2 diabetes mellitus (DM) after gestational DM. Our research helps identify the risk factors for this disease; accordingly, accurate diagnosis and prognosis can be achieved to reduce the morbidity and mortality caused by DM. PMID:25677947
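The wrapper idea alone can be sketched as follows: candidate feature subsets are scored by leave-one-out accuracy of a 1-nearest-neighbour classifier. The EM/OST search itself is deliberately replaced here by plain random search, so this is an analogue of the evaluation loop, not the IEM algorithm.

```python
import random

def knn_loo_accuracy(X, y, subset):
    """Leave-one-out accuracy of 1-NN using only the given feature subset."""
    correct = 0
    for i in range(len(X)):
        best_d, best_y = float("inf"), None
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in subset)
            if d < best_d:
                best_d, best_y = d, y[j]
        correct += best_y == y[i]
    return correct / len(X)

def random_search(X, y, n_features, iters=100, seed=0):
    """Stand-in for the EM/OST search: score random subsets, keep the best."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(iters):
        size = rng.randrange(1, n_features + 1)
        subset = tuple(sorted(rng.sample(range(n_features), size)))
        acc = knn_loo_accuracy(X, y, subset)
        if acc > best_acc:
            best, best_acc = subset, acc
    return best, best_acc

# Toy data: feature 0 separates the classes, feature 1 is loud noise.
rng = random.Random(1)
X = ([(0.05 * i, rng.uniform(0, 10)) for i in range(10)]
     + [(1.0 + 0.05 * i, rng.uniform(0, 10)) for i in range(10)])
y = [0] * 10 + [1] * 10
best, best_acc = random_search(X, y, n_features=2)
```

The search settles on a subset containing the informative feature, which is the behaviour any wrapper method, IEM included, is built around.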
The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.
Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente
2015-01-01
Recently, the cross-layer design of wireless sensor network communication protocols has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion. To obtain a balanced solution, a parameter whose dispersion is large receives a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle the multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt to dynamic changes in the network conditions and topology effectively. PMID:26266412
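The dispersion-to-weight rule can be sketched in its crisp form: parameters whose values vary widely across candidate relays (large dispersion) receive small weights, and vice versa, with the weights then normalized. The fuzzy inference stage is omitted, and the parameter names and figures are invented placeholders.

```python
def dispersion_weights(param_values):
    """param_values maps a cross-layer parameter name to its values at
    each candidate next hop; returns normalized weights inversely
    related to each parameter's dispersion (standard deviation)."""
    def dispersion(vals):
        mean = sum(vals) / len(vals)
        return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

    inv = {name: 1.0 / (1.0 + dispersion(vals))
           for name, vals in param_values.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}

weights = dispersion_weights({
    "residual_energy": [0.9, 0.8, 0.85],   # similar values -> large weight
    "hop_distance":    [1.0, 5.0, 9.0],    # spread-out values -> small weight
})
```

A weighted sum of the (normalized) parameters under these weights then scores each candidate relay, keeping no single volatile metric dominant.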
Rolamite - A new mechanical design concept
NASA Technical Reports Server (NTRS)
Wilkes, D. F.
1967-01-01
Rolamite, a mechanical suspension system, provides substantial reductions in friction in the realm of extremely low bearing pressures. In addition, rolamite devices are easily microminiaturized, are extremely tolerant of production variations, and are inherently capable of providing virtually all the functions needed to construct most electromechanical devices.
Validation of space/ground antenna control algorithms using a computer-aided design tool
NASA Technical Reports Server (NTRS)
Gantenbein, Rex E.
1995-01-01
The validation of the algorithms for controlling the space-to-ground antenna subsystem for Space Station Alpha is an important step in assuring reliable communications. These algorithms have been developed and tested using a simulation environment based on a computer-aided design tool that can provide a time-based execution framework with variable environmental parameters. Our work this summer has involved the exploration of this environment and the documentation of the procedures used to validate these algorithms. We have installed a variety of tools in a laboratory of the Tracking and Communications division for reproducing the simulation experiments carried out on these algorithms to verify that they do meet their requirements for controlling the antenna systems. In this report, we describe the processes used in these simulations and our work in validating the tests used.
Infrastructure Retrofit Design via Composite Mechanics
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Gotsis, Pascal K.
1998-01-01
Select applications are described to illustrate the concept of retrofitting reinforced concrete infrastructure with fiber-reinforced plastic laminates. The concept is first illustrated using an axially loaded reinforced concrete column. A reinforced concrete arch and a dome are then used to illustrate the versatility of the concept. Advanced methods such as finite element structural analysis and progressive structural fracture are then used to evaluate the adequacy of the retrofitting laminate. Results obtained show that retrofits can be designed to double and even triple the as-designed load of the selected reinforced concrete infrastructures.
The design and results of an algorithm for intelligent ground vehicles
NASA Astrophysics Data System (ADS)
Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.
2010-01-01
This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team is comprised of undergraduate computer science, engineering technology, marketing students, and one robotics faculty advisor. The team has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.
An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures
Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf
2016-01-01
Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements, and an important RNA element named the pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences designed by Enzymer with the results obtained from the state-of-the-art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
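The "ensemble defect" objective being minimized can be sketched under its standard definition: the expected number of nucleotides paired differently from the target structure, given equilibrium base-pair probabilities (which NUPACK would normally supply; the probabilities below are toy values).

```python
def ensemble_defect(n, target_pairs, pair_prob, unpaired_prob):
    """n: sequence length; target_pairs: {i: j} in both directions for
    target base pairs; pair_prob[(i, j)]: equilibrium probability that
    i pairs with j; unpaired_prob[i]: probability that i is unpaired.
    Returns n minus the expected number of correctly-structured bases."""
    expected_correct = 0.0
    for i in range(n):
        if i in target_pairs:
            expected_correct += pair_prob.get((i, target_pairs[i]), 0.0)
        else:
            expected_correct += unpaired_prob.get(i, 0.0)
    return n - expected_correct

# 4-nt toy target: nucleotide 0 pairs with 3, nucleotides 1-2 unpaired.
target = {0: 3, 3: 0}
pair_p = {(0, 3): 0.9, (3, 0): 0.9}
unpaired_p = {1: 0.8, 2: 0.7}
defect = ensemble_defect(4, target, pair_p, unpaired_p)   # about 0.7
```

A defect-weighted sampler like Enzymer preferentially mutates the positions contributing most to this sum; the definition itself applies unchanged to pseudoknotted targets once the folding engine can supply the probabilities.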
NASA Astrophysics Data System (ADS)
Foudray, Angela Marie Klohs
Detecting, quantifying and visualizing biochemical mechanism in a living system without perturbing function is the goal of the instrument and algorithms designed in this thesis. Biochemical mechanisms of cells have long been known to depend on the signals cells receive from their environment. Studying biological processes of cells in vitro can vastly distort their function, since the cells are removed from their natural chemical signaling environment. Mice have become the biological system of choice for various areas of biomedical research due to their genetic and physiological similarities with humans, the relatively low cost of their care, and their quick breeding cycle. Drug development and efficacy assessment, along with disease detection, management, and mechanism research, have all benefited from the use of small animal models of human disease. A high-resolution, high-sensitivity, three-dimensional (3D) positioning positron emission tomography (PET) detector system was designed through device characterization and Monte Carlo simulation. Position-sensitive avalanche photodiodes (PSAPDs) were characterized in various packaging configurations, coupled to various configurations of lutetium oxyorthosilicate (LSO) scintillation crystals. Forty final-design devices with novel packaging were constructed and characterized, each providing characteristics superior to commercially available scintillation detectors used in small-animal imaging systems: ~1 mm crystal identification, 14-15% energy resolution at 511 keV, and coincidence time resolutions averaging 1.9 to 5.6 ns. A closed-cornered, box-shaped detector configuration was found to provide optimal photon sensitivity (~10.5% in the central plane) using dual LSO-PSAPD scintillation detector modules and Monte Carlo simulation. Standard figures of merit were used to determine optimal system acquisition parameters. A realistic model for constituent devices was developed for understanding the signals reported by the
Rationally designing the mechanical properties of protein hydrogels
NASA Astrophysics Data System (ADS)
Cao, Yi
Naturally occurring biomaterials possess diverse mechanical properties, which are critical to their unique biological functions. However, it remains challenging to rationally control the mechanical properties of synthetic biomaterials. Here we provide a bottom-up approach to rationally design the mechanical properties of protein-based hydrogels. We first use atomic force microscope (AFM)-based single-molecule force spectroscopy to characterize the mechanical stability of individual protein building blocks. We then rationally design the mechanical properties of hydrogels by selecting different combinations of protein building blocks of known mechanical properties. As a proof-of-principle, we demonstrate the engineering of hydrogels of distinct extensibility and toughness. This simple combinatorial approach allows direct translation of the mechanical properties of proteins from the single-molecule level to the macroscopic level and represents an important step towards rationally designing the mechanical properties of biomaterials.
A Proposal of CAD Mechanism for Design Knowledge Management
NASA Astrophysics Data System (ADS)
Nomaguchi, Yutaka; Yoshioka, Masaharu; Tomiyama, Tetsuo
In this paper, we propose a fundamental idea of a new CAD mechanism to facilitate design knowledge management. This mechanism encourages a designer to externalise his/her knowledge during a design process and facilitates sharing and reuse of such externalised design knowledge in later stages. We also describe the implementation of this idea called DDMS (Design Documentation Management System). DDMS works as a front end to KIEF (Knowledge Intensive Engineering Framework), which we have been developing. We also illustrate an example of machining tool design to demonstrate the features of DDMS.
The Mechanization of Design and Manufacturing.
ERIC Educational Resources Information Center
Gunn, Thomas G.
1982-01-01
Describes changes in the design of products and in planning, managing, and coordinating their manufacture. Focuses on discrete-products manufacturing industries, encompassing the fabrication and assembly of automobiles, aircraft, computers and microelectronic components of computers, furniture, appliances, foods, clothing, building materials, and…
Designing berthing mechanisms for international compatibility
NASA Technical Reports Server (NTRS)
Winch, John; Gonzalez-Vallejo, Juan J.
1991-01-01
The paper examines the technological issues regarding common berthing interfaces for the Space Station Freedom and pressurized modules from U.S., European, and Japanese space programs. The development of the common berthing mechanism (CBM) is based on common requirements concerning specifications, launch environments, and the unique requirements of ESA's Man-Tended Free Flyer. The berthing mechanism is composed of an active and a passive half, a remote manipulator system, 4 capture-latch assemblies, 16 structural bolts, and a pressure gauge to verify equalization. Extensive graphic and verbal descriptions of each element are presented emphasizing the capture-latch motion and powered-bolt operation. The support systems to complete the interface are listed, and the manufacturing requirements for consistent fabrication are discussed to ensure effective international development.
Dietrich, Arne; Haider, Hilde
2015-08-01
Creative thinking is arguably the pinnacle of cerebral functionality. Like no other mental faculty, it has been omnipotent in transforming human civilizations. Probing the neural basis of this most extraordinary capacity, however, has been doggedly frustrated. Despite a flurry of activity in cognitive neuroscience, recent reviews have shown that there is no coherent picture emerging from the neuroimaging work. Based on this, we take a different route and apply two well established paradigms to the problem. First is the evolutionary framework that, despite being part and parcel of creativity research, has not informed experimental work in cognitive neuroscience. Second is the emerging prediction framework that recognizes predictive representations as an integrating principle of all cognition. We show here how the prediction imperative revealingly synthesizes a host of new insights into the way brains process variation-selection thought trials and present a new neural mechanism for the partial sightedness in human creativity. Our ability to run offline simulations of expected future environments and action outcomes can account for some of the characteristic properties of cultural evolutionary algorithms running in brains, such as degrees of sightedness, the formation of scaffolds to jump over unviable intermediate forms, or how fitness criteria are set for a selection process that is necessarily hypothetical. Prospective processing in the brain also sheds light on how human creating and designing - as opposed to biological creativity - can be accompanied by intentions and foresight. This paper raises questions about the nature of creative thought that, as far as we know, have never been asked before. PMID:25304474
Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard
2005-01-01
Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.
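The two-level search described above, with Q-law supplying the inner-loop steering and an evolutionary algorithm tuning Q-law's coefficients toward the Pareto front, can be sketched with a toy surrogate in place of the trajectory propagation. Here `evaluate` is a stand-in objective (the real method propagates a low-thrust trajectory under Q-law), and the genetic operators are generic, not the paper's:

```python
import random

def pareto_front(points):
    """Non-dominated subset for minimisation of (propellant, flight_time)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

def evaluate(weights):
    # Placeholder for a Q-law trajectory propagation: a toy trade-off in
    # which aggressive thrusting (large w) shortens flight time but costs
    # more propellant.
    w = sum(weights) / len(weights)
    propellant = 1.0 + w * w               # illustrative units
    flight_time = 1.0 + 1.0 / (0.1 + w)    # illustrative units
    return propellant, flight_time

def genetic_search(pop_size=30, generations=30, n_weights=5, seed=7):
    """Evolve Q-law weight vectors; archive all evaluated objective pairs
    and return the non-dominated (Pareto-optimal) set."""
    rng = random.Random(seed)
    clamp = lambda v: min(2.0, max(0.0, v))
    pop = [[rng.uniform(0.0, 2.0) for _ in range(n_weights)]
           for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        scored = [(evaluate(ind), ind) for ind in pop]
        archive.extend(obj for obj, _ in scored)
        # Tournament selection on a scalarised objective, then blend
        # crossover and Gaussian mutation.
        parents = [min(rng.sample(scored, 3), key=lambda s: s[0][0] + s[0][1])[1]
                   for _ in range(pop_size)]
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            pop.append([clamp((x + y) / 2 + rng.gauss(0.0, 0.05))
                        for x, y in zip(a, b)])
            pop.append([clamp(x + rng.gauss(0.0, 0.1)) for x in a])
    return pareto_front(archive)
```

The paper pairs this genetic algorithm with a simulated-annealing-related method; the archive-then-filter step is one simple way to approximate the Pareto front without a dedicated multi-objective ranking scheme.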
General parameter relations for the Shinnar-Le Roux pulse design algorithm.
Lee, Kuan J
2007-06-01
The magnetization ripple amplitudes from a pulse designed by the Shinnar-Le Roux algorithm are a non-linear function of the Shinnar-Le Roux A and B polynomial ripples. In this paper, the method of Pauly et al. [J. Pauly, P. Le Roux, D. Nishimura, A. Macovski, Parameter relations for the Shinnar-Le Roux selective excitation pulse design algorithm, IEEE Transactions on Medical Imaging 10 (1991) 56-65.] has been extended to derive more general parameter relations. These relations can be used for cases outside the five classes considered by Pauly et al., in particular excitation pulses for flip angles that are not small or 90 degrees. Use of the new relations, together with an iterative procedure to obtain polynomials with the specified ripples from the Parks-McClellan algorithm, is shown to give simulated slice profiles that have the desired ripple amplitudes. PMID:17408999
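The Shinnar-Le Roux algorithm rests on a mapping between an RF pulse and a pair of Cayley-Klein polynomials A(z) and B(z); the parameter relations translate the desired magnetization ripples into ripple specifications on those polynomials. A minimal sketch of the forward SLR transform under the standard hard-pulse approximation (the recursion is textbook SLR; the function name and interface are assumptions):

```python
import cmath
import math

def slr_forward(rf):
    """Forward Shinnar-Le Roux transform under the hard-pulse approximation.

    rf: sequence of (possibly complex) flip-angle increments in radians,
        one per RF sample; magnitude is the flip, argument is the phase.
    Returns the coefficient lists of A(z) and B(z), where each sample
    applies the rotation
        A_j = C_j * A_{j-1} - conj(S_j) * z^{-1} B_{j-1}
        B_j = S_j * A_{j-1} + C_j * z^{-1} B_{j-1}
    with C_j = cos(phi_j/2), S_j = i e^{i theta_j} sin(phi_j/2).
    """
    a = [1.0 + 0.0j]
    b = [0.0 + 0.0j]
    for pulse in rf:
        phi = abs(pulse)
        theta = cmath.phase(complex(pulse))
        c = math.cos(phi / 2.0)
        s = 1j * cmath.exp(1j * theta) * math.sin(phi / 2.0)
        zb = [0.0 + 0.0j] + b  # z^{-1} B(z): delay by one sample
        za = a + [0.0 + 0.0j]  # pad A(z) to the same length
        a = [c * x - s.conjugate() * y for x, y in zip(za, zb)]
        b = [s * x + c * y for x, y in zip(za, zb)]
    return a, b
```

On resonance (z = 1), two hard pulses of pi/4 each compose to a pi/2 flip, so |B(1)| = sin(pi/4); the inverse SLR transform, which the design algorithm actually uses, runs this recursion backwards from polynomials produced by the Parks-McClellan algorithm.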
A Dynamic Programming Algorithm for Optimal Design of Tidal Power Plants
NASA Astrophysics Data System (ADS)
Nag, B.
2013-03-01
A dynamic programming algorithm is proposed and demonstrated on a test case to determine the optimum operating schedule of a barrage tidal power plant to maximize the energy generation over a tidal cycle. Since consecutive sets of high and low tides can be predicted accurately for any tidal power plant site, this algorithm can be used to calculate the annual energy generation for different technical configurations of the plant. Thus an optimal choice of a tidal power plant design can be made from amongst different design configurations yielding the least cost of energy generation. Since this algorithm determines the optimal time of operation of sluice gate opening and turbine gates opening to maximize energy generation over a tidal cycle, it can also be used to obtain the annual schedule of operation of a tidal power plant and the minute-to-minute energy generation, for dissemination amongst power distribution utilities.
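The scheduling idea can be sketched as a small dynamic programme over (time step, discretized basin level), choosing at each step among holding the gates, sluicing, and generating on the head difference. The state space, action set, and energy model below are illustrative assumptions, not the paper's formulation:

```python
def plan_barrage(tide, levels, eta=0.8):
    """Backward dynamic programme maximising energy over a tidal cycle.

    tide:   sea level at each time step (m)
    levels: discrete basin levels the plant can hold (m), sorted ascending
    eta:    illustrative conversion factor; energy per generating step is
            taken proportional to the head (basin level minus sea level)
    Returns (best, act): best[i] is the maximum energy starting at t = 0
    with the basin at levels[i]; act[t][i] is the optimal action there.
    """
    T, n = len(tide), len(levels)
    # best_tab[t][i]: max energy from step t onward with basin at levels[i]
    best_tab = [[0.0] * n for _ in range(T + 1)]
    act = [[None] * n for _ in range(T)]
    for t in range(T - 1, -1, -1):
        # Sluicing equalises the basin to (the nearest representable
        # level to) the current sea level, generating no energy.
        j = min(range(n), key=lambda k: abs(levels[k] - tide[t]))
        for i, lvl in enumerate(levels):
            e, a = best_tab[t + 1][i], "hold"          # Option 1: hold
            if best_tab[t + 1][j] > e:                 # Option 2: sluice
                e, a = best_tab[t + 1][j], "sluice"
            # Option 3: generate on the ebb, releasing one level of water
            # through the turbines while the basin is above the sea.
            if i > 0 and lvl > tide[t]:
                gain = eta * (lvl - tide[t]) + best_tab[t + 1][i - 1]
                if gain > e:
                    e, a = gain, "generate"
            best_tab[t][i] = e
            act[t][i] = a
    return best_tab[0], act
```

Because the tide sequence is known in advance, one backward sweep yields both the maximum energy and the minute-to-minute gate schedule; different plant configurations are compared by rerunning the programme with their own `levels` and efficiency parameters.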