Science.gov

Sample records for algorithms specifically designed

  1. Sequence-Specific Copolymer Compatibilizers designed via a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Meenakshisundaram, Venkatesh; Patra, Tarak; Hung, Jui-Hsiang; Simmons, David

    For several decades, block copolymers have been employed as surfactants to reduce interfacial energy for applications from emulsification to surface adhesion. While the simplest approach employs symmetric diblocks, studies have examined asymmetric diblocks, multiblock copolymers, gradient copolymers, and copolymer-grafted nanoparticles. However, there exists no established approach to determining the optimal copolymer compatibilizer sequence for a given application. Here we employ molecular dynamics simulations within a genetic algorithm to identify copolymer surfactant sequences yielding maximum reductions in the interfacial energy of model immiscible polymers. The optimal copolymer sequence depends significantly on surfactant concentration. Most surprisingly, at high surface concentrations, where the surfactant achieves the greatest interfacial energy reduction, specific non-periodic sequences are found to significantly outperform any regularly blocky sequence. This emergence of polymer sequence-specificity within a non-sequenced environment adds to a recent body of work suggesting that specific sequence may have the potential to play a greater role in polymer properties than previously understood. We acknowledge the W. M. Keck Foundation for financial support of this research.

  2. Design specification for the whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.

    1974-01-01

    The necessary requirements and guidelines for the construction of a computer program of the whole-body algorithm are presented. The minimum subsystem models required to effectively simulate the total body response to stresses of interest are (1) cardiovascular (exercise/LBNP/tilt); (2) respiratory (Grodin's model); (3) thermoregulatory (Stolwijk's model); and (4) long-term circulatory fluid and electrolyte (Guyton's model). The whole-body algorithm must be capable of simulating the response to stresses from CO2 inhalation, hypoxia, thermal environment, exercise (sitting and supine), LBNP, and tilt (changing body angles in gravity).

  3. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences, and data volume. All required input/output data files are described, and the computer resources required for the entire altimeter processing system are estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  4. comets (Constrained Optimization of Multistate Energies by Tree Search): A Provable and Efficient Protein Design Algorithm to Optimize Binding Affinity and Specificity with Respect to Sequence.

    PubMed

    Hallen, Mark A; Donald, Bruce R

    2016-05-01

    Practical protein design problems require designing sequences with a combination of affinity, stability, and specificity requirements. Multistate protein design algorithms model multiple structural or binding "states" of a protein to address these requirements. comets provides a new level of versatile, efficient, and provable multistate design. It provably returns the minimum with respect to sequence of any desired linear combination of the energies of multiple protein states, subject to constraints on other linear combinations. Thus, it can target nearly any combination of affinity (to one or multiple ligands), specificity, and stability (for multiple states if needed). Empirical calculations on 52 protein design problems showed comets is far more efficient than the previous state of the art for provable multistate design (exhaustive search over sequences). comets can handle a very wide range of protein flexibility and can enumerate a gap-free list of the best constraint-satisfying sequences in order of objective function value. PMID:26761641

  5. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  6. Fast ordering algorithm for exact histogram specification.

    PubMed

    Nikolova, Mila; Steidl, Gabriele

    2014-12-01

    This paper provides a fast algorithm to order, in a meaningful and strict way, the integer gray values in digital (quantized) images. It can be used in any application based on exact histogram specification. Our algorithm relies on an ordering procedure based on a specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering, but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than the alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms its main competitors by far. PMID:25347881
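    The core of exact histogram specification is a strict total ordering of pixels, after which the target histogram can be imposed exactly by rank. The sketch below is a much-simplified stand-in for the paper's variational ordering: ties between equal gray values are broken by a few iterations of a local-mean filter (the step size 0.1 and iteration count are arbitrary assumptions, not the paper's model).

```python
import numpy as np

def strict_order(img, iters=3):
    # Approximate a strict pixel ordering: refine integer gray values by
    # mixing in the local mean for a few iterations, so ties are broken by
    # neighborhood context. The refined values stay close to the original
    # grays, so they are used only as a tie-breaker below.
    f = img.astype(float)
    aux = f.copy()
    for _ in range(iters):
        padded = np.pad(aux, 1, mode='edge')
        local_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        aux = f + 0.1 * (local_mean - aux)
    # Sort primarily by original gray value, secondarily by the refined value.
    return np.lexsort((aux.ravel(), f.ravel()))

def exact_hist_spec(img, target_hist):
    # Assign gray levels by rank so the output histogram equals target_hist
    # exactly (target_hist counts must sum to the number of pixels).
    order = strict_order(img)
    out = np.empty(img.size, dtype=np.uint8)
    levels = np.repeat(np.arange(len(target_hist)), target_hist)
    out[order] = levels
    return out.reshape(img.shape)
```

    Because the ordering is strict, the output histogram matches the target exactly, whatever the input distribution.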

  7. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Linden, Derek; Hornby, Greg; Lohn, Jason; Globus, Al; Krishunkumor, K.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time and labor intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to

  8. GPU-specific reformulations of image compression algorithms

    NASA Astrophysics Data System (ADS)

    Matela, Jiří; Holub, Petr; Jirman, Martin; Šrom, Martin

    2012-10-01

    Image compression has a number of applications in various fields, where processing throughput and/or latency is a crucial attribute and the main limitation of state-of-the-art implementations of compression algorithms. At the same time contemporary GPU platforms provide tremendous processing power but they call for specific algorithm design. We discuss key components of successful design of compression algorithms for GPUs and demonstrate this on JPEG and JPEG2000 implementations, each of which contains several types of algorithms requiring different approaches to efficient parallelization for GPUs. Performance evaluation of the optimized JPEG and JPEG2000 chain is used to demonstrate the importance of various aspects of GPU programming, especially with respect to real-time applications.

  9. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  10. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to reliably achieve the desired, preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  11. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to reliably achieve the desired, preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  12. Fashion sketch design by interactive genetic algorithms

    NASA Astrophysics Data System (ADS)

    Mok, P. Y.; Wang, X. X.; Xu, J.; Kwok, Y. L.

    2012-11-01

    Computer-aided design is vitally important for modern industry, particularly the creative industries. The fashion industry faces intense pressure to shorten the product development process. In this paper, a methodology is proposed for sketch design based on interactive genetic algorithms. The sketch design system consists of a sketch design model, a database and a multi-stage sketch design engine. First, a sketch design model is developed based on the knowledge of fashion design to describe fashion product characteristics by using parameters. Second, a database is built based on the proposed sketch design model to define general style elements. Third, a multi-stage sketch design engine is used to construct the design. Moreover, an interactive genetic algorithm (IGA) is used to accelerate the sketch design process. The experimental results have demonstrated that the proposed method is effective in helping laypersons achieve satisfactory fashion design sketches.

  13. URPD: a specific product primer design tool

    PubMed Central

    2012-01-01

    Background Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present software that is aimed not at analyzing large sequences, but at providing a straightforward way of visualizing the primer design process for infrequent users. Findings URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparing practical gel electrophoresis to virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. Conclusions URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high-throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experimental PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/. PMID:22713312
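    Tools like URPD score candidate primers against standard physical constraints before specificity checks. The sketch below is not URPD's MA/GA pipeline; it only illustrates the kind of per-primer screening such tools apply, using the classic Wallace rule for melting temperature. The threshold ranges are illustrative assumptions.

```python
def wallace_tm(primer):
    # Wallace rule: Tm = 2(A+T) + 4(G+C), a standard quick estimate
    # for short oligonucleotides.
    p = primer.upper()
    return 2 * (p.count('A') + p.count('T')) + 4 * (p.count('G') + p.count('C'))

def gc_content(primer):
    # Fraction of G/C bases in the primer.
    p = primer.upper()
    return (p.count('G') + p.count('C')) / len(p)

def passes_basic_checks(primer, tm_range=(50, 65), gc_range=(0.4, 0.6)):
    # Screen one primer against common design constraints
    # (ranges here are illustrative, not URPD's defaults).
    return (tm_range[0] <= wallace_tm(primer) <= tm_range[1]
            and gc_range[0] <= gc_content(primer) <= gc_range[1])
```

    A real tool would additionally check self-complementarity, 3' stability, and product specificity against the reference genome.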

  14. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems that ensure optimal performance and increase reliability, advancing the goal of clean, efficient, reliable and affordable next-generation energy systems.

  15. Optimization Algorithm for Designing Diffractive Optical Elements

    NASA Astrophysics Data System (ADS)

    Agudelo, Viviana A.; Orozco, Ricardo Amézquita

    2008-04-01

    Diffractive Optical Elements (DOEs) are commonly used in many applications such as laser beam shaping, recording of micro reliefs, wave front analysis, metrology and many others where they can replace single or multiple conventional optical elements (diffractive or refractive). One of the most versatile ways to produce them is to use computer-assisted techniques for their design and optimization, together with optical or electron beam micro-lithography techniques for the final fabrication. The fundamental figures of merit involved in the optimization of such devices are the diffraction efficiency and the signal-to-noise ratio evaluated in the reconstructed wave front at the image plane. A design and optimization algorithm based on the error-reduction method (Gerchberg-Saxton) is proposed to obtain binary discrete phase-only Fresnel DOEs that will be used to produce specific intensity patterns. Some experimental results were obtained using a spatial light modulator acting as a binary programmable diffractive phase element. Although the DOEs optimized here are discrete in phase, they present an acceptable signal-to-noise ratio and diffraction efficiency.
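    The Gerchberg-Saxton error-reduction loop alternates between the two planes, enforcing the known constraint in each: unit amplitude with quantized phase at the element, the target amplitude at the image. A minimal sketch of that iteration for a binary phase element, assuming a single-FFT (far-field) propagation model and a simple threshold quantization to {0, π}:

```python
import numpy as np

def design_binary_phase_doe(target_intensity, iters=50, seed=0):
    # Error-reduction (Gerchberg-Saxton) loop for a binary phase-only DOE:
    # alternate between the element plane (unit amplitude, phase in {0, pi})
    # and the image plane (amplitude forced to the target pattern).
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    phase = rng.choice([0.0, np.pi], size=target_intensity.shape)
    for _ in range(iters):
        field = np.exp(1j * phase)                          # element plane: phase only
        image = np.fft.fft2(field)                          # propagate to image plane
        image = target_amp * np.exp(1j * np.angle(image))   # impose target amplitude
        back = np.fft.ifft2(image)                          # propagate back
        # quantize the recovered phase to the binary set {0, pi}
        phase = np.where(np.cos(np.angle(back)) >= 0, 0.0, np.pi)
    return phase
```

    In practice the binary quantization limits the attainable efficiency, which is why the paper evaluates both efficiency and signal-to-noise ratio of the reconstruction.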

  16. On the design, analysis, and implementation of efficient parallel algorithms

    SciTech Connect

    Sohn, S.M.

    1989-01-01

    There is considerable interest in developing algorithms for a variety of parallel computer architectures. This is not a trivial problem, although for certain models great progress has been made. Recently, general-purpose parallel machines have become available commercially. These machines possess widely varying interconnection topologies and data/instruction access schemes. It is important, therefore, to develop methodologies and design paradigms not only for synthesizing parallel algorithms from initial problem specifications, but also for mapping algorithms between different architectures. This work has considered both of these problems. A systolic array consists of a large collection of simple processors that are interconnected in a uniform pattern. The author has studied in detail the problem of mapping systolic algorithms onto more general-purpose parallel architectures such as the hypercube. The hypercube architecture is notable for its symmetry and high connectivity, characteristics which are conducive to the efficient embedding of parallel algorithms. Although the parallel-to-parallel mapping techniques have yielded efficient target algorithms, it is not surprising that an algorithm designed directly for a particular parallel model would achieve superior performance. In this context, the author has developed hypercube algorithms for some important problems in speech and signal processing, text processing, language processing and artificial intelligence. These algorithms were implemented on a 64-node NCUBE/7 hypercube machine in order to evaluate their performance.

  17. General lossless planar coupler design algorithms.

    PubMed

    Vance, Rod

    2015-08-01

    This paper reviews and extends two classes of algorithms for the design of planar couplers with any unitary transfer matrix as the design goal. Such couplers find use in optical sensing for fading-free interferometry, coherent optical network demodulation, and also for quantum state preparation in quantum optical experiments and technology. The two classes are (1) "atomic coupler algorithms" decomposing a unitary transfer matrix into a planar network of 2×2 couplers, and (2) "Lie theoretic algorithms" concatenating unit cell devices with variable phase delay sets that form canonical coordinates for neighborhoods in the Lie group U(N), so that the concatenations realize any transfer matrix in U(N). Beyond the review, this paper gives (1) a Lie theoretic existence proof showing that both classes of algorithms work and (2) direct proofs of the efficacy of the "atomic coupler" algorithms. The Lie theoretic proof strengthens former results. 5×5 couplers designed by both methods are compared by Monte Carlo analysis, which suggests that atomic rather than Lie theoretic methods yield designs more resilient to manufacturing imperfections. PMID:26367295

  18. Reflight certification software design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The PDSS/IMC Software Design Specification for the Payload Development Support System (PDSS)/Image Motion Compensator (IMC) is contained. The PDSS/IMC is to be used for checkout and verification of the IMC flight hardware and software by NASA/MSFC.

  19. Fast Fourier Transform algorithm design and tradeoffs

    NASA Technical Reports Server (NTRS)

    Kamin, Ray A., III; Adams, George B., III

    1988-01-01

    The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. The Fast Fourier Transform programs are compared to the best current Cray-2 FFT program.
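    The decimation-in-time structure that such parallel designs exploit splits a length-n transform into even- and odd-indexed halves combined by twiddle factors, reducing O(n²) work to O(n log n). A minimal serial radix-2 Cooley-Tukey sketch (not the CM-2 code, which distributes these butterflies across SIMD processors):

```python
import numpy as np

def fft_radix2(x):
    # Recursive radix-2 Cooley-Tukey FFT; input length must be a power of two.
    n = len(x)
    if n == 1:
        return x[:]
    even = fft_radix2(x[0::2])          # transform of even-indexed samples
    odd = fft_radix2(x[1::2])           # transform of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    # Butterfly: X[k] = E[k] + w^k O[k], X[k + n/2] = E[k] - w^k O[k]
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])
```

    On a SIMD machine, each butterfly stage applies the same operation to all index pairs at once, which is what makes the algorithm map naturally onto architectures like the CM-2.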

  20. Design of PID-type controllers using multiobjective genetic algorithms.

    PubMed

    Herreros, Alberto; Baeyens, Enrique; Perán, José R

    2002-10-01

    The design of a PID controller is a multiobjective problem. A plant and a set of specifications to be satisfied are given. The designer has to adjust the parameters of the PID controller such that the feedback interconnection of the plant and the controller satisfies the specifications. These specifications are usually competitive and any acceptable solution requires a tradeoff among them. An approach for adjusting the parameters of a PID controller based on multiobjective optimization and genetic algorithms is presented in this paper. The MRCD (multiobjective robust control design) genetic algorithm has been employed. The approach can be easily generalized to design multivariable coupled and decentralized PID loops and has been successfully validated for a large number of experimental cases. PMID:12398277
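    The tradeoff the abstract describes can be made concrete by scalarizing competing objectives (tracking error vs. control effort) and searching PID gains evolutionarily. The sketch below is a much-simplified mutation-based evolutionary search with a weighted-sum cost, not the MRCD algorithm; the first-order plant dy/dt = -y + u, the weights, and all tuning constants are assumptions for illustration.

```python
import random

def step_response_cost(kp, ki, kd, dt=0.02, steps=500):
    # Simulate unity feedback of a PID controller with the hypothetical
    # first-order plant dy/dt = -y + u under a unit step reference, and
    # return a weighted sum of two competing objectives:
    # integral absolute error (tracking) and control effort.
    y, integ, prev_e = 0.0, 0.0, 1.0
    iae, effort = 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        prev_e = e
        y += dt * (-y + u)              # explicit Euler step of the plant
        if abs(y) > 1e6:
            return 1e9                  # unstable gains: large penalty
        iae += abs(e) * dt
        effort += abs(u) * dt
    return iae + 0.01 * effort

def ga_tune(pop=30, gens=30, seed=2):
    # Evolutionary search over (kp, ki, kd): keep the best third each
    # generation and refill with Gaussian mutations of the elite.
    rng = random.Random(seed)
    P = [[rng.uniform(0, 10) for _ in range(3)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda g: step_response_cost(*g))
        elite = P[:pop // 3]
        P = elite + [
            [max(0.0, a + rng.gauss(0, 0.3)) for a in rng.choice(elite)]
            for _ in range(pop - len(elite))
        ]
    return min(P, key=lambda g: step_response_cost(*g))
```

    Adjusting the 0.01 weight trades tracking accuracy against actuator effort, which is the kind of competition among specifications the multiobjective formulation addresses properly via Pareto sets rather than a fixed weighting.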

  1. Instrument design and optimization using genetic algorithms

    SciTech Connect

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-15

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.

  2. Specific optimization of genetic algorithm on special algebras

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Novak, Vilem; Dyba, Martin; Schenk, Jiri

    2016-06-01

    Searching for complex finite algebras can be done successfully by means of a genetic algorithm, as we showed in earlier work. This genetic algorithm needs specific optimization of its crossover and mutation operators. We present details of these optimizations, which are already implemented in EQCreator, a software application developed for this task.

  3. Fuzzy logic and guidance algorithm design

    SciTech Connect

    Leng, G.

    1994-12-31

    This paper explores the use of fuzzy logic for the design of a terminal guidance algorithm for an air-to-surface missile against a stationary target. The design objectives are (1) a smooth transition at lock-on, (2) large impact angles and (3) self-limiting acceleration commands. The method of reverse kinematics is used in the design of the membership functions and the rule base. Simulation results for a Mach 0.8 missile with a 6g acceleration limit are compared with a traditional proportional navigation scheme.

  4. Multidisciplinary design optimization using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Unal, Resit

    1994-12-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared

  5. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
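    The GA mechanics the abstract walks through (a population of candidate designs, fitness-proportional reproduction, crossover, mutation, and discrete variables that gradient methods cannot handle) can be sketched on a toy discrete design space. The "design" below (engine count plus a material index) and its merit function are hypothetical stand-ins for a real vehicle performance model.

```python
import random

# Toy discrete design space: number of engines and a material index, the
# kind of integer-valued variables gradient-based optimizers cannot handle.
ENGINES = [1, 2, 3, 4, 5, 6, 7, 8]
MATERIALS = [0, 1, 2, 3]

def fitness(design):
    engines, material = design
    # Hypothetical merit: benefit of added engines minus a quadratic cost,
    # plus a per-material bonus; stands in for performance/life-cycle cost.
    return engines * 10 - engines ** 2 + [0, 3, 5, 2][material]

def evolve(pop_size=20, generations=40, pmut=0.1, seed=1):
    rng = random.Random(seed)
    pop = [(rng.choice(ENGINES), rng.choice(MATERIALS)) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional selection (weights shifted to stay positive).
        fits = [fitness(d) for d in pop]
        base = min(fits)
        weights = [f - base + 1 for f in fits]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            # One-point crossover: children swap the material gene.
            for c in ((a[0], b[1]), (b[0], a[1])):
                if rng.random() < pmut:       # mutation: resample the design
                    c = (rng.choice(ENGINES), rng.choice(MATERIALS))
                nxt.append(c)
        pop = nxt
    return max(pop, key=fitness)
```

    Note that only fitness values drive the search; no gradients are computed, which is exactly why GAs tolerate discrete variables and discontinuous domains.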

  6. Fuzzy controller design by parallel genetic algorithms

    NASA Astrophysics Data System (ADS)

    Mondelli, G.; Castellano, G.; Attolico, Giovanni; Distante, Arcangelo

    1998-03-01

    Designing a fuzzy system involves defining membership functions and constructing rules. Carrying out these two steps manually often results in a poorly performing system. Genetic Algorithms (GAs) have proved to be a useful tool for designing optimal fuzzy controllers. In order to increase the efficiency and effectiveness of their application, parallel GAs (PGAs), evolving several populations synchronously with different balances between exploration and exploitation, have been implemented using a SIMD machine (APE100/Quadrics). The parameters to be identified are coded in such a way that the algorithm implicitly provides a compact fuzzy controller, by finding only the necessary rules and removing useless inputs from them. Early results, using as a test case a fuzzy controller implementing the wall-following task for a real vehicle, provided better fitness values in fewer generations than previous experiments made using a sequential implementation of GAs.

  7. Material design using surrogate optimization algorithm

    NASA Astrophysics Data System (ADS)

    Khadke, Kunal R.

    Nanocomposite ceramics have been widely studied in order to tailor desired properties at high temperatures. Methodologies for material design are still under development. While finite element modeling (FEM) provides significant insight on material behavior, few design researchers have addressed the design paradox that accompanies this rapid design space expansion. A surrogate optimization model management framework has been proposed to make this design process tractable. In the surrogate optimization material design tool, the analysis cost is reduced by performing simulations on the surrogate model instead of the high-density finite element model. The methodology is applied to find the optimal number of silicon carbide (SiC) particles in a silicon nitride (Si3N4) composite with maximum fracture energy [2]. Along with a deterministic optimization algorithm, model uncertainties have also been considered with the use of the robust design optimization (RDO) method, ensuring a design of minimum sensitivity to changes in the parameters. Applied to nanocomposite design, these methodologies significantly reduce cost and design cycle time.

  8. Designing conducting polymers using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Giro, R.; Cyrillo, M.; Galvão, D. S.

    2002-11-01

    We have developed a new methodology to design conducting polymers with pre-specified properties. The methodology is based on genetic algorithms (GAs) coupled to the Negative Factor Counting (NFC) technique. We present results for a case study of polyanilines, one of the most important families of conducting polymers. The methodology proved capable of automatically determining the optimum relative concentrations for binary and ternary disordered polyaniline alloys exhibiting metallic properties. It is completely general and can be used to design new classes of materials.

  9. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate the collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness, and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific probe binding in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, or comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
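    A minimal version of the consistency test described above might perturb the input data with noise and compare the resulting models, for example by the Jaccard similarity of their reaction sets. The thresholding "reconstruction" below is a toy stand-in for a real context-specific algorithm; all names and numbers are assumptions.

```python
import random

random.seed(2)

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Toy "reconstruction": keep reactions whose expression evidence passes a
# threshold (a stand-in for a real context-specific reconstruction method).
def reconstruct(expression, threshold=0.5):
    return {rxn for rxn, level in expression.items() if level >= threshold}

reactions = [f"R{i}" for i in range(100)]
expression = {rxn: random.random() for rxn in reactions}

# Consistency test: perturb the input with noise and compare the models.
model_ref = reconstruct(expression)
noisy = {rxn: level + random.gauss(0, 0.05)
         for rxn, level in expression.items()}
model_noisy = reconstruct(noisy)
similarity = jaccard(model_ref, model_noisy)
```

    A robust algorithm should keep the similarity close to 1 as the noise level grows; plotting similarity against noise magnitude gives a simple consistency curve.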

  10. Predicting Resistance Mutations Using Protein Design Algorithms

    SciTech Connect

    Frey, K.; Georgiev, I; Donald, B; Anderson, A

    2010-01-01

    Drug resistance resulting from mutations to the target is an unfortunately common phenomenon that limits the lifetime of many of the most successful drugs. In contrast to investigating mutations after clinical exposure, it would be powerful to incorporate strategies early in the development process to predict and overcome the effects of possible resistance mutations. Here we present a unique prospective application of an ensemble-based protein design algorithm, K*, to predict potential resistance mutations in dihydrofolate reductase from Staphylococcus aureus, using positive design to maintain catalytic function and negative design to interfere with binding of a lead inhibitor. Enzyme inhibition assays show that three of the four highly ranked predicted mutants are active yet display lower affinity (18-, 9-, and 13-fold) for the inhibitor. A crystal structure of the top-ranked mutant enzyme validates the predicted conformations of the mutated residues and the structural basis of the loss of potency. The use of protein design algorithms to predict resistance mutations could be incorporated into a lead design strategy against any target that is susceptible to mutational resistance.

  11. Fast search algorithms for computational protein design.

    PubMed

    Traoré, Seydou; Roberts, Kyle E; Allouche, David; Donald, Bruce R; André, Isabelle; Schiex, Thomas; Barbe, Sophie

    2016-05-01

    One of the main challenges in computational protein design (CPD) is the huge size of the protein sequence and conformational space that has to be computationally explored. Recently, we showed that state-of-the-art combinatorial optimization technologies based on Cost Function Network (CFN) processing allow speeding up provable rigid-backbone protein design methods by several orders of magnitude. Building on this, we improved and injected CFN technology into the well-established CPD package Osprey to allow all Osprey CPD algorithms to benefit from the associated speedups. Because Osprey fundamentally relies on the ability of A* to produce conformations in increasing order of energy, we defined new A* strategies combining CFN lower bounds with a new side-chain-positioning-based branching scheme. Beyond the speedups obtained in the new A*-CFN combination, this novel branching scheme enables a much faster enumeration of suboptimal sequences, far beyond what is reachable without it. Together with the immediate and important speedups provided by CFN technology, these developments directly benefit all the algorithms that previously relied on the DEE/A* combination inside Osprey and make it possible to solve larger CPD problems with provable algorithms. PMID:26833706
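    The A* property the passage relies on, producing conformations in increasing order of energy, can be sketched for a toy additive energy model with independent per-position rotamer energies. Real CPD energies also include pairwise terms, and the CFN bounds used in the paper are far tighter than the simple best-remaining-rotamer bound assumed here.

```python
import heapq

# Toy additive energy model: independent per-position rotamer energies.
energies = [
    [0.0, 1.2, 2.5],   # rotamer energies at position 0
    [0.3, 0.9],        # position 1
    [0.1, 0.4, 1.1],   # position 2
]

# Admissible lower bound on any completion: best remaining rotamers.
suffix_min = [0.0] * (len(energies) + 1)
for i in range(len(energies) - 1, -1, -1):
    suffix_min[i] = suffix_min[i + 1] + min(energies[i])

def astar_enumerate():
    """Yield full rotamer assignments in non-decreasing total energy."""
    heap = [(suffix_min[0], 0.0, ())]   # (f = g + h, g, partial assignment)
    while heap:
        f, g, partial = heapq.heappop(heap)
        pos = len(partial)
        if pos == len(energies):
            yield g, partial            # a complete, provably next-best leaf
            continue
        for r, e in enumerate(energies[pos]):
            g2 = g + e
            heapq.heappush(heap, (g2 + suffix_min[pos + 1], g2, partial + (r,)))

ordered = list(astar_enumerate())
```

    Because the heuristic is admissible, complete assignments are popped in provably non-decreasing energy order, which is exactly what sequence enumeration in Osprey-style pipelines needs.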

  12. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  13. Algorithm design of liquid lens inspection system

    NASA Astrophysics Data System (ADS)

    Hsieh, Lu-Lin; Wang, Chun-Chieh

    2008-08-01

    In the mobile lens domain, glass lenses are often applied where high resolution is required, but a glass zoom lens must be collocated with movable machinery and a voice-coil motor, which imposes space limits on miniaturized designs. With the development of high-level molding component technology, the liquid lens has become a focus of mobile phone and digital camera companies. A liquid lens set with solid optical lenses and a driving circuit has replaced the original components, reducing the volume requirement to merely 50% of the original design. Moreover, with its high focus-adjusting speed, low energy requirements, high durability, and low-cost manufacturing process, the liquid lens shows advantages in a competitive market. In the past, the authors only needed to inspect scrape defects caused by external force on glass lenses. For the liquid lens, the authors must inspect the state of four different structural layers owing to its different design and structure. In this paper, the authors apply machine vision and digital image processing technology to perform inspections on a particular layer according to the needs of users. Experimental results show that the proposed algorithm can automatically remove the out-of-focus background, extract the region of interest, and efficiently find and analyze the defects in the particular layer. In the future, the authors will combine the algorithm with automatic-focus technology to implement inside inspection based on product inspection demands.

  14. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and particle swarm optimization (PSO) are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms, which are implemented in MATLAB. Experimental results show that TLBO estimates the filter parameters more accurately than BB-BC optimization and converges faster than PSO. TLBO is therefore preferable where accuracy is more essential than convergence speed.
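    A minimal TLBO sketch follows, illustrating the teacher and learner phases and the absence of algorithm-specific control parameters (only population size and iteration count are needed). The IIR-filter error function is replaced by a toy squared-error objective against assumed "unknown plant" coefficients.

```python
import random

random.seed(3)

# Stand-in objective: squared error between candidate and assumed "unknown
# plant" coefficients (a real application compares filter output signals).
TRUE = [0.7, -0.3, 0.5]

def error(x):
    return sum((a - b) ** 2 for a, b in zip(x, TRUE))

def tlbo(pop_size=20, iters=100, lo=-1.0, hi=1.0):
    dim = len(TRUE)
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        teacher = min(pop, key=error)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            # Teacher phase: move toward the teacher, away from the mean.
            tf = random.choice([1, 2])            # teaching factor
            cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            if error(cand) < error(x):
                pop[i] = x = cand
            # Learner phase: learn from a randomly chosen peer.
            peer = pop[random.randrange(pop_size)]
            if error(x) < error(peer):
                cand = [x[d] + random.random() * (x[d] - peer[d])
                        for d in range(dim)]
            else:
                cand = [x[d] + random.random() * (peer[d] - x[d])
                        for d in range(dim)]
            if error(cand) < error(x):
                pop[i] = cand
    return min(pop, key=error)

best = tlbo()
```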

  15. UWB Tracking System Design with TDOA Algorithm

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan

    2006-01-01

    This presentation discusses an ultra-wideband (UWB) tracking system design effort using the TDOA (Time Difference of Arrival) tracking algorithm. UWB technology is exploited to implement the tracking system because of its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least-squares method is chosen to solve the non-linear TDOA equations. Matlab simulations in both two-dimensional and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low-noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by a prototype development project, Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center to aid in surveillance around the International Space Station (ISS).
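    The first stage of a TDOA least-squares solution can be sketched by linearizing the range-difference equations around a reference receiver, with unknowns (x, y, r0) where r0 is the range to the reference. The geometry below is an assumed noise-free example and omits the second-stage weighted refinement mentioned in the abstract.

```python
import numpy as np

# Assumed 2D geometry: four UWB receivers (reference at the origin), one tag.
receivers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tag = np.array([3.0, 7.0])

# Noise-free range differences d_i relative to the reference receiver.
ranges = np.linalg.norm(receivers - tag, axis=1)
tdoa = ranges[1:] - ranges[0]

# Linearized TDOA equations: -2*s_i.x - 2*d_i*r0 = d_i^2 - ||s_i||^2,
# with unknowns (x, y, r0); one linear equation per non-reference receiver.
A = np.hstack([-2.0 * receivers[1:], -2.0 * tdoa[:, None]])
b = tdoa ** 2 - np.sum(receivers[1:] ** 2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
estimate = sol[:2]
```

    With noisy TDOA measurements, the second stage would re-weight these equations by the measurement covariance and refine the estimate.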

  16. An effective hybrid cuckoo search and genetic algorithm for constrained engineering design optimization

    NASA Astrophysics Data System (ADS)

    Kanagaraj, G.; Ponnambalam, S. G.; Jawahar, N.; Mukund Nilakantan, J.

    2014-10-01

    This article presents an effective hybrid cuckoo search and genetic algorithm (HCSGA) for solving engineering design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables. The proposed algorithm, HCSGA, is first applied to 13 standard benchmark constrained optimization functions and subsequently used to solve three well-known design problems reported in the literature. The numerical results obtained by HCSGA show competitive performance with respect to recent algorithms for constrained design optimization problems.
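    A bare-bones cuckoo search sketch on a toy penalty-constrained problem (not the paper's hybrid HCSGA or its benchmark set; the objective, bounds, and parameters are illustrative assumptions): Lévy-flight steps generate new solutions, and a fraction of the worst nests is abandoned each iteration.

```python
import math
import random

random.seed(4)

# Toy constrained problem: minimize (x-2)^2 + (y-1)^2 subject to
# x + y <= 2, handled with a static penalty (all values assumed).
def objective(p):
    x, y = p
    penalty = 1e3 * max(0.0, x + y - 2.0) ** 2
    return (x - 2.0) ** 2 + (y - 1.0) ** 2 + penalty

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-flight step length.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(n_nests=15, iters=300, pa=0.25, lo=-5.0, hi=5.0):
    nests = [[random.uniform(lo, hi) for _ in range(2)]
             for _ in range(n_nests)]
    best = min(nests, key=objective)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # New egg via a Levy flight; keep it greedily if it improves.
            cand = [min(hi, max(lo, c + 0.05 * levy_step())) for c in nest]
            if objective(cand) < objective(nest):
                nests[i] = cand
        # Abandon a fraction pa of the worst nests (discovered eggs).
        nests.sort(key=objective)
        for i in range(int(n_nests * (1 - pa)), n_nests):
            nests[i] = [random.uniform(lo, hi) for _ in range(2)]
        best = min(nests + [best], key=objective)
    return best

best = cuckoo_search()
```

    The hybrid in the paper would additionally recombine nests with GA crossover and mutation; here only the cuckoo half is sketched.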

  17. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
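    The specification-search idea can be sketched with a binary ant colony: pheromone values bias which candidate parameters are freed, and the best specification found so far is reinforced after each iteration. The `fit` function below is a synthetic stand-in for a real structural-equation-model fit statistic; all names and settings are assumptions.

```python
import random

random.seed(5)

# Stand-in search: choose which of 10 candidate paths to free in a model;
# "fit" rewards matching a hidden true specification (a real application
# would score each candidate model with, e.g., chi-square or BIC).
TRUE_SPEC = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
N = len(TRUE_SPEC)

def fit(spec):
    return sum(1 for a, b in zip(spec, TRUE_SPEC) if a == b)

def aco(n_ants=10, iters=40, rho=0.2):
    # pheromone[i] = [attractiveness of fixing path i, of freeing path i].
    pheromone = [[1.0, 1.0] for _ in range(N)]
    best, best_fit = None, -1
    for _ in range(iters):
        for _ in range(n_ants):
            spec = [0 if random.random() < p0 / (p0 + p1) else 1
                    for p0, p1 in pheromone]
            f = fit(spec)
            if f > best_fit:
                best, best_fit = spec, f
        # Evaporate, then reinforce the best-so-far specification.
        for i in range(N):
            pheromone[i][0] *= 1 - rho
            pheromone[i][1] *= 1 - rho
            pheromone[i][best[i]] += 1.0
    return best, best_fit

best_spec, best_score = aco()
```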

  18. Application of a genetic algorithm to wind turbine design

    SciTech Connect

    Selig, M.S.; Coverstone-Carroll, V.L.

    1995-09-01

    This paper presents an optimization procedure for stall-regulated horizontal-axis wind turbines. A hybrid approach is used that combines the advantages of a genetic algorithm and an inverse design method. This method is used to determine the optimum blade pitch and the blade chord and twist distributions that maximize annual energy production. To illustrate the method, a family of 25 wind turbines was designed to examine the sensitivity of annual energy production to changes in rotor blade length and peak rotor power. Trends are revealed that should aid in the design of new rotors for existing turbines. In a second application, a series of five wind turbines was designed to determine the benefits of specifically tailoring wind turbine blades for the average wind speed at a particular site. The results have important practical implications for rotors designed for the Midwest versus sites where the average wind speed may be greater.

  19. Algorithmic Processes for Increasing Design Efficiency.

    ERIC Educational Resources Information Center

    Terrell, William R.

    1983-01-01

    Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)

  20. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  1. Birefringent filter design by use of a modified genetic algorithm.

    PubMed

    Wen, Mengtao; Yao, Jianping

    2006-06-10

    A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation angles and the element lengths are determined by the genetic algorithm to minimize the sidelobe levels of the filters. Unlike a standard genetic algorithm, the proposed algorithm reduces the problem space of the birefringent filter design to achieve faster speed and better performance. The design of 4-, 8-, and 14-section birefringent filters with an improved sidelobe suppression ratio is achieved. A 4-section filter designed with the algorithm is experimentally realized. PMID:16761031

  2. Specific filter designs for PFBC

    SciTech Connect

    Lippert, T.E.; Bruck, G.J.; Newby, R.A.; Smeltzer, E.E.

    1993-09-01

    Bubbling bed PFBC technology is currently being demonstrated at commercial scale. Economic and performance improvements in these first-generation PFBC plants can be realized with the application of hot gas particulate filters. Both the secondary cyclone(s) and stack gas ESP(s) could be eliminated, saving costs and providing lower system pressure losses. The cleaner (essentially ash-free) gas provided by the hot gas filter also permits a wider selection of gas turbines with potentially higher performance. For these bubbling bed PFBC applications, the hot gas filter must operate at temperatures of 1580°F and system pressures of 175 psia (conditions typical of the Tidd PFBC plant). Inlet dust loadings to the filter are estimated to be about 500 to 1000 ppm, with mass mean particle diameters ranging from 1.5 to 3 µm. For commercial applications typical of the 70 MWe Tidd PFBC demonstration unit, the filter must treat up to 56,600 acfm of gas flow. Scaleup of this design to about 320 MWe would require filtering over 160,000 acfm of gas flow. For these commercial-scale systems, multiple filter vessels are required; thus, the filter design should be modular for scaling. An alternative to the bubbling bed PFBC is the circulating bed concept. In this process the hot gas filter will in general be exposed to higher operating temperatures (1650°F) and significantly higher (by a factor of 10 or more) particle loading.

  3. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand, taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types, which makes it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty, with the aim of finding the algorithm best suited to subdividing each block type. The hypothesis is that, given the different approaches the block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability that it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites; it also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites, and produces more similar parcel shapes and patterns.

  4. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  5. Linear vs. function-based dose algorithm designs.

    PubMed

    Stanford, N

    2011-03-01

    The performance requirements prescribed in IEC 62387-1, 2007 recommend linear, additive algorithms for external dosimetry [IEC. Radiation protection instrumentation--passive integrating dosimetry systems for environmental and personal monitoring--Part 1: General characteristics and performance requirements. IEC 62387-1 (2007)]. Neither of the two current standards for performance of external dosimetry in the USA addresses the additivity of dose results [American National Standards Institute, Inc. American National Standard for dosimetry personnel dosimetry performance criteria for testing. ANSI/HPS N13.11 (2009); Department of Energy. Department of Energy Standard for the performance testing of personnel dosimetry systems. DOE/EH-0027 (1986)]. While there are significant merits to adopting a purely linear solution to estimating doses from multi-element external dosemeters, differences in the standards result in technical as well as perception challenges in designing a single algorithm approach that will satisfy both IEC and USA external dosimetry performance requirements. The dosimetry performance testing standards in the USA do not incorporate type testing, but rely on biennial performance tests to demonstrate proficiency in a wide range of pure and mixed fields. The test results are used exclusively to judge the system proficiency, with no specific requirements on the algorithm design. Technical challenges include mixed beta/photon fields with a beta dose as low as 0.30 mSv mixed with 0.05 mSv of low-energy photons. Perception-based challenges, resulting from over 20 y of experience with this type of performance testing in the USA, include the common belief that the overall quality of dosemeter performance can be judged from performance in pure fields. This paper presents synthetic testing results from currently accredited function-based algorithms and newly developed purely linear algorithms. A comparison of the performance data highlights the benefits of each approach.

  6. GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single-channel, near-IR imager/optical transient event detector, used to detect, locate, and measure total lightning activity over the full disk. The next-generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) provide early warning of tornadic activity, and (3) accumulate a long-term database to track decadal changes in lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. The GOES-R Risk Reduction Team and the Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources; this may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution.

  7. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  8. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. An optimal design of a Proportional-Integral-Derivative (PID) controller is then obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm to the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster and in less computation time to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization such as industrial planning, resource allocation, scheduling, decision making, pattern recognition, and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.
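    A simplified sketch of PSO-based PID tuning follows (not the AVURPSO algorithm itself): a linearly decreasing inertia weight stands in for the adaptive momentum factor, and an assumed first-order discrete plant stands in for the real process.

```python
import random

random.seed(6)

# Assumed toy first-order discrete plant: y[k+1] = 0.9*y[k] + 0.1*u[k].
def ise(gains, steps=100):
    """Sum of squared error for a unit-step setpoint under PID control."""
    kp, ki, kd = gains
    y = integ = prev_err = 0.0
    cost = 0.0
    for _ in range(steps):
        err = 1.0 - y
        integ += err
        u = kp * err + ki * integ + kd * (err - prev_err)
        prev_err = err
        y = 0.9 * y + 0.1 * u
        cost += err * err
    return cost

def pso(n=25, iters=80, lo=0.0, hi=5.0):
    dim = 3
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=ise)[:]
    for t in range(iters):
        # Linearly decreasing inertia (stand-in for the adaptive momentum).
        w = 0.9 - 0.5 * t / iters
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * random.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if ise(pos[i]) < ise(pbest[i]):
                pbest[i] = pos[i][:]
                if ise(pos[i]) < ise(gbest):
                    gbest = pos[i][:]
    return gbest

gains = pso()   # tuned (kp, ki, kd) for the toy plant
```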

  9. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, he worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system combining a GA with a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary, multi-objective optimization of a transonic compressor blade based on the proposed model. His numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  10. Aerodynamic optimum design of transonic turbine cascades using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Li, Jun; Feng, Zhenping; Chang, Jianzhong; Shen, Zuda

    1997-06-01

    This paper presents an aerodynamic optimum design method for transonic turbine cascades based on Genetic Algorithms coupled to an inviscid Euler flow solver and a boundary-layer calculation. The Genetic Algorithms control the evolution of a population of cascades toward an optimum design. The fitness value of each string is evaluated using the flow solver. The design procedure has been developed and the behavior of the genetic algorithms has been tested. The objective functions of the design examples are the minimum mean-square deviation between the target pressure and the computed pressure and the minimum amount of user expertise.

  11. Engineered waste-package-system design specification

    SciTech Connect

    Not Available

    1983-05-01

    This report documents the waste package performance requirements and geologic and waste form data bases used in developing the conceptual designs for waste packages for salt, tuff, and basalt geologies. The data base reflects the latest geotechnical information on the geologic media of interest. The parameters or characteristics specified primarily cover spent fuel, defense high-level waste, and commercial high-level waste forms. The specification documents the direction taken during the conceptual design activity. A separate design specification will be developed prior to the start of the preliminary design activity.

  12. Conceptual space systems design using meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Byoungsoo

    A recent tendency in designing Space Systems for a specific mission can be described easily and explicitly by the new design-to-cost philosophy, "faster, better, cheaper" (fast-track, innovative, lower-cost, small-sat). This means that Space Systems engineers must do more with less and in less time. This new philosophy can result in space exploration programs with smaller spacecraft, more frequent flights at a remarkably lower cost per flight (cost first, performance second), shorter development schedules, and more focused missions. Some early attempts at "faster, better, cheaper" possibly moved too fast and eliminated critical tests or did not "space-qualify" the innovations, causing failure. A new discipline of Constrained Optimization must be employed. With this new philosophy, Space Systems Design becomes a difficult problem to model in the new, more challenging environment. The objective of Space Systems Design has moved from maximizing space mission performance under weak time and weak cost constraints (accepting schedule slippage and cost growth) but with technology risk constraints, to maximizing mission goals under firm cost and schedule constraints but with prudent technology risk constraints, or, equivalently maximizing "expected" space mission performance per unit cost. Within this mindset, a complex Conceptual Space Systems Design Model was formulated as a (simply bounded) Constrained Combinatorial Optimization Problem with Estimated Total Mission Cost (ETMC) as its objective function to be minimized and subsystems trade-offs and design parameters as the decision variables in its design space, using parametric estimating relationships (PERs) and cost estimating relationships (CERs). Here, given a complex Conceptual Space Systems Design Problem, a (simply bounded) Constrained Combinatorial Optimization "solution" is defined as the process of achieving the most favorable alternative for the system on the basis of objective decision-making evaluation

  13. Design specification for the core management program: COREMAP

    SciTech Connect

    Jones, D.B.

    1991-04-01

    This report presents the design specifications for the core management program COREMAP. COREMAP is a computer code which performs fuel cycle scoping and preliminary core design calculations for light water reactors. It employs solution techniques which are compatible with existing EPRI methodologies and it includes new methodologies designed to facilitate the analysis effort. The primary neutronic and thermal-hydraulic techniques implemented in COREMAP are derived from the nodal simulation code SIMULATE-E. Code performance is improved by the development of a Spatial Collapsing Algorithm. User interaction is improved by the implementation of many user-convenient features including interactive screens for input specification, detailed error checking, and manual and automated fuel shuffle options. COREMAP is designed as a modular code system using standard data interface files. It is written entirely in FORTRAN-77 and can be implemented on any computer system supporting this language level and ASCII terminals. 16 refs., 10 figs., 7 tabs.

  14. Domain specific software design for decision aiding

    NASA Technical Reports Server (NTRS)

    Keller, Kirby; Stanley, Kevin

    1992-01-01

    McDonnell Aircraft Company (MCAIR) is involved in many large multi-discipline design and development efforts of tactical aircraft. These involve a number of design disciplines that must be coordinated to produce an integrated design and a successful product. Our interpretation of a domain specific software design (DSSD) is that of a representation or framework that is specialized to support a limited problem domain. A DSSD is an abstract software design that is shaped by the problem characteristics. This parallels the theme of object-oriented analysis and design of letting the problem model directly drive the design. The DSSD concept extends the notion of software reusability to include representations or frameworks. It supports the entire software life cycle and specifically leads to improved prototyping capability, supports system integration, and promotes reuse of software designs and supporting frameworks. The example presented in this paper is the task network architecture or design which was developed for the MCAIR Pilot's Associate program. The task network concept supported both module development and system integration within the domain of operator decision aiding. It is presented as an instance where a software design exhibited many of the attributes associated with DSSD concept.

  15. A generalized algorithm to design finite field normal basis multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1986-01-01

    Finite field arithmetic logic is central in the implementation of some error-correcting coders and some cryptographic devices. There is a need for good multiplication algorithms which can be easily realized. Massey and Omura recently developed a new multiplication algorithm for finite fields based on a normal basis representation. Using the normal basis representation, the design of the finite field multiplier is simple and regular. The fundamental design of the Massey-Omura multiplier is based on the design of a product function. In this article, a generalized algorithm to locate a normal basis in a field is first presented. Using this normal basis, an algorithm to construct the product function is then developed. This design does not depend on particular characteristics of the generator polynomial of the field.
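    The cyclic-shift property that makes normal-basis multipliers regular can be illustrated with a small sketch (this is not the article's algorithm or code): locate a normal element of GF(2^4) by brute force and confirm that squaring rotates the coordinate vector by one position.

```python
MOD = 0b10011  # the irreducible polynomial x^4 + x + 1 defining GF(2^4)

def gf_mul(a, b):
    """Carry-less multiplication of 4-bit field elements modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def conjugates(beta):
    """Frobenius orbit beta, beta^2, beta^4, beta^8."""
    out = [beta]
    for _ in range(3):
        out.append(gf_mul(out[-1], out[-1]))
    return out

def span(basis):
    """All GF(2)-linear combinations of the four conjugates."""
    vals = set()
    for mask in range(16):
        v = 0
        for i in range(4):
            if mask >> i & 1:
                v ^= basis[i]
        vals.add(v)
    return vals

def is_normal(beta):
    # beta is normal iff its conjugates are linearly independent over GF(2)
    return len(span(conjugates(beta))) == 16

beta = next(b for b in range(1, 16) if is_normal(b))  # brute-force search
basis = conjugates(beta)

def to_coords(x):
    """Coordinates of x in the normal basis, found by exhaustive search."""
    for mask in range(16):
        v = 0
        for i in range(4):
            if mask >> i & 1:
                v ^= basis[i]
        if v == x:
            return [mask >> i & 1 for i in range(4)]

# Squaring permutes the conjugates cyclically, so in normal-basis
# coordinates it is a one-position rotation -- the property that makes
# squaring essentially free in a Massey-Omura multiplier.
x = basis[0] ^ basis[2]
assert to_coords(gf_mul(x, x)) == to_coords(x)[-1:] + to_coords(x)[:-1]
```

    The brute-force search stands in for the article's generalized normal-basis location algorithm; for fields this small an exhaustive test of linear independence suffices.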

  16. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  17. Algorithme intelligent d'optimisation d'un design structurel de grande envergure

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, particularly problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the solutions closest to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates among the known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialize the population of an island of the genetic algorithm, which optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem; then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm created during this thesis specifically to solve optimisation problems in the field of mechanical device structural design. The algorithm is named GATE, and is essentially a real number…

  18. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault-tolerance property, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space: mappings with low communication costs, in some cases optimal. Another evaluated parameter, the vulnerability index, is then considered as a criterion for estimating the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
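    The linear trade-off function the abstract describes can be sketched as a weighted sum of normalised communication cost and vulnerability index; the weight `w`, the `rank_mappings` helper, and the candidate numbers below are invented for illustration.

```python
# Hedged sketch: rank candidate core-to-router mappings by a linear
# combination of communication cost and vulnerability index. Both terms
# are min-max normalised so the weight w trades them off directly.

def rank_mappings(mappings, w=0.5):
    """mappings: list of (name, comm_cost, vulnerability) tuples."""
    costs = [m[1] for m in mappings]
    vulns = [m[2] for m in mappings]

    def norm(x, xs):  # min-max normalisation onto [0, 1]
        lo, hi = min(xs), max(xs)
        return (x - lo) / (hi - lo) if hi > lo else 0.0

    scored = [(w * norm(c, costs) + (1 - w) * norm(v, vulns), name)
              for name, c, v in mappings]
    return [name for _, name in sorted(scored)]

candidates = [("A", 120.0, 0.9), ("B", 150.0, 0.2), ("C", 135.0, 0.4)]
print(rank_mappings(candidates, w=0.5))  # → ['C', 'A', 'B']
```

    With w = 1.0 the ranking degenerates to pure communication cost, which is one way to reproduce the heuristic's original ordering before fault tolerance is considered.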

  19. AbDesign: an algorithm for combinatorial backbone design guided by natural conformations and sequences

    PubMed Central

    Lapidoth, Gideon D.; Baran, Dror; Pszolla, Gabriele M.; Norn, Christoffer; Alon, Assaf; Tyka, Michael D.; Fleishman, Sarel J.

    2016-01-01

    Computational design of protein function has made substantial progress, generating new enzymes, binders, inhibitors, and nanomaterials not previously seen in nature. However, the ability to design new protein backbones for function – essential to exert control over all polypeptide degrees of freedom – remains a critical challenge. Most previous attempts to design new backbones computed the mainchain from scratch. Here, instead, we describe a combinatorial backbone and sequence optimization algorithm called AbDesign, which leverages the large number of sequences and experimentally determined molecular structures of antibodies to construct new antibody models, dock them against target surfaces and optimize their sequence and backbone conformation for high stability and binding affinity. We used the algorithm to produce antibody designs that target the same molecular surfaces as nine natural, high-affinity antibodies; in six the backbone conformation at the core of the antibody binding surface is similar to the natural antibody targets, and in several cases sequence and sidechain conformations recapitulate those seen in the natural antibodies. In the case of an anti-lysozyme antibody, designed antibody CDRs at the periphery of the interface, such as L1 and H2, show a greater backbone conformation diversity than the CDRs at the core of the interface, and increase the binding surface area compared to the natural antibody, which could enhance affinity and specificity. PMID:25670500

  20. Genetic algorithms for the construction of D-optimal designs

    SciTech Connect

    Heredia-Langner, Alejandro; Carlyle, W M.; Montgomery, D C.; Borror, Connie M.; Runger, George C.

    2003-01-01

    Computer-generated designs are useful for situations where standard factorial, fractional factorial or response surface designs cannot be easily employed. Alphabetically-optimal designs are the most widely used type of computer-generated designs, and of these, the D-optimal (or D-efficient) class is extremely popular. D-optimal designs are usually constructed by algorithms that sequentially add and delete points from a potential design, using a candidate set of points spaced over the region of interest. We present a technique for generating D-efficient designs using genetic algorithms (GA). This approach eliminates the need to explicitly consider a candidate set of experimental points, and it can handle highly constrained regions while maintaining a level of performance comparable to more traditional design construction techniques.
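    A minimal sketch of the idea, assuming a first-order two-factor model and a mutation-only evolutionary loop (a simplification of a full GA with crossover): evolve the design points directly, with no candidate set, scoring each design by the D-criterion det(X'X).

```python
import random

# Hedged sketch: evolve a 6-run design on [-1, 1]^2 for the model
# y = b0 + b1*x1 + b2*x2, maximising det(X'X). Population size, mutation
# scale, and run length are invented illustrative values.

N_RUNS = 6

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def d_criterion(design):
    X = [[1.0, x1, x2] for x1, x2 in design]      # model matrix
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    return det3(XtX)

def mutate(design, sigma=0.2):
    # perturb points, clipping back into the design region
    return [(max(-1.0, min(1.0, x1 + random.gauss(0, sigma))),
             max(-1.0, min(1.0, x2 + random.gauss(0, sigma))))
            for x1, x2 in design]

random.seed(0)
pop = [[(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N_RUNS)]
       for _ in range(20)]
for _ in range(200):
    pop.sort(key=d_criterion, reverse=True)       # elitist selection
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = max(pop, key=d_criterion)
print(d_criterion(best))  # points should drift toward the corners of the region
```

    For reference, a balanced corner design such as (±1, ±1) repeated over 6 runs gives det(X'X) = 192 for this model, which the evolved design should approach.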

  1. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    issues in the GA, it is possible to have idle processors. However, as long as the load at each processing node is similar, the processors are kept busy nearly all of the time. In applying GAs to circuit design, a suitable genetic representation is that of a circuit-construction program. We discuss one such circuit-construction programming language and show how evolution can generate useful analog circuit designs. This language has the desirable property that virtually all sets of combinations of primitives result in valid circuit graphs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm and circuit simulation software, we present experimental results as applied to three analog filter and two amplifier design tasks. For example, a figure shows an 85 dB amplifier design evolved by our system, and another figure shows the performance of that circuit (gain and frequency response). In all tasks, our system is able to generate circuits that achieve the target specifications.

  2. Application of Simulated Annealing and Related Algorithms to TWTA Design

    NASA Technical Reports Server (NTRS)

    Radke, Eric M.

    2004-01-01

    decremented and the process repeats. Eventually (and hopefully), a near-globally optimal solution is attained as T approaches zero. Several exciting variants of SA have recently emerged, including Discrete-State Simulated Annealing (DSSA) and Simulated Tempering (ST). The DSSA algorithm takes the thermodynamic analogy one step further by categorizing objective function evaluations into discrete states. In doing so, many of the case-specific problems associated with fine-tuning the SA algorithm can be avoided; for example, theoretical approximations for the initial and final temperature can be derived independently of the case. In this manner, DSSA provides a scheme that is more robust with respect to widely differing design surfaces. ST differs from SA in that the temperature T becomes an additional random variable in the optimization. The system is also kept in equilibrium as the temperature changes, as opposed to being driven out of equilibrium as the temperature changes in SA. ST is designed to overcome obstacles in design surfaces where numerous local minima are separated by high barriers. These algorithms are incorporated into the optimal design of the traveling-wave tube amplifier (TWTA). The area under scrutiny is the collector, in which it would be ideal to use negative potential to decelerate the spent electron beam to zero kinetic energy just as it reaches the collector surface. In reality this is not plausible due to a number of physical limitations, including repulsion and differing levels of kinetic energy among individual electrons. Instead, the collector is designed with multiple stages depressed below ground potential. The design of this multiple-stage collector is the optimization problem of interest. One remaining problem in SA and DSSA is the difficulty of determining when equilibrium has been reached so that the current Markov chain can be terminated. It has been suggested in recent literature that simulating the thermodynamic properties specific…
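    The Metropolis acceptance rule and cooling schedule described above can be sketched generically (this is not the TWTA collector optimization; the objective and schedule parameters are illustrative):

```python
import math
import random

# Generic simulated-annealing sketch: minimise a rugged 1-D function
# using geometric cooling. Objective, step size, and temperatures are
# invented illustrative choices.

def objective(x):
    return x * x + 10 * math.sin(3 * x)   # many local minima

random.seed(1)
x = random.uniform(-5, 5)
best_x = x
T = 5.0
while T > 1e-3:
    cand = x + random.gauss(0, 0.5)       # propose a neighbouring state
    delta = objective(cand) - objective(x)
    # Metropolis rule: always accept improvements; accept uphill moves
    # with probability exp(-delta / T), which shrinks as T cools.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
    if objective(x) < objective(best_x):
        best_x = x                        # track the best state visited
    T *= 0.995                            # geometric cooling schedule
print(best_x, objective(best_x))
```

    The DSSA and ST variants discussed in the abstract modify this skeleton: DSSA discretizes the objective evaluations into states to set the temperatures case-independently, and ST promotes T itself to a random variable.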

  3. Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces.

    PubMed

    Dangi, Siddharth; Orsborn, Amy L; Moorman, Helene G; Carmena, Jose M

    2013-07-01

    Closed-loop decoder adaptation (CLDA) is an emerging paradigm for achieving rapid performance improvements in online brain-machine interface (BMI) operation. Designing an effective CLDA algorithm requires making multiple important decisions, including choosing the timescale of adaptation, selecting which decoder parameters to adapt, crafting the corresponding update rules, and designing CLDA parameters. These design choices, combined with the specific settings of CLDA parameters, will directly affect the algorithm's ability to make decoder parameters converge to values that optimize performance. In this article, we present a general framework for the design and analysis of CLDA algorithms and support our results with experimental data from two monkeys performing a BMI task. First, we analyze and compare existing CLDA algorithms to highlight the importance of four critical design elements: the adaptation timescale, selective parameter adaptation, smooth decoder updates, and intuitive CLDA parameters. Second, we introduce mathematical convergence analysis using measures such as mean-squared error and KL divergence as a useful paradigm for evaluating the convergence properties of a prototype CLDA algorithm before experimental testing. By applying these measures to an existing CLDA algorithm, we demonstrate that our convergence analysis is an effective analytical tool that can ultimately inform and improve the design of CLDA algorithms. PMID:23607558
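    The convergence measures the abstract mentions can be sketched on a synthetic smoothed decoder update of the form w ← ρw + (1−ρ)w_new; the weights, the smoothing factor ρ, and the assumption that each batch fit recovers the optimum are all invented for illustration.

```python
# Hedged sketch: track decoder-weight convergence with mean-squared error,
# plus a scalar-Gaussian KL divergence as a second convergence measure.
# All numbers are synthetic; this is not the paper's decoder.

def mse(w, w_opt):
    return sum((a - b) ** 2 for a, b in zip(w, w_opt)) / len(w)

def kl_gaussian(mu0, mu1, var=1.0):
    """KL(N(mu0, var) || N(mu1, var)) for scalar Gaussians of equal variance."""
    return (mu0 - mu1) ** 2 / (2 * var)

w_opt = [0.8, -0.3, 0.5]        # "true" optimal decoder weights (synthetic)
w = [0.0, 0.0, 0.0]             # initial decoder weights
rho = 0.7                       # smoothing factor of the update
history = []
for step in range(20):
    w_new = w_opt               # pretend each batch fit recovers the optimum
    w = [rho * a + (1 - rho) * b for a, b in zip(w, w_new)]
    history.append(mse(w, w_opt))

# Geometric convergence: each update shrinks the weight error by rho,
# so the MSE shrinks by rho**2 per step.
assert all(later < earlier for earlier, later in zip(history, history[1:]))
print(history[-1], kl_gaussian(w[0], w_opt[0]))
```

    Plotting such a history before an experiment is the kind of analytical check the abstract advocates: a schedule whose MSE or KL curve fails to contract can be redesigned without animal time.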

  4. Acoustic design of rotor blades using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Han, A. Y.; Crossley, W. A.

    1995-01-01

    A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading and blade-vortex interaction noise estimates. The paper presents solutions for several variations in the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced, and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced, at some expense to the power requirements.

  5. Optimal fractional order PID design via Tabu Search based algorithm.

    PubMed

    Ateş, Abdullah; Yeroglu, Celaleddin

    2016-01-01

    This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All FOPID controller parameters are computed from random initial conditions using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method. PMID:26652128
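    A generic Tabu search loop of the kind the abstract applies can be sketched as follows; the five parameters stand for (Kp, Ki, Kd, λ, μ), but the quadratic cost and its target vector are invented stand-ins for a real closed-loop simulation and time-domain performance criterion.

```python
import random

# Hedged Tabu-search sketch: best admissible neighbour is taken each
# iteration (even uphill), recently visited points are tabu, and the best
# point ever seen is kept. Step size and tabu length are illustrative.

TARGET = [2.0, 1.0, 0.5, 0.9, 1.1]   # pretend-optimal FOPID parameters

def cost(p):
    return sum((a - b) ** 2 for a, b in zip(p, TARGET))

random.seed(3)
current = [random.uniform(0, 3) for _ in range(5)]
best = current[:]
tabu = []                             # recently visited points (rounded)
for _ in range(300):
    # Neighbourhood: perturb one coordinate at a time by a fixed step.
    neighbours = []
    for i in range(5):
        for step in (-0.1, 0.1):
            n = current[:]
            n[i] += step
            if tuple(round(v, 3) for v in n) not in tabu:
                neighbours.append(n)
    if not neighbours:
        break                         # whole neighbourhood is tabu
    current = min(neighbours, key=cost)
    tabu.append(tuple(round(v, 3) for v in current))
    if len(tabu) > 50:                # fixed-length tabu memory
        tabu.pop(0)
    if cost(current) < cost(best):
        best = current[:]
print(best, cost(best))
```

    The tabu list is what distinguishes this from plain hill climbing: forbidding recent points forces the search away from a local minimum it has just explored.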

  6. Specification of Selected Performance Monitoring and Commissioning Verification Algorithms for CHP Systems

    SciTech Connect

    Brambley, Michael R.; Katipamula, Srinivas

    2006-10-06

    Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.

  7. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member sizes and layout of a truss are predicted, given the joint locations and loads.

  8. A VLSI design concept for parallel iterative algorithms

    NASA Astrophysics Data System (ADS)

    Sun, C. C.; Götze, J.

    2009-05-01

    Modern VLSI manufacturing technology has continued to shrink rapidly toward the nanoscale. Integration with advanced nano-technology now makes it possible to realize advanced parallel iterative algorithms directly, which was almost impossible 10 years ago. In this paper, we discuss the influence of evolving VLSI technologies on iterative algorithms and present design strategies from an algorithmic and architectural point of view. When implementing an iterative algorithm on a multiprocessor array, there is a trade-off between the performance/complexity of the processors and the load/throughput of the interconnects. This is due to the behavior of iterative algorithms. For example, the parallel implementation of the iterative algorithm (i.e., the processor elements of the multiprocessor array) can be simplified in any way as long as convergence is guaranteed. However, such modification of the algorithm (processors) usually increases the number of required iterations, which means the switching activity of the interconnects also increases. As an example we show that a 25×25 full Jacobi EVD array can be realized in a single FPGA device with the simplified μ-rotation CORDIC architecture.
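    The iterative algorithm behind the EVD array can be sketched in software as a cyclic Jacobi eigenvalue iteration; each plane rotation corresponds to the work of one processing element (realized in hardware as a CORDIC unit). This sketch is not the paper's hardware design.

```python
import math

# Cyclic Jacobi EVD sketch for a symmetric matrix: each rotation zeroes
# one off-diagonal pair; sweeps repeat until the off-diagonals vanish.

def jacobi_evd(A, sweeps=10):
    n = len(A)
    A = [row[:] for row in A]                      # work on a copy
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-12:
                    continue                       # already annihilated
                # Rotation angle that zeroes A[p][q]: tan(2θ) = 2Apq/(Aqq-App)
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):                 # rotate rows p and q
                    A[p][k], A[q][k] = (c * A[p][k] - s * A[q][k],
                                        s * A[p][k] + c * A[q][k])
                for k in range(n):                 # rotate columns p and q
                    A[k][p], A[k][q] = (c * A[k][p] - s * A[k][q],
                                        s * A[k][p] + c * A[k][q])
    return sorted(A[i][i] for i in range(n))       # eigenvalues on the diagonal

print(jacobi_evd([[2.0, 1.0], [1.0, 2.0]]))  # eigenvalues ≈ [1.0, 3.0]
```

    The trade-off the abstract describes shows up here directly: replacing the exact rotation with a cheaper approximate one (e.g. a few CORDIC μ-rotations) simplifies each processor but typically requires more sweeps, i.e. more interconnect traffic.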

  9. A robust Feasible Directions algorithm for design synthesis

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1983-01-01

    A nonlinear optimization algorithm is developed which combines the best features of the Method of Feasible Directions and the Generalized Reduced Gradient Method. This algorithm utilizes the direction-finding sub-problem from the Method of Feasible Directions to find a search direction which is equivalent to that of the Generalized Reduced Gradient Method, but does not require the addition of a large number of slack variables associated with inequality constraints. This method provides a core-efficient algorithm for the solution of optimization problems with a large number of inequality constraints. Further optimization efficiency is derived by introducing the concept of infrequent gradient calculations. In addition, it is found that the sensitivity of the optimum design to changes in the problem parameters can be obtained using this method without the need for second derivatives or Lagrange multipliers. A numerical example is given in order to demonstrate the efficiency of the algorithm and the sensitivity analysis.

  10. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms

    PubMed Central

    Garro, Beatriz A.; Vázquez, Roberto A.

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132

  11. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    PubMed

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
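    The core velocity-and-position update shared by the PSO variants in the abstract can be sketched generically; the quadratic fitness below is a stand-in for a network's MSE over a training set, and the swarm size, inertia, and acceleration coefficients are conventional illustrative values.

```python
import random

# Hedged PSO sketch: each particle is a candidate weight vector; the
# fitness is a surrogate "error" rather than a real network's MSE/CER.

def fitness(w):
    return sum(x * x for x in w)     # surrogate error; minimum at w = 0

random.seed(7)
DIM, SWARM, ITERS = 6, 20, 200
W, C1, C2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]          # each particle's personal best
gbest = min(pos, key=fitness)[:]     # swarm-wide best

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                         + C2 * r2 * (gbest[d] - pos[i][d]))     # social pull
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
        if fitness(pos[i]) < fitness(gbest):
            gbest = pos[i][:]
print(fitness(gbest))
```

    In the papers above, the particle additionally encodes architecture and transfer-function choices alongside the weights, and the fitness becomes one of the eight MSE/CER-based functions they propose.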

  12. Optimal Design of Geodetic Network Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Vajedian, Sanaz; Bagheri, Hosein

    2010-05-01

    A geodetic network is a network measured precisely by terrestrial surveying techniques based on angle and distance measurements; it can monitor the stability of dams and towers and their surrounding land, and the deformation of surfaces. The main goals of an optimal geodetic network design process are finding the proper locations of the control stations (first-order design) and the proper weights of the observations (second-order design) so as to satisfy all the criteria considered for the quality of the network, which is evaluated by its accuracy, reliability (internal and external), sensitivity and cost. The first-order design problem can be dealt with as a numeric optimization problem, in which finding the unknown coordinates of the network stations is a central issue. To find these unknown values, the network's geodetic observations, that is, angle and distance measurements, must be entered into an adjustment method, which requires inverse problem algorithms. Inverse problem algorithms are methods for finding optimal solutions to given problems and include classical and evolutionary computations. The classical approaches are analytical methods and are useful for finding the optimum of a continuous and differentiable function. The least squares (LS) method is one classical technique that derives estimates for stochastic variables and their distribution parameters from observed samples. The evolutionary algorithms are adaptive optimization and search procedures that find solutions to problems by mechanisms inspired by natural evolution. These methods generate new points in the search space by applying operators to current points, statistically moving toward more optimal places in the search space. The genetic algorithm (GA) is the evolutionary algorithm considered in this paper. This algorithm starts with the definition of an initial population, and then the operators of selection, replication and variation are applied…

  13. On constructing optimistic simulation algorithms for the discrete event system specification

    SciTech Connect

    Nutaro, James J

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.

  14. Design of synthetic biological logic circuits based on evolutionary algorithm.

    PubMed

    Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei

    2013-08-01

    The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics in the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with a cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates, such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits through topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico computer-based modelling technology has been verified, showing its great advantages for this purpose. PMID:23919952

  15. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining methods are not clearly effective on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and the weights are designed to optimize the TF-IDF algorithm's output values; the highest-scoring terms are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves a satisfactory accuracy rate and recall rate. PMID:26448738
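    The TF-IDF scoring step can be sketched on toy data (the "course documents" below are invented; real use would follow the classification, segmentation and POS-tagging steps the abstract lists):

```python
import math

# Hedged sketch of TF-IDF term scoring: score candidate terms per document
# and keep the top scorers as knowledge points. Documents are toy data.

docs = [
    "pointer array pointer loop".split(),
    "loop condition loop break".split(),
    "array index bounds".split(),
]

def tfidf(term, doc, docs):
    tf = doc.count(term) / len(doc)          # term frequency in this document
    df = sum(term in d for d in docs)        # number of documents containing it
    idf = math.log(len(docs) / df)           # rarer terms score higher
    return tf * idf

def knowledge_points(doc, docs, k=2):
    scores = {t: tfidf(t, doc, docs) for t in set(doc)}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(knowledge_points(docs[0], docs, k=2))  # "pointer" ranks first
```

    The weighting scheme in the paper further adjusts these raw TF-IDF values using VSM similarity between documents; that adjustment is omitted here.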

  16. OSPREY: Protein Design with Ensembles, Flexibility, and Provable Algorithms

    PubMed Central

    Gainza, Pablo; Roberts, Kyle E.; Georgiev, Ivelin; Lilien, Ryan H.; Keedy, Daniel A.; Chen, Cheng-Yu; Reza, Faisal; Anderson, Amy C.; Richardson, David C.; Richardson, Jane S.; Donald, Bruce R.

    2013-01-01

    Summary We have developed a suite of protein redesign algorithms that improves realistic in silico modeling of proteins. These algorithms are based on three characteristics that make them unique: (1) improved flexibility of the protein backbone, protein side chains, and ligand to accurately capture the conformational changes that are induced by mutations to the protein sequence; (2) modeling of proteins and ligands as ensembles of low-energy structures to better approximate binding affinity; and (3) a globally-optimal protein design search, guaranteeing that the computational predictions are optimal with respect to the input model. Here, we illustrate the importance of these three characteristics. We then describe OSPREY, a protein redesign suite that implements our protein design algorithms. OSPREY has been used prospectively, with experimental validation, in several biomedically-relevant settings. We show in detail how OSPREY has been used to predict resistance mutations and explain why improved flexibility, ensembles, and provability are essential for this application. PMID:23422427

  17. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution in the primal and dual spaces of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  18. Distributed genetic algorithms for the floorplan design problem

    NASA Technical Reports Server (NTRS)

    Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.

    1991-01-01

    Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to the traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate the advantages of this method over other methods, such as simulated annealing. The method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.

  19. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs.

    PubMed

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining methods are not clearly effective on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and the weights are designed to optimize the TF-IDF algorithm's output values; the highest-scoring terms are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach achieves a satisfactory accuracy rate and recall rate. PMID:26448738

  20. Sampling design for classifying contaminant level using annealing search algorithms

    NASA Astrophysics Data System (ADS)

    Christakos, George; Killam, Bart R.

    1993-12-01

    A stochastic method for sampling spatially distributed contaminant level is presented. The purpose of sampling is to partition the contaminated region into zones of high and low pollutant concentration levels. In particular, given an initial set of observations of a contaminant within a site, it is desired to find a set of additional sampling locations in a way that takes into consideration the spatial variability characteristics of the site and optimizes certain objective functions emerging from the physical, regulatory and monetary considerations of the specific site cleanup process. Since the interest is in classifying the domain into zones above and below a pollutant threshold level, a natural criterion is the cost of misclassification. The resulting objective function is the expected value of a spatial loss function associated with sampling. Stochastic expectation involves the joint probability distribution of the pollutant level and its estimate, where the latter is calculated by means of spatial estimation techniques. Actual computation requires the discretization of the contaminated domain. As a consequence, any reasonably sized problem results in combinatorics precluding an exhaustive search. The use of an annealing algorithm, although suboptimal, can find a good set of future sampling locations quickly and efficiently. In order to obtain insight about the parameters and the computational requirements of the method, an example is discussed in detail. The implementation of spatial sampling design in practice will provide the model inputs necessary for waste site remediation, groundwater management, and environmental decision making.
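    The annealing search over candidate sampling locations can be sketched as follows. The objective here is a toy stand-in for the paper's expected spatial misclassification loss, and the move set (swap one chosen location for an unchosen one) is one common choice, not necessarily the authors':

```python
import math
import random

def anneal_sampling(candidates, k, cost, steps=2000, t0=1.0, cooling=0.995):
    """Simulated annealing over k-subsets of candidate sampling locations.
    `cost` maps a list of locations to the expected misclassification
    loss (a stand-in for the paper's spatial loss function)."""
    random.seed(0)
    current = random.sample(candidates, k)
    best = list(current)
    t = t0
    for _ in range(steps):
        # propose: swap one chosen location for an unchosen one
        nxt = list(current)
        i = random.randrange(k)
        nxt[i] = random.choice([c for c in candidates if c not in current])
        d = cost(nxt) - cost(current)
        if d < 0 or random.random() < math.exp(-d / t):
            current = nxt
            if cost(current) < cost(best):
                best = list(current)
        t *= cooling  # geometric cooling schedule
    return best

# toy objective: prefer sampling locations far from an existing well at x = 0
candidates = list(range(10))
cost = lambda s: -sum(abs(x) for x in s)
best = anneal_sampling(candidates, 3, cost)
```

    As the abstract notes, this is suboptimal but avoids the combinatorial explosion of an exhaustive search over all k-subsets.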

  1. USING GENETIC ALGORITHMS TO DESIGN ENVIRONMENTALLY FRIENDLY PROCESSES

    EPA Science Inventory

    Genetic algorithm calculations are applied to the design of chemical processes to achieve improvements in environmental and economic performance. By finding the set of Pareto (i.e., non-dominated) solutions one can see how different objectives, such as environmental and economic ...

  2. Designing an Algorithm Animation System To Support Instructional Tasks.

    ERIC Educational Resources Information Center

    Hamilton-Taylor, Ashley George; Kraemer, Eileen

    2002-01-01

    The authors are conducting a study of instructors teaching data structure and algorithm topics, with a focus on the use of diagrams and tracing. The results of this study are being used to inform the design of the Support Kit for Animation (SKA). This article describes a preliminary version of SKA, and possible usage scenarios. (Author/AEF)

  3. Optical design with the aid of a genetic algorithm.

    PubMed

    van Leijenhorst, D C; Lucasius, C B; Thijssen, J M

    1996-01-01

    Natural evolution is widely accepted as being the process underlying the design and optimization of the sensory functions of biological organisms. Using a genetic algorithm, this process is extended to the automatic optimization and design of optical systems, e.g. as used in astronomical telescopes. The results of this feasibility study indicate that various types of aberrations can be corrected quickly and simultaneously, even on small computers. PMID:8924643

  4. Optimal design of plasmonic waveguide using multiobjective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jung, Jaehoon

    2016-01-01

    An approach for multiobjective optimal design of a plasmonic waveguide is presented. We use a multiobjective extension of a genetic algorithm to find the Pareto-optimal geometries. The design variables are the geometrical parameters of the waveguide. The objective functions are chosen as the figure of merit defined as the ratio between the propagation distance and effective mode size and the normalized coupling length between adjacent waveguides at the telecom wavelength of 1550 nm.

  5. Space shuttle configuration accounting functional design specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes and to maintain baseline documentation of the space shuttle program levels II and III.

  6. A new collage steganographic algorithm using cartoon design

    NASA Astrophysics Data System (ADS)

    Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip

    2014-02-01

    Existing collage steganographic methods suffer from low payload of embedding messages. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm shows significantly higher capacity of embedding messages compared with existing collage steganographic methods.
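    The basic LSB embedding step that the algorithm builds on can be sketched over a flat list of pixel intensities. The paper additionally permutes each cartoon object and composes the collage cover image, which this sketch omits:

```python
def embed_lsb(pixels, message_bits):
    """Embed message bits into the least significant bits of pixel
    values. Each embedded pixel changes by at most 1, which is why
    LSB embedding is visually imperceptible."""
    assert len(message_bits) <= len(pixels)
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit  # clear LSB, then set it to the bit
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits message bits from the stego pixels."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [137, 200, 55, 18, 91, 76]   # toy cover-image intensities
bits = [1, 0, 1, 1, 0]               # toy message
stego = embed_lsb(cover, bits)
```

    Embedding into separately permuted cartoon objects, as the paper does, spreads the message across the collage so that no single object reveals it.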

  7. A task-specific approach to computational imaging system design

    NASA Astrophysics Data System (ADS)

    Ashok, Amit

    The traditional approach to imaging system design places the sole burden of image formation on optical components. In contrast, a computational imaging system relies on a combination of optics and post-processing to produce the final image and/or output measurement. Therefore, the joint optimization (JO) of the optical and post-processing degrees of freedom plays a critical role in the design of computational imaging systems. The JO framework also allows us to incorporate task-specific performance measures to optimize an imaging system for a specific task. In this dissertation, we consider the design of computational imaging systems within a JO framework for two separate tasks: object reconstruction and iris recognition. The goal of these design studies is to optimize the imaging system to overcome the performance degradations introduced by under-sampled image measurements. Within the JO framework, we engineer the optical point spread function (PSF) of the imager, representing the optical degrees of freedom, in conjunction with the post-processing algorithm parameters to maximize task performance. For the object reconstruction task, the optimized imaging system achieves a 50% improvement in resolution and nearly 20% lower reconstruction root-mean-square error (RMSE) compared with the un-optimized imaging system. For the iris-recognition task, the optimized imaging system achieves a 33% improvement in false rejection ratio (FRR) at a fixed false alarm ratio (FAR) relative to the conventional imaging system. The effect of performance measures such as resolution, RMSE, FRR, and FAR on the optimal design highlights the crucial role of task-specific design metrics in the JO framework. We introduce a fundamental measure of task-specific performance known as task-specific information (TSI), an information-theoretic measure that quantifies the information content of an image measurement relevant to a specific task. A variety of source-models are derived to illustrate

  8. Penetrator reliability investigation and design exploration : from conventional design processes to innovative uncertainty-capturing algorithms.

    SciTech Connect

    Martinez-Canales, Monica L.; Heaphy, Robert; Gramacy, Robert B.; Taddy, Matt; Chiesa, Michael L.; Thomas, Stephen W.; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Trucano, Timothy Guy; Gray, Genetha Anne

    2006-11-01

    This project focused on research and algorithmic development in optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking into account uncertainty, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates. We developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive with or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models. We developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB® and initial DAKOTA implementations of our algorithms.

  9. Design of transonic airfoils and wings using a hybrid design algorithm

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Smith, Leigh A.

    1987-01-01

    A method has been developed for designing airfoils and wings at transonic speeds. It utilizes a hybrid design algorithm in an iterative predictor/corrector approach, alternating between an analysis code and a design module. This method has been successfully applied to a variety of airfoil and wing design problems, including both transport and highly swept fighter wing configurations. An efficient approach to viscous airfoil design and the effect of including static aeroelastic deflections in the wing design process are also illustrated.

  10. Design of SPARC V8 superscalar pipeline applied Tomasulo's algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Yu, Lixin; Feng, Yunkai

    2014-04-01

    A superscalar pipeline applying Tomasulo's algorithm is presented in this paper. The design begins with a dual-issue superscalar processor based on LEON2. Tomasulo's algorithm is adopted to implement out-of-order execution. Instructions are separated into three different classes and executed by three different function units so as to reduce area and increase execution speed. Results are written back to registers in program order to ensure functional correctness. The mechanisms of the reservation stations, common data bus, and reorder buffer are presented in detail. The structure can issue and execute at most three instructions at a time. Branch prediction can also be realized through the reorder buffer. Performance of the superscalar pipeline applying Tomasulo's algorithm is improved by 41.31% compared to the single-issue pipeline.

  11. Full design of fuzzy controllers using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Homaifar, Abdollah; Mccormick, ED

    1992-01-01

    This paper examines the applicability of genetic algorithms (GA) in the complete design of fuzzy logic controllers. While GA has been used before in the development of rule sets or high performance membership functions, the interdependence between these two components dictates that they should be designed together simultaneously. GA is fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. We show the application of this new method to the development of a cart controller.

  12. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
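    The selection criterion itself (choose the experiment whose predicted outcomes are most uncertain across the currently probable models) can be sketched directly. The nested-sampling search over experiment parameters is omitted here, and the threshold-model example is illustrative, not from the paper:

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    """Entropy (in bits) of the empirical distribution of outcomes."""
    n = len(outcomes)
    counts = Counter(outcomes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def most_informative(experiments, models, predict):
    """Pick the experiment whose predicted outcome distribution, across
    the probable models, has maximal Shannon entropy. (Nested entropy
    sampling searches this space with a rising entropy threshold rather
    than the exhaustive enumeration shown here.)"""
    return max(experiments,
               key=lambda e: shannon_entropy([predict(m, e) for m in models]))

# toy problem: models are unknown thresholds; an experiment at x
# observes whether x exceeds the threshold
models = [0.2, 0.4, 0.6, 0.8]
predict = lambda m, x: x > m
best_x = most_informative([0.1, 0.5, 0.9], models, predict)
```

    Here x = 0.5 splits the models evenly (two predict True, two predict False), so its outcome is the most uncertain and hence the most informative to measure.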

  13. A disturbance based control/structure design algorithm

    NASA Technical Reports Server (NTRS)

    Mclaren, Mark D.; Slater, Gary L.

    1989-01-01

    Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.

  14. Designing a competent simple genetic algorithm for search and optimization

    NASA Astrophysics Data System (ADS)

    Reed, Patrick; Minsker, Barbara; Goldberg, David E.

    2000-12-01

    Simple genetic algorithms have been used to solve many water resources problems, but specifying the parameters that control how adaptive search is performed can be a difficult and time-consuming trial-and-error process. However, theoretical relationships for population sizing and timescale analysis have been developed that can provide pragmatic tools for vastly limiting the number of parameter combinations that must be considered. The purpose of this technical note is to summarize these relationships for the water resources community and to illustrate their practical utility in a long-term groundwater monitoring design application. These relationships, which model the effects of the primary operators of a simple genetic algorithm (selection, recombination, and mutation), provide a highly efficient method for ensuring convergence to near-optimal or optimal solutions. Application of the method to a monitoring design test case identified robust parameter values using only three trial runs.

  15. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. For matrices with entries from a field, Gaussian elimination plays a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring, for which Gaussian elimination is not defined; triangularization is instead accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.

  16. Orthogonalizing EM: A design-based least squares algorithm

    PubMed Central

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

    2016-01-01

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558
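    For ordinary least squares, the OEM iteration reduces to a simple fixed-point update once the design is conceptually orthogonalized. The sketch below follows that reduction; taking d as the largest eigenvalue of X'X is one valid choice of the required upper bound, and the data are synthetic:

```python
import numpy as np

def oem_ols(X, y, iters=500):
    """Orthogonalizing-EM iteration for ordinary least squares.
    Augmenting X with rows that make the design orthogonal turns each
    EM step into the closed-form update below, where d is any upper
    bound on the largest eigenvalue of X.T @ X."""
    XtX = X.T @ X
    Xty = X.T @ y
    d = np.linalg.eigvalsh(XtX).max()  # largest eigenvalue works as d
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        # complete-data least squares step under the orthogonal augmentation
        beta = (Xty + (d * np.eye(len(beta)) - XtX) @ beta) / d
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)
beta = oem_ols(X, y)
```

    Each iteration costs only matrix-vector products, which is the source of the scalability the abstract reports for large n; for singular designs the iterates converge to the Moore-Penrose solution.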

  17. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design- optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.

  18. Optimization of experimental design in fMRI: a general framework using a genetic algorithm.

    PubMed

    Wager, Tor D; Nichols, Thomas E

    2003-02-01

    This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously. PMID:12595184

  19. Optimal brushless DC motor design using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.

    2010-11-01

    This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.
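    A minimal real-coded genetic algorithm of the kind used in such design studies can be sketched as follows. The two-variable quadratic objective is a stand-in for the paper's combined loss/volume/cost function, and the operator choices (tournament selection, uniform crossover, Gaussian mutation, elitism) are illustrative, not the authors':

```python
import random

def genetic_minimize(objective, bounds, pop=30, gens=60, mut=0.1):
    """Minimal real-coded GA. `bounds` encodes the upper and lower
    limits of each design variable as (lo, hi) pairs; constraints
    beyond simple bounds would enter as penalties in `objective`."""
    random.seed(1)
    rand_ind = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=objective)
        nxt = population[:2]  # elitism: carry the two best forward
        while len(nxt) < pop:
            # tournament selection of two parents
            p1, p2 = (min(random.sample(population, 3), key=objective)
                      for _ in range(2))
            # uniform crossover, then bounded Gaussian mutation
            child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [min(hi, max(lo, g + random.gauss(0, mut * (hi - lo))))
                     for g, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        population = nxt
    return min(population, key=objective)

# toy stand-in for a loss/volume/cost objective over two geometry variables
best = genetic_minimize(lambda g: (g[0] - 2) ** 2 + (g[1] + 1) ** 2,
                        [(0, 5), (-3, 3)])
```

    In a motor-design setting, each gene would be a geometric parameter and the objective would evaluate the analytical loss, volume, and cost models at that geometry.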

  20. Thrust vector control algorithm design for the Cassini spacecraft

    NASA Technical Reports Server (NTRS)

    Enright, Paul J.

    1993-01-01

    This paper describes a preliminary design of the thrust vector control algorithm for the interplanetary spacecraft, Cassini. Topics of discussion include flight software architecture, modeling of sensors, actuators, and vehicle dynamics, and controller design and analysis via classical methods. Special attention is paid to potential interactions with structural flexibilities and propellant dynamics. Controller performance is evaluated in a simulation environment built around a multi-body dynamics model, which contains nonlinear models of the relevant hardware and preliminary versions of supporting attitude determination and control functions.

  1. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  3. Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard

    2005-01-01

    Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.

  4. An efficient parallel algorithm for accelerating computational protein design

    PubMed Central

    Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang

    2014-01-01

    Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we make some efforts to address the memory exceeding problem in A* search. As a result, our enhancements achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm can be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991

  5. Compiler writing system detail design specification. Volume 1: Language specification

    NASA Technical Reports Server (NTRS)

    Arthur, W. J.

    1974-01-01

    Construction within the Meta language of both language and target-machine specifications is reported. The elements of the function language, including its meaning and syntax, are presented, and the structure of the target language, which represents the target-dependent object-text representation of application programs, is described.

  6. Algorithm development for the control design of flexible structures

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1983-01-01

    The critical problems associated with the control of highly damped flexible structures are outlined. The practical problems include: high performance requirements; assembly in space; configuration changes; on-line controller software design; and lack of test data. Underlying all of these problems is the central problem of modeling errors. To justify the expense of a space structure, the performance requirements will necessarily be very severe. On the other hand, the absence of economical tests precludes the availability of reliable data before flight. A design algorithm is offered which: (1) provides damping for a larger number of modes than the optimal attitude controller controls; (2) coordinates the rate feedback design with the attitude control design by use of a similar cost function; and (3) provides model reduction and controller reduction decisions which are systematically connected to the mathematical statement of the control objectives and the disturbance models.

  7. Conceptual space Systems Design using Meta-Heuristic Algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Byoungsoo; Morgenthaler, George W.

    2002-01-01

    Space systems are now designed under the design-to-cost philosophy of "faster, better, cheaper" (fast-track, innovative, lower-cost, small-sat). The objective of space systems design has shifted from maximizing space mission performance under weak time and cost constraints (almost regardless of cost) but with technology-risk constraints, to maximizing mission goals under cost and schedule constraints with prudent technology-risk constraints, or maximizing space mission performance per unit cost. Within this mindset, Conceptual Space Systems Design models were formulated as constrained combinatorial optimization problems, with estimated Total Mission Cost (TMC) as the objective function to be minimized and subsystem trade-offs as decision variables in the design space, using parametric estimating relationships (PERs) and cost estimating relationships (CERs). Here a constrained combinatorial optimized "solution" is defined as the most favorable alternative for the system on the basis of the decision-making design criteria. Two non-traditional meta-heuristic optimization algorithms, Genetic Algorithms (GAs) and Simulated Annealing (SA), were used to solve the formulated combinatorial optimization model for Conceptual Space Systems Design and were demonstrated on SAMPEX. The model simulation statistics show that the estimated TMCs obtained by GAs and SA are statistically equal and consistent. These statistics also show that the Conceptual Space Systems Design model can be used as a guidance tool to evaluate and validate space research proposals. The non-traditional meta-heuristic constrained optimization techniques, GAs and SA, can also be applied to all manner of space, civil, or commercial design problems.
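
    As an illustration of the simulated-annealing half of the approach, the sketch below anneals a binary vector of subsystem choices against a toy mismatch cost. The 12-bit design vector, the cost model, and the cooling schedule are all assumptions for illustration, not the record's TMC model.

```python
import math
import random

def anneal(cost, n_bits=12, t0=2.0, cooling=0.95, steps=400):
    """Single-flip simulated annealing over a binary design vector."""
    random.seed(3)
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best, t = x[:], t0
    for _ in range(steps):
        y = x[:]
        y[random.randrange(n_bits)] ^= 1            # flip one subsystem choice
        d = cost(y) - cost(x)
        if d <= 0 or random.random() < math.exp(-d / t):
            x = y                                    # Metropolis acceptance
        if cost(x) < cost(best):
            best = x[:]
        t *= cooling
    return best

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]       # hypothetical cheapest design
best = anneal(lambda v: sum(a != b for a, b in zip(v, target)))
```

    Early on, the high temperature lets worsening flips through so the search can escape poor configurations; as the temperature cools it degenerates into hill climbing around the best design found.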

  8. Neural-network-biased genetic algorithms for materials design

    NASA Astrophysics Data System (ADS)

    Patra, Tarak; Meenakshisundaram, Venkatesh; Simmons, David

    Machine learning tools have been progressively adopted by the materials science community to accelerate design of materials with targeted properties. However, in the search for new materials exhibiting properties and performance beyond that previously achieved, machine learning approaches are frequently limited by two major shortcomings. First, they are intrinsically interpolative. They are therefore better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require the availability of large datasets, which in some fields are not available and would be prohibitively expensive to produce. Here we describe a new strategy for combining genetic algorithms, neural networks and other machine learning tools, and molecular simulation to discover materials with extremal properties in the absence of pre-existing data. Predictions from progressively constructed machine learning tools are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct molecular dynamics simulation. We survey several initial materials design problems we have addressed with this framework and compare its performance to that of standard genetic algorithm approaches. We acknowledge the W. M. Keck Foundation for support of this work.
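
    A minimal sketch of the surrogate-biasing idea described above, under strong simplifying assumptions: a one-dimensional design variable, a quadratic stand-in for the expensive simulation, and a 1-nearest-neighbour surrogate fitted progressively from evaluated designs. All names and models here are illustrative, not the authors' framework.

```python
import random

def expensive_fitness(x):
    """Stand-in for a costly molecular dynamics evaluation."""
    return -(x - 3.0) ** 2

def surrogate(archive, x):
    """Cheap 1-nearest-neighbour fitness prediction from evaluated designs."""
    return min(archive, key=lambda a: abs(a[0] - x))[1]

random.seed(1)
archive = [(x, expensive_fitness(x)) for x in (-5.0, 0.0, 5.0)]
for _ in range(40):
    parent = max(archive, key=lambda a: a[1])[0]
    children = [parent + random.gauss(0, 0.5) for _ in range(4)]
    # Bias: only the surrogate's favourite child gets the expensive evaluation.
    child = max(children, key=lambda c: surrogate(archive, c))
    archive.append((child, expensive_fitness(child)))

best_x, best_f = max(archive, key=lambda a: a[1])
```

    The surrogate is never trusted as the fitness itself; it only decides which offspring are worth an expensive evaluation, which is the biasing role the record describes.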

  9. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as "acceptable" or "suspect". Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  10. Design principles and algorithms for automated air traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  11. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

    This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.

  12. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 6 2013-10-01 2013-10-01 false Separator: Design specification. 162.050-21 Section 162... MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Pollution Prevention Equipment § 162.050-21 Separator: Design specification. (a) A separator must be designed to operate in each plane that forms...

  13. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 6 2012-10-01 2012-10-01 false Separator: Design specification. 162.050-21 Section 162... MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Pollution Prevention Equipment § 162.050-21 Separator: Design specification. (a) A separator must be designed to operate in each plane that forms...

  14. Algorithm To Design Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, Charles C.

    1988-01-01

    Way found to exploit Massey-Omura multiplication algorithm. Generalized algorithm locates normal basis in Galois field GF(2^m) and enables development of another algorithm to construct product function.
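
    For context, the sketch below multiplies in GF(2^8) using a plain polynomial basis with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1. This is not the Massey-Omura normal-basis multiplier itself; it only illustrates the field arithmetic such multipliers implement, and the choice of field and polynomial is an assumption for the example.

```python
def gf_mul(a, b, poly=0x11B, m=8):
    """Carry-less (peasant) multiply of a and b in GF(2^m), reduced mod poly."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # add (XOR) the current shifted multiplicand
        b >>= 1
        a <<= 1
        if a & (1 << m):       # degree reached m: reduce by the field polynomial
            a ^= poly
    return result
```

    With the AES polynomial, known identities hold, e.g. x * x^7 = x^4 + x^3 + x + 1 (0x1B), and 0x53 and 0xCA are multiplicative inverses.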

  15. Bias and design in software specifications

    NASA Technical Reports Server (NTRS)

    Straub, Pablo A.; Zelkowitz, Marvin V.

    1990-01-01

    Implementation bias in a specification is an arbitrary constraint in the solution space. Presented here is a model of bias in software specifications. Bias is defined in terms of the specification process and a classification of the attributes of the software product. Our definition of bias provides insight into both the origin and the consequences of bias. It also shows that bias is relative and essentially unavoidable. Finally, we describe current work on defining a measure of bias, formalizing our model, and relating bias to software defects.

  16. Evolutionary Design of Rule Changing Artificial Society Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Yun; Kanoh, Hitoshi

    Socioeconomic phenomena, cultural progress, and political organization have recently been studied by creating artificial societies consisting of simulated agents. In this paper we propose an efficient method, based on genetic algorithms (GAs), for designing the action rules of agents that will constitute an artificial society meeting a specified demand. In the proposed method, each chromosome in the GA population represents a candidate set of action rules and the number of rule iterations. While a conventional method applies distinct rules in order of precedence, the present method applies a set of rules repeatedly for a certain period, aiming at both steady evolution of the agent population and sustained agent activity. Experimental results using the artificial society showed that the present method can generate an artificial society that meets a given demand with high probability.

  17. Optimal Design of RF Energy Harvesting Device Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Mori, T.; Sato, Y.; Adriano, R.; Igarashi, H.

    2015-11-01

    This paper presents the optimal design of an RF energy harvesting device using a genetic algorithm (GA). In the present RF harvester, a planar spiral antenna (PSA) is loaded with matching and rectifying circuits. In the first stage of the optimal design, the shape parameters of the PSA are optimized using the GA. Then, the equivalent circuit of the optimized PSA is derived for optimization of the circuits. Finally, the parameters of the RF energy harvesting circuit are optimized to maximize the output power using the GA. It is shown that the present optimization increases the output power by a factor of five. The manufactured energy harvester starts working when the input electric field is greater than 0.5 V/m.

  18. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 6 2014-10-01 2014-10-01 false Separator: Design specification. 162.050-21 Section 162... Separator: Design specification. (a) A separator must be designed to operate in each plane that forms an.... (c) Each separator component that is a moving part must be designed so that its movement...

  19. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 6 2011-10-01 2011-10-01 false Separator: Design specification. 162.050-21 Section 162... Separator: Design specification. (a) A separator must be designed to operate in each plane that forms an.... (c) Each separator component that is a moving part must be designed so that its movement...

  20. Computational Tools and Algorithms for Designing Customized Synthetic Genes

    PubMed Central

    Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris

    2014-01-01

    Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
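
    The simplest of the "first generation" singular objectives mentioned above, codon-usage optimization, can be sketched as picking each amino acid's most frequent codon. The tiny usage table below is an illustrative assumption, not a real organism's codon table.

```python
USAGE = {  # codon: (amino acid, relative frequency) - toy, made-up values
    "AAA": ("K", 0.74), "AAG": ("K", 0.26),
    "GAA": ("E", 0.68), "GAG": ("E", 0.32),
    "ATG": ("M", 1.00),
}

def optimize(protein):
    """Return a coding sequence using the highest-frequency codon per residue."""
    best = {}
    for codon, (aa, freq) in USAGE.items():
        if aa not in best or freq > best[aa][1]:
            best[aa] = (codon, freq)
    return "".join(best[aa][0] for aa in protein)

seq = optimize("MKE")   # "ATGAAAGAA"
```

    Multi-objective designs, as the review notes, cannot be solved by such greedy per-residue choices and instead rely on heuristics over the combinatorial sequence space.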

  2. As-built design specification for MISMAP

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Cheng, D. E.; Tompkins, M. A. (Principal Investigator)

    1981-01-01

    The MISMAP program, which is part of the CLASFYT package, is described. The program is designed to compare classification values with ground truth values for a segment and produce a comparison map and summary table.

  3. Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2015-06-01

    Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to those of conventional von Neumann machines, and their shared properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n), where n is the number of neurons; n is also evolved. The genome specifies not only the network topology but all its parameters as well. Experiments show the algorithm producing SNNs that exhibit robust spike-bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that could use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design": extra non-functional neurons were included that, while inefficient, did not hamper proper functioning.

  4. IMCS reflight certification requirements and design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The requirements for reflight certification are established. Software requirements encompass the software programs that are resident in the PCC, DEP, PDSS, EC, or any related GSE. A design approach for the reflight software packages is recommended. These designs will be of sufficient detail to permit the implementation of reflight software. The PDSS/IMC Reflight Certification system provides the tools and mechanisms for the user to perform the reflight certification test procedures, test data capture, test data display, and test data analysis. The system as defined will be structured to permit maximum automation of reflight certification procedures and test data analysis.

  5. Genetic algorithm to optimize the design of main combustor and gas generator in liquid rocket engines

    NASA Astrophysics Data System (ADS)

    Son, Min; Ko, Sangho; Koo, Jaye

    2014-06-01

    A genetic algorithm was used to develop optimal design methods for the regenerative cooled combustor and fuel-rich gas generator of a liquid rocket engine. For the combustor design, a chemical equilibrium analysis was applied, and the profile was calculated using Rao's method. One-dimensional heat transfer was assumed along the profile, and cooling channels were designed. For the gas-generator design, non-equilibrium properties were derived from a counterflow analysis, and a vaporization model for the fuel droplet was adopted to calculate residence time. Finally, a genetic algorithm was adopted to optimize the designs. The combustor and gas generator were optimally designed for 30-tonf, 75-tonf, and 150-tonf engines. The optimized combustors demonstrated superior design characteristics when compared with previous non-optimized results. Wall temperatures at the nozzle throat were optimized to satisfy the requirement of 800 K, and specific impulses were maximized. In addition, the target turbine power and a burned-gas temperature of 1000 K were obtained from the optimized gas-generator design.

  6. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using the Bézier curve and a B-spline. Numerical simulation based on a Reynolds averaged Navier-Stokes (RANS) turbulence model is done in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, while initial samples are selected according to an orthogonal array. Then global Pareto-optimal solutions are obtained and analysed. The results show that unexpected flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.
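
    The core of the adopted NSGA-II optimizer is non-dominated sorting of candidate designs. The sketch below ranks toy two-objective points (both minimized) into successive Pareto fronts; the values are illustrative, not pump data, and this naive O(n^3) version stands in for NSGA-II's faster bookkeeping.

```python
def nondominated_sort(points):
    """Rank minimize-all objective tuples into Pareto fronts (rank 0 first)."""
    dominates = lambda p, q: all(a <= b for a, b in zip(p, q)) and p != q
    remaining = list(points)
    fronts = []
    while remaining:
        # A point is in the current front if nothing remaining dominates it.
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

fronts = nondominated_sort([(1, 5), (2, 2), (5, 1), (3, 4), (4, 4)])
```

    NSGA-II selects parents front by front, breaking ties within a front by crowding distance to keep the approximated Pareto set spread out.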

  7. Biocatalyst design for stability and specificity

    SciTech Connect

    Himmel, M.E.; Georgiou, G.

    1991-01-01

    This volume has been developed from a symposium sponsored by the Division of Biochemical Technology of the American Chemical Society at the Fourth Chemical Congress of North America (202nd National Meeting of the American Chemical Society), held in New York, New York, August 25-30, 1991. Papers included here relate to the development of biocatalysts, with an emphasis on the stability and specificity of the catalysts. Major topics of these papers include enzymes, biotechnology, protein engineering, and protein folding.

  8. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of optimizers.

  9. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen: at the four corners and in the middle of the top and bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. A simulator is furnished with one camera, which is used to obtain the image of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera based on a pinhole model, four equations can be found using the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the image of the LEDs is taken as the virtual shooting point. The perspective transformation matrix is applied to the coordinates of the center point, giving the virtual shooting point's coordinates in the world coordinate system. When a game player shoots a target about two meters away, using the method discussed in this paper, the calculated coordinate error is less than 10 mm. We can obtain 65 coordinate results per second, which meets the requirement of a real-time system. This shows the algorithm is reliable and effective.
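
    The LED-localization step (inter-frame differencing, then locating bright spots) can be sketched on toy grayscale frames. The 5x5 "images", the threshold, and the use of a simple area centroid below are illustrative assumptions, not the paper's calibrated pipeline.

```python
def frame_difference(odd, even, thresh=50):
    """Absolute difference of two grayscale frames, binarized at thresh."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(odd, even)]

def centroid(mask):
    """Area centroid (row, col) of all lit pixels in a binary mask."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

# Toy frames: a static 5x5 background, with one 2x2 LED blob lit in the odd frame.
even = [[10] * 5 for _ in range(5)]
odd = [row[:] for row in even]
for r, c in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    odd[r][c] = 200
mask = frame_difference(odd, even)
spot = centroid(mask)   # sub-pixel blob center
```

    Differencing alternate frames cancels the (constant) strong background light, which is why the blinking LED scheme works under bright illumination.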

  10. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of optimizers.

  11. Controller design based on μ analysis and PSO algorithm.

    PubMed

    Lari, Ali; Khosravi, Alireza; Rajabi, Farshad

    2014-03-01

    In this paper an evolutionary algorithm is employed to address the controller design problem based on μ analysis. Conventional solutions to the μ synthesis problem, such as the D-K iteration method, often lead to high-order, impractical controllers. In the proposed approach, a constrained optimization problem based on μ analysis is defined and an evolutionary approach is employed to solve it. The goal is to achieve a more practical controller of lower order. A benchmark two-tank system is considered to evaluate the performance of the proposed approach. Simulation results show that the proposed controller performs more effectively than the high-order H(∞) controller and yields responses close to those of the high-order D-K iteration controller, the common solution to the μ synthesis problem. PMID:24314832
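
    For reference, a bare-bones particle swarm optimization (PSO) of the kind employed for such controller-parameter searches can be sketched as follows. The toy quadratic cost stands in for the μ-analysis objective, and the inertia/acceleration coefficients are common textbook choices, not the paper's settings.

```python
import random

def pso(cost, dim=2, n=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost over R^dim with a global-best particle swarm."""
    random.seed(2)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = min(pbest, key=cost)[:]             # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy stand-in objective with minimum at (1, -2).
best = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)
```

    In the paper's setting the decision variables would be controller parameters and the cost would come from the μ-analysis-based constrained problem; PSO needs only cost evaluations, not gradients, which suits such objectives.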

  12. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 6 2012-10-01 2012-10-01 false Cargo monitor: Design specification. 162.050-25 Section..., AND MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Pollution Prevention Equipment § 162.050-25 Cargo monitor: Design specification. (a) This section contains requirements that apply to...

  13. 46 CFR 162.050-33 - Bilge alarm: Design specification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 6 2012-10-01 2012-10-01 false Bilge alarm: Design specification. 162.050-33 Section..., AND MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Pollution Prevention Equipment § 162.050-33 Bilge alarm: Design specification. (a) This section contains requirements that apply to...

  14. 46 CFR 162.050-33 - Bilge alarm: Design specification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 6 2013-10-01 2013-10-01 false Bilge alarm: Design specification. 162.050-33 Section..., AND MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Pollution Prevention Equipment § 162.050-33 Bilge alarm: Design specification. (a) This section contains requirements that apply to...

  15. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 6 2013-10-01 2013-10-01 false Cargo monitor: Design specification. 162.050-25 Section..., AND MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Pollution Prevention Equipment § 162.050-25 Cargo monitor: Design specification. (a) This section contains requirements that apply to...

  16. The Hierarchical Specification and Mechanical Verification of the SIFT Design

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The formal specification and proof methodology employed to demonstrate that the SIFT computer system meets its requirements are described. The hierarchy of design specifications is shown, from very abstract descriptions of system function down to the implementation. The most abstract design specifications, from which almost all details of the realization have been abstracted away, are simple and easy to understand, and are used to ensure that the system functions reliably and as intended. A succession of lower-level specifications refines these into more detailed, and more complex, views of the system design, culminating in the Pascal implementation. The report also describes the rigorous mechanical proof that the abstract specifications are satisfied by the actual implementation.

  17. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  19. 46 CFR 162.050-33 - Bilge alarm: Design specification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 6 2014-10-01 2014-10-01 false Bilge alarm: Design specification. 162.050-33 Section....050-33 Bilge alarm: Design specification. (a) This section contains requirements that apply to bilge alarms. (b) Each bilge alarm must be designed to meet the requirements for an oil content meter in §...

  20. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    (a) This section contains requirements that apply to cargo monitors. (b) Each monitor must be designed so that it is calibrated by a means that does not...

  1. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    (a) This section contains requirements that apply to cargo monitors. (b) Each monitor must be designed so that it is calibrated by a means that does not...

  2. 46 CFR 162.050-33 - Bilge alarm: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    (a) This section contains requirements that apply to bilge alarms. (b) Each bilge alarm must be designed to meet the requirements for an oil content meter in §...

  3. Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung

    2016-07-01

    In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.
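
The gain-search step above can be illustrated with a minimal quantum-inspired evolutionary algorithm. This is a generic sketch, not the authors' controller design: Q-bit genes hold amplitude angles, observation collapses each gene to a bit, and a rotation gate pulls the population toward the best bit string seen so far. A toy OneMax fitness (count of ones) stands in for the controller-performance objective; all names and parameters are illustrative.

```python
import math
import random

def qiea_onemax(n_bits=16, pop=8, gens=60, seed=1):
    """Minimal quantum-inspired EA sketch: each gene stores an amplitude
    angle a, and observation yields bit 1 with probability sin(a)^2.
    A rotation gate nudges every individual toward the best observed
    bit string (toy OneMax fitness stands in for a real objective)."""
    rng = random.Random(seed)
    theta = 0.05 * math.pi                 # rotation step per generation
    # start every gene at pi/4: an unbiased 50/50 superposition
    angles = [[math.pi / 4] * n_bits for _ in range(pop)]
    best_bits, best_fit = None, -1
    for _ in range(gens):
        for ind in angles:
            # "observe" the individual: collapse each gene to 0 or 1
            bits = [1 if rng.random() < math.sin(a) ** 2 else 0 for a in ind]
            fit = sum(bits)                # OneMax: number of ones
            if fit > best_fit:
                best_bits, best_fit = bits, fit
            # rotate each gene's angle toward the corresponding best bit
            for j, b in enumerate(best_bits):
                target = math.pi / 2 if b else 0.0
                if ind[j] < target:
                    ind[j] = min(ind[j] + theta, target)
                else:
                    ind[j] = max(ind[j] - theta, target)
    return best_bits, best_fit
```

In the paper's setting, the bit string would encode candidate control gains rather than a OneMax genome, and fitness would come from simulating the closed-loop response.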

  4. AFEII Analog Front End Board Design Specifications

    SciTech Connect

    Rubinov, Paul; /Fermilab

    2005-04-01

    This document describes the design of the 2nd iteration of the Analog Front End Board (AFEII), which has the function of receiving charge signals from the Central Fiber Tracker (CFT) and providing digital hit pattern and charge amplitude information from those charge signals. This second iteration is intended to address limitations of the current AFE (referred to as AFEI in this document). These limitations become increasingly deleterious to the performance of the Central Fiber Tracker as instantaneous luminosity increases. The limitations are inherent in the design of the key front end chips on the AFEI board (the SVXIIe and the SIFT) and the architecture of the board itself. The key limitations of the AFEI are: (1) SVX saturation; (2) Discriminator to analog readout cross talk; (3) Tick to tick pedestal variation; and (4) Channel to channel pedestal variation. The new version of the AFE board, AFEII, addresses these limitations by use of a new chip, the TriP-t and by architectural changes, while retaining the well understood and desirable features of the AFEI board.

  5. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    (a) A separator must be designed to operate in each plane that forms an angle of 22.5° with the plane of its normal operating position. (b) The electrical components of...

  6. OPTIMIZATION OF DESIGN SPECIFICATIONS FOR LARGE DRY COOLING SYSTEMS

    EPA Science Inventory

    The report presents a methodology for optimizing design specifications of large, mechanical-draft, dry cooling systems. A multivariate, nonlinear, constrained optimization technique searches for the combination of design variables to determine the cooling system with the lowest a...

  7. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. A LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
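
The adaptive time-delay estimation described above can be sketched with a plain LMS filter: adapt an FIR filter so that filtered sensor 1 approximates sensor 2, then read the delay off the peak filter tap. The tap count, step size, and 3-sample delay below are illustrative, not values from the thesis.

```python
import random

def lms_time_delay(x, d, n_taps=9, mu=0.01, epochs=5):
    """LMS time-delay estimator sketch: adapt FIR weights w so that
    sum_k w[k]*x[n-k] tracks d[n]; after convergence, the index of the
    largest weight estimates the delay (in samples) of d relative to x."""
    w = [0.0] * n_taps
    for _ in range(epochs):
        for n in range(n_taps, len(x)):
            y = sum(w[k] * x[n - k] for k in range(n_taps))  # filter output
            e = d[n] - y                                     # error signal
            for k in range(n_taps):
                w[k] += mu * e * x[n - k]                    # LMS update
    return w.index(max(w))

# synthetic check: white noise delayed by 3 samples
rng = random.Random(0)
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
d = [0.0] * 3 + x[:-3]
estimated_delay = lms_time_delay(x, d)
```

For bearing evaluation, the estimated delays between sensor pairs would then be converted to an arrival angle from the array geometry.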

  8. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

  9. A homogeneous superconducting magnet design using a hybrid optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ni, Zhipeng; Wang, Qiuliang; Liu, Feng; Yan, Luguang

    2013-12-01

    This paper employs a hybrid optimization algorithm with a combination of linear programming (LP) and nonlinear programming (NLP) to design the highly homogeneous superconducting magnets for magnetic resonance imaging (MRI). The whole work is divided into two stages. The first LP stage provides a global optimal current map with several non-zero current clusters, and the mathematical model for the LP was updated by taking into account the maximum axial and radial magnetic field strength limitations. In the second NLP stage, the non-zero current clusters were discretized into practical solenoids. The superconducting conductor consumption was set as the objective function both in the LP and NLP stages to minimize the construction cost. In addition, the peak-peak homogeneity over the volume of imaging (VOI), the scope of 5 Gauss fringe field, and maximum magnetic field strength within superconducting coils were set as constraints. The detailed design process for a dedicated 3.0 T animal MRI scanner was presented. The homogeneous magnet produces a magnetic field quality of 6.0 ppm peak-peak homogeneity over a 16 cm by 18 cm elliptical VOI, and the 5 Gauss fringe field was limited within a 1.5 m by 2.0 m elliptical region.

  10. Orbit design and estimation for surveillance missions using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Abdelkhalik, Osama Mohamed Omar

    2005-11-01

    The problem of observing a given set of Earth target sites within an assigned time frame is examined. Attention is given mainly to visiting these sites as sub-satellite nadir points. Solutions to this problem in the literature require thrusters to continuously maneuver the satellite from one site to another. A natural solution is proposed: a gravitational orbit that enables the spacecraft to satisfy the mission requirements without maneuvering. Optimization of a penalty function is performed to find natural solutions for satellite orbit configurations. The penalty function depends on the mission objectives; two are considered, maximum observation time and maximum resolution. The penalty function has multiple minima, so a genetic algorithm technique is used to solve the problem. When no single orbit satisfies the mission requirements, a multi-orbit solution is proposed, in which the set of target sites is split into two groups and the developed algorithm searches for a natural solution for each group. The satellite must then be maneuvered between the two solution orbits. Genetic algorithms are used to find the optimal orbit transfer between the two orbits using impulsive thrusters. A new formulation for solving the orbit maneuver problem with genetic algorithms is developed; it searches for a minimum-fuel maneuver and guarantees that the satellite is transferred exactly to the final orbit even if the solution is non-optimal. The results demonstrate the feasibility of finding natural solutions for many case studies. The design of suitable satellite constellations for Earth-observing applications is also addressed for two cases: remote sensing missions covering a particular region with high frequency and small swath width, and interferometric radar Earth observation missions.
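
The penalty-minimization step lends itself to a bare-bones real-coded genetic algorithm sketch. The toy multi-minima penalty below stands in for the orbit-design penalty function; the population size, operators, and bounds are all illustrative assumptions.

```python
import math
import random

def ga_minimize(penalty, bounds, pop=40, gens=80, seed=2):
    """Bare-bones real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and two-member elitism, minimizing a penalty
    function with multiple minima."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(max(v, lo), hi)
    popn = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(popn, key=penalty)
        children = ranked[:2]              # elitism: keep the two best
        while len(children) < pop:
            # two size-3 tournaments pick the parents
            a = min(rng.sample(ranked, 3), key=penalty)
            b = min(rng.sample(ranked, 3), key=penalty)
            # blend crossover (midpoint) plus Gaussian mutation
            children.append(clip(0.5 * (a + b) + rng.gauss(0.0, 0.3)))
        popn = children
    return min(popn, key=penalty)

# toy multi-minima penalty with its global minimum at x = 0
penalty = lambda x: x * x + 2.0 * (1.0 - math.cos(3.0 * x))
best = ga_minimize(penalty, (-5.0, 5.0))
```

In the orbit-design setting, the chromosome would encode orbital elements and the penalty would score ground-track coverage of the target sites rather than this one-dimensional toy.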

  11. A Design Procedure for the Applications-Specific Electric Motors

    NASA Astrophysics Data System (ADS)

    Hoshino, Akihiro; Isobe, Shin-Ichi; Morimoto, Masayuki; Kosaka, Takashi; Matsui, Nobuyuki

    A design procedure for Applications-Specific Electric Motors (ASEM) is proposed. The procedure addresses the design of a permanent magnet synchronous motor that fulfills required typical operating points under restrictions on dimensions and power source conditions. It is composed of two stages, a rough design and an accurate design. The rough design finds a permissible region of combinations of motor constants that satisfy the given typical operating points under the given power source conditions. Within the obtained permissible region of motor constants, the accurate design completes the detailed motor design, determining the dimensions, the winding specifications, and the constituent materials. Among the several designed motors, the one with the highest fitness from the standpoints of efficiency, manufacturability, and cost is finally selected. Experimental studies show that a motor designed using the proposed procedure satisfies the requirements of the target application.

  12. Design of Protein-Protein Interactions with a Novel Ensemble-Based Scoring Algorithm

    NASA Astrophysics Data System (ADS)

    Roberts, Kyle E.; Cushing, Patrick R.; Boisguerin, Prisca; Madden, Dean R.; Donald, Bruce R.

    Protein-protein interactions (PPIs) are vital for cell signaling, protein trafficking and localization, gene expression, and many other biological functions. Rational modification of PPI targets provides a mechanism to understand their function and importance. However, PPI systems often have many more degrees of freedom and flexibility than the small-molecule binding sites typically targeted by protein design algorithms. To handle these challenging design systems, we have built upon the computational protein design algorithm K* [8,19] to develop a new design algorithm to study protein-protein and protein-peptide interactions. We validated our algorithm through the design and experimental testing of novel peptide inhibitors.

  13. Global and Local Optimization Algorithms for Optimal Signal Set Design

    PubMed Central

    Kearsley, Anthony J.

    2001-01-01

    The problem of choosing an optimal signal set for non-Gaussian detection was reduced to a smooth inequality constrained mini-max nonlinear programming problem by Gockenbach and Kearsley. Here we consider the application of several optimization algorithms, both global and local, to this problem. The most promising results are obtained when special-purpose sequential quadratic programming (SQP) algorithms are embedded into stochastic global algorithms.
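
The embedding of a local solver inside a stochastic global search can be sketched as a random-restart loop. A plain projected gradient step stands in for the special-purpose SQP solver, and the two-well objective is illustrative, not the signal-set design problem.

```python
import random

def multistart_minimize(f, grad, bounds, n_starts=20, steps=200, lr=0.02, seed=0):
    """Stochastic-global / local-solver sketch: random restarts choose
    starting points, a local descent polishes each one, and the best
    polished point wins."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for _ in range(steps):
            # projected gradient step, kept inside the box constraint
            x = min(max(x - lr * grad(x), lo), hi)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# two-well test objective: local minimum near x = 2, global near x = -2
f = lambda x: (x * x - 4.0) ** 2 + x
grad = lambda x: 4.0 * x * (x * x - 4.0) + 1.0
best_x, best_f = multistart_minimize(f, grad, (-3.0, 3.0))
```

Restarts that land in the right-hand basin polish to the inferior local minimum near x = 2; only the global loop's comparison across restarts recovers the better minimum near x = -2, which is the division of labor the abstract describes.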

  14. Gateway design specification for fiber optic local area networks

    NASA Technical Reports Server (NTRS)

    1985-01-01

    This is a Design Specification for a gateway to interconnect fiber optic local area networks (LAN's). The internetworking protocols for a gateway device that will interconnect multiple local area networks are defined. This specification serves as input for preparation of detailed design specifications for the hardware and software of a gateway device. General characteristics to be incorporated in the gateway such as node address mapping, packet fragmentation, and gateway routing features are described.

  15. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  16. UXO Engineering Design. Technical Specification and Conceptual Design

    SciTech Connect

    Beche, J-F.; Doolittle, L.; Greer, J.; Lafever, R.; Radding, Z.; Ratti, A.; Yaver, H.; Zimmermann, S.

    2005-04-23

    The design and fabrication of the UXO detector pose numerous challenges and are an important component of the success of this study. This section describes the overall engineering approach, as well as some of the technical details that brought us to the present design. In general, an array of sensor coils measures the signal generated by the UXO object in response to a stimulation provided by the driver coil. The information related to the location, shape, and properties of the object is derived from the analysis of the measured data. Each sensor coil is instrumented with a waveform digitizer operating at a nominal digitization rate of 100 kSamples per second. The sensor coils record both the large transient pulse of the driver coil and the UXO object's response pulse. The latter is smaller in amplitude and must be extracted from the large transient signal. The resolution required is 16 bits over a dynamic range of at least 140 dB. The useful signal bandwidth of the application extends from DC to 40 kHz. Low distortion in each component is crucial in order to maintain excellent linearity over the full dynamic range and to minimize the calibration procedure. The electronics must be made as compact as possible so that its metallic parts contribute a minimal signature response. Also, because of a field-portability requirement, the power consumption of the instrument must be kept as low as possible. The theory and results of numerical and experimental studies that led to the proof-of-principle multitransmitter-multireceiver Active ElectroMagnetic (AEM) system, which can not only accurately detect but also characterize and discriminate UXO targets, are summarized in LBNL report 53962: ''Detection and Classification of Buried Metallic Objects, UX-1225''.

  17. Designing Stochastic Optimization Algorithms for Real-world Applications

    NASA Astrophysics Data System (ADS)

    Someya, Hiroshi; Handa, Hisashi; Koakutsu, Seiichi

    This article presents a review of recent advances in stochastic optimization algorithms. Novel algorithms achieving highly adaptive and efficient searches, theoretical analyses to deepen our understanding of search behavior, successful implementation on parallel computers, attempts to build benchmark suites for industrial use, and techniques applied to real-world problems are included. A list of resources is provided.

  18. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution in a multiphase, multiproduct reverse supply chain that handles defects returned to original manufacturers, and develops hybrid algorithms, Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. A case study of a multiphase, multiproduct reverse supply chain network demonstrates the suitability of the decision model and the applicability of the algorithms. The hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057
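
The flavor of such hybrids can be sketched by grafting a simulated-annealing acceptance rule onto the personal-best update of particle swarm optimization. This toy minimizes a one-dimensional function, not the paper's supply-chain model; all coefficients and the cooling schedule are illustrative.

```python
import math
import random

def pso_sa(f, bounds, n_particles=15, iters=120, seed=3):
    """Toy PSO-SA hybrid: standard inertia/cognitive/social velocity
    update, but a worse point may still replace a personal best with a
    temperature-dependent probability (the simulated-annealing twist)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pb = list(xs)                          # personal bests
    gb = min(xs, key=f)                    # global best
    temp = 1.0
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (0.7 * vs[i]
                     + 1.5 * rng.random() * (pb[i] - xs[i])
                     + 1.5 * rng.random() * (gb - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            delta = f(xs[i]) - f(pb[i])
            # SA acceptance: always take improvements, sometimes take worse
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
                pb[i] = xs[i]
                if f(pb[i]) < f(gb):
                    gb = pb[i]             # global best only ever improves
        temp *= 0.95                       # cool down
    return gb

f = lambda x: (x - 1.0) ** 2
gb = pso_sa(f, (-10.0, 10.0))
```

The SA acceptance keeps early exploration alive; as the temperature decays, the update degenerates to plain PSO and the swarm contracts onto the global best.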

  19. Transitioning from conceptual design to construction performance specification

    NASA Astrophysics Data System (ADS)

    Jeffers, Paul; Warner, Mark; Craig, Simon; Hubbard, Robert; Marshall, Heather

    2012-09-01

    On successful completion of a conceptual design review by a funding agency or customer, there is a transition phase before construction contracts can be placed. The nature of this transition phase depends on the project's approach to construction and the particular subsystem being considered. There are, generically, two approaches: project retention of design authority with issuance of build-to-print contracts, or issuance of subsystem performance specifications with controlled interfaces. This paper relates to the latter, where a proof of concept (conceptual or reference design) is translated into performance-based subsystem specifications for competitive tender. This translation is not a straightforward process, and there are a number of issues to consider along the way. The paper deals primarily with the telescope mount and enclosure subsystems. The main subjects considered are: • Typical status of the design at Conceptual Design Review compared with the desired status of specifications and Interface Control Documents at Request for Quotation. • Options for capture and tracking of system requirements flowed down from science/operating requirements and sub-system requirements, and functional requirements derived from the reference design. • Requirements that may come specifically from the contracting approach. • Methods for effective use of reference design work without compromising a performance-based specification. • Management of the project team's expectations relating to the design. • Effects on cost estimates in moving from reference design to actual. This paper is based on experience and lessons learned through this process on both the VISTA and ATST projects.

  20. Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    NASA Technical Reports Server (NTRS)

    Keller, Richard M. (Editor); Barstow, David; Lowry, Michael R.; Tong, Christopher H.

    1992-01-01

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across the particular software design domain include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interface.

  1. SEPAC flight software detailed design specifications, volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The detailed design specifications (as built) for the SEPAC flight software are defined. The design includes a description of the total software system and of each individual module within the system. The design specifications describe the decomposition of the software system into its major components. The system structure is expressed in the following forms: the control-flow hierarchy of the system, the data-flow structure of the system, the task hierarchy, the memory structure, and the software-to-hardware configuration mapping. The component design description includes details on the following elements: register conventions, module (subroutine) invocation, module functions, interrupt servicing, data definitions, and database structure.

  2. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    Topics discussed include methods for the development of logic design together with algorithms for failure testing, a method for designing logic for ultra-large-scale integration, an extension of quantum calculus to describe the functional behavior of a mechanism component by component and to compute tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output two-level minimization problem.

  3. The potential of genetic algorithms for conceptual design of rotor systems

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Wells, Valana L.; Laananen, David H.

    1993-01-01

    The capabilities of genetic algorithms as a non-calculus based, global search method make them potentially useful in the conceptual design of rotor systems. Coupling reasonably simple analysis tools to the genetic algorithm was accomplished, and the resulting program was used to generate designs for rotor systems to match requirements similar to those of both an existing helicopter and a proposed helicopter design. This provides a comparison with the existing design and also provides insight into the potential of genetic algorithms in design of new rotors.

  4. An efficient algorithm for systematic analysis of nucleotide strings suitable for siRNA design

    PubMed Central

    2011-01-01

    Background: The "off-target" silencing effect hinders the development of siRNA-based therapeutic and research applications. Existing solutions for finding possible locations of siRNA seats within a large database of genes are either too slow, miss a portion of the targets, or are simply not designed to handle a very large number of queries. We propose a new approach that reduces the computational time as compared to existing techniques.

    Findings: The proposed method employs tree-based storage in the form of a modified truncated suffix tree to sort all possible short substrings within a given set of strings (i.e., a transcriptome). Using the new algorithm, we pre-computed a list of the best siRNA locations within each human gene ("siRNA seats"). siRNAs designed to reside within siRNA seats are less likely to hybridize off-target. These siRNA seats could be used as input for the traditional "set-of-rules" type of siRNA design software. The list of siRNA seats is available through a publicly available database located at http://web.cos.gmu.edu/~gmanyam/siRNA_db/search.php

    Conclusions: In an attempt to perform top-down prediction of human siRNAs with minimized off-target hybridization, we developed an efficient algorithm that employs suffix-tree-based storage of substrings. Applications of this approach are not limited to optimal siRNA design; it can also be useful for other tasks involving selection of characteristic strings specific to individual genes. These strings could then be used as siRNA seats, as specific probes for gene expression studies by oligonucleotide-based microarrays, for the design of molecular beacon probes for real-time PCR and, generally, for any type of PCR primers. PMID:21619643
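
The core indexing idea can be sketched with a hash map of k-mers standing in for the paper's modified truncated suffix tree: index every length-k substring of every gene and keep those that occur in exactly one gene. The two toy "transcripts" below are illustrative, not real sequences.

```python
from collections import defaultdict

def unique_kmers(genes, k):
    """For each gene, return the length-k substrings found in that gene
    and in no other -- the gene-specific strings from which siRNA seats
    (or specific probes/primers) would be chosen. A dict of k-mer ->
    owning-gene-ids replaces the suffix tree (simpler, more memory)."""
    owners = defaultdict(set)          # k-mer -> ids of genes containing it
    for gid, seq in genes.items():
        for i in range(len(seq) - k + 1):
            owners[seq[i:i + k]].add(gid)
    unique = defaultdict(list)
    for kmer, who in owners.items():
        if len(who) == 1:
            unique[next(iter(who))].append(kmer)
    return dict(unique)

# two toy transcripts that differ in the middle: shared 4-mers are excluded
genes = {"geneA": "AUGGCUACG", "geneB": "AUGGCAACG"}
seats = unique_kmers(genes, 4)
```

The suffix tree buys the same answer with far less memory on a full transcriptome, since shared prefixes are stored once; the hash map makes the uniqueness criterion itself easy to see.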

  5. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
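
The error summary used in the validation reduces to simple percent-error arithmetic against the expert-segmentation reference. The dose values below are hypothetical stand-ins, not data from the study.

```python
def organ_dose_errors(auto_doses, expert_doses):
    """Percent error of each automated organ-dose estimate relative to
    the expert-segmentation reference, plus the median used to
    summarize per-organ accuracy."""
    errs = [abs(a - e) / e * 100.0 for a, e in zip(auto_doses, expert_doses)]
    s = sorted(errs)
    mid = len(s) // 2
    median = s[mid] if len(s) % 2 else 0.5 * (s[mid - 1] + s[mid])
    return errs, median

expert = [10.0, 12.5, 8.0, 20.0]   # hypothetical mean organ doses (mGy)
auto = [10.4, 12.0, 8.4, 19.0]     # hypothetical automated estimates
errs, median_err = organ_dose_errors(auto, expert)
```

In the study this computation is repeated over every (dataset, atlas) pair of the leave-one-out design, and the median and maximum are reported per organ region.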

  6. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  7. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.

    2016-01-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem with two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver handles the subproblems, finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and to automate the construction of globally optimal designs. The algorithm successfully reproduces several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. Our method is more general, however, merely requiring that the parameters of the model be estimated by numerical optimization. PMID:27330230

  8. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The work evaluated use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluated various extraction methods, and designed algorithms for the evaluation of IUE high dispersion spectra. It was concluded that use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
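
The Gaussian-core/Lorentzian-wings observation is what motivates the Voigt choice. A pseudo-Voigt blend, a standard cheap approximation to the true Voigt convolution, illustrates the idea; this is a generic sketch, not the NEWSIPS extraction code.

```python
import math

def pseudo_voigt(x, sigma, gamma, eta):
    """Pseudo-Voigt profile: a weighted sum of a Gaussian (which fits
    the line core well) and a Lorentzian (which fits the wings well).
    eta = 0 gives a pure Gaussian, eta = 1 a pure Lorentzian; the true
    Voigt profile is the convolution of the two."""
    gauss = math.exp(-x * x / (2.0 * sigma * sigma))
    lorentz = gamma * gamma / (x * x + gamma * gamma)
    return (1.0 - eta) * gauss + eta * lorentz
```

The detector-dependent gamma and sigma masks mentioned above correspond to letting the `gamma` and `sigma` arguments vary with position on the SWP detector.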

  9. Subsemble: an ensemble method for combining subset-specific algorithm fits

    PubMed Central

    Sapp, Stephanie; van der Laan, Mark J.; Canny, John

    2013-01-01

    Ensemble methods using the same underlying algorithm trained on different subsets of observations have recently received increased attention as practical prediction tools for massive datasets. We propose Subsemble: a general subset ensemble prediction method, which can be used for small, moderate, or large datasets. Subsemble partitions the full dataset into subsets of observations, fits a specified underlying algorithm on each subset, and uses a clever form of V-fold cross-validation to output a prediction function that combines the subset-specific fits. We give an oracle result that provides a theoretical performance guarantee for Subsemble. Through simulations, we demonstrate that Subsemble can be a beneficial tool for small to moderate sized datasets, and often has better prediction performance than the underlying algorithm fit just once on the full dataset. We also describe how to include Subsemble as a candidate in a SuperLearner library, providing a practical way to evaluate the performance of Subsemble relative to the underlying algorithm fit just once on the full dataset. PMID:24778462
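
    The partition-fit-combine scheme described above can be sketched in a simplified form (hypothetical least-squares metalearner standing in for Subsemble's configurable combination step; `fit` and `predict` are user-supplied callables, not part of the original abstract):

```python
import numpy as np

def subsemble_fit(X, y, fit, predict, n_subsets=3, n_folds=5, seed=0):
    """Sketch of the Subsemble idea: partition the data into subsets,
    fit one learner per subset, and learn combination weights for the
    subset-specific fits via V-fold cross-validation."""
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(len(X)), n_subsets)

    # Cross-validated level-one predictions: for each fold, refit each
    # subset learner without the fold's points, then predict the fold.
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    Z = np.zeros((len(X), n_subsets))
    for fold in folds:
        mask = np.ones(len(X), bool)
        mask[fold] = False
        for j, s in enumerate(subsets):
            train = s[mask[s]]
            model = fit(X[train], y[train])
            Z[fold, j] = predict(model, X[fold])

    # Combine the subset-specific fits with least-squares weights.
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)

    models = [fit(X[s], y[s]) for s in subsets]
    return lambda Xnew: np.column_stack([predict(m, Xnew) for m in models]) @ w
```

    For example, with a simple linear learner (`np.polyfit`/`np.polyval`) on noiseless data, the combined predictor reproduces the underlying line.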

  10. A Proposed India-Specific Algorithm for Management of Type 2 Diabetes.

    PubMed

    2016-06-01

    Several algorithms and guidelines have been proposed by countries and international professional bodies; however, no recent updated management algorithm is available for Asian Indians. Specifically, algorithms developed and validated in developed nations may not be relevant or applicable to patients in India because of several factors: early age of onset of diabetes, occurrence of diabetes in nonobese and sometimes lean people, differences in the relative contributions of insulin resistance and β-cell dysfunction, marked postprandial glycemia, frequent infections including tuberculosis, low access to healthcare and medications in people of low socioeconomic stratum, ethnic dietary practices (e.g., ingestion of high-carbohydrate diets), and inadequate education regarding hypoglycemia. All these factors should be considered when choosing an appropriate therapeutic option in this population. The proposed algorithm is simple, suggests less expensive drugs, and tries to provide an effective and comprehensive framework for delivery of diabetes therapy in primary care in India. The proposed guidelines agree with international recommendations in favoring individualization of therapeutic targets as well as modalities of treatment in a flexible manner suitable to the Indian population. PMID:26909751

  11. A Learning Design Ontology Based on the IMS Specification

    ERIC Educational Resources Information Center

    Amorim, Ricardo R.; Lama, Manuel; Sanchez, Eduardo; Riera, Adolfo; Vila, Xose A.

    2006-01-01

    In this paper, we present an ontology to represent the semantics of the IMS Learning Design (IMS LD) specification, a meta-language used to describe the main elements of the learning design process. The motivation for this work lies in the expressiveness limitations found in the current XML-Schema implementation of the IMS LD conceptual model. To…

  12. Using a Genetic Algorithm to Design Nuclear Electric Spacecraft

    NASA Technical Reports Server (NTRS)

    Pannell, William P.

    2003-01-01

    The basic approach to designing nuclear electric spacecraft is to generate a group of candidate designs, evaluate how "fit" each design is, and carry the best designs forward to the next generation. Some designs are eliminated; others are randomly modified and carried forward.
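
    The generate-evaluate-carry-forward loop described above is the standard genetic-algorithm skeleton. A minimal sketch with a toy fitness function (a stand-in, not the actual spacecraft design model) might look like:

```python
import random

def genetic_search(fitness, n_genes=8, pop_size=20, generations=50, seed=42):
    """Minimal generational GA: rank designs by fitness, eliminate the
    worst half, and refill the population with randomly modified copies
    (mutants) of the survivors, so the best designs are carried forward."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # see how "fit" each design is
        survivors = pop[: pop_size // 2]      # worst designs are eliminated
        children = []
        for parent in survivors:              # random modification (mutation)
            child = parent[:]
            child[rng.randrange(n_genes)] = rng.random()
            children.append(child)
        pop = survivors + children            # best carried to next generation
    return max(pop, key=fitness)

# Toy fitness: reward genes near 0.5 (a stand-in for a real design score).
best = genetic_search(lambda g: -sum((x - 0.5) ** 2 for x in g))
```

    Elitism (keeping the survivors unmodified) makes the best fitness monotonically non-decreasing across generations.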

  13. Designing a mirrored Howland circuit with a particle swarm optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Bertemes-Filho, Pedro; Negri, Lucas H.; Vincence, Volney C.

    2016-06-01

    Electrical impedance spectroscopy usually requires a wide bandwidth current source with high output impedance. Non-idealities of the operational amplifier (op-amp) degrade its performance. This work presents a particle swarm algorithm for extracting the main AC characteristics of the op-amp used to design a mirrored modified Howland current source circuit that satisfies both the required output current and impedance spectra while accommodating user specifications. Both resistive and biological loads were used in the simulations. The results showed that the algorithm can correctly identify the open-loop gain and the input and output resistance of the op-amp that best fit the performance requirements of the circuit. It was also shown that the higher the open-loop gain corner frequency, the higher the output impedance of the circuit. The algorithm could be a powerful tool for developing a desirable current source for different bioimpedance medical and clinical applications, such as cancer tissue characterisation and tissue cell measurements.
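
    A generic particle swarm loop of the kind used for such parameter extraction can be sketched as follows (toy sphere objective standing in for the op-amp/circuit fitting problem, which is not specified here):

```python
import random

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO (minimization): each particle tracks its personal best,
    the swarm tracks a global best, and velocities blend inertia with
    attraction toward both bests."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Clamp each coordinate to its search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    In the paper's setting, the objective would measure the mismatch between simulated and required output current/impedance spectra as a function of the op-amp parameters.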

  14. Utilizing the Hotelling template as a tool for CT image reconstruction algorithm design

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2012-02-01

    Design of image reconstruction algorithms for CT can be significantly aided by useful metrics of image quality. Useful metrics, however, are difficult to develop due to the high-dimensionality of the CT imaging system, lack of spatial invariance in the imaging system, and a high degree of correlation among the image voxels. Although true task-based evaluation on realistic imaging tasks can be time-consuming, and a given task may be insensitive to the image reconstruction algorithm, task-based metrics can still prove useful in many contexts. For example, model observers that mimic performance of the imaging system on specific tasks can provide a low-dimensional measure of image quality while still accounting for many of the salient properties of the system and object being scanned. In this work, ideal observer performance is computed on a single detection task. The modeled signal for detection is taken to be very small - size on the order of a detector bin - and inspection of the accompanying Hotelling template is suggested. We hypothesize that improved detection on small signals may be sensitive to the reconstruction algorithm. Further, we hypothesize that structurally simple Hotelling templates may correlate with high human observer performance.

  15. High-Resolution Snow Projections for Alaska: Regionally and seasonally specific algorithms

    NASA Astrophysics Data System (ADS)

    McAfee, S. A.; Walsh, J. E.; Rupp, S. T.

    2012-12-01

    The fate of Alaska's snow in a warmer world is of both scientific and practical concern. Snow projections are critical for understanding glacier mass balance and forest demographic changes, and for natural resource planning and decision making - such as hydropower facilities in southern and southeastern portions of the state and winter road construction and use in the northern portions. To meet this need, we have developed a set of regionally and seasonally specific statistical models estimating long-term average snow-day fraction from average monthly temperature in Alaska. The algorithms were based on temperature data and on daily precipitation and snowfall occurrence for 104 stations from the Global Historical Climatology Network. Although numerous models exist for estimating snow fraction from temperature, the algorithms we present here provide substantial improvements for Alaska. There are fundamental differences in the synoptic conditions across the state, and specific algorithms can accommodate this variability in the relationship between average monthly temperature and typical conditions during snowfall, rainfall, and dry spells. In addition, this set of simple algorithms, unlike more complex physically based models, can be easily and efficiently applied to a large number of future temperature trajectories, facilitating scenario-based planning approaches. Model fits are quite good, with mean errors of the snow-day fractions at most stations within 0.1 of the observed values, which range from 0 to 1, although larger average errors do occur at some sites during the transition seasons. Errors at specific stations are often stable in terms of sign and magnitude across the snowy season, suggesting that site-specific conditions can drive consistent deviations from mean regional conditions. Applying these algorithms to the gridded temperature projections downscaled by the Scenarios Network for Alaska and Arctic Planning allows us to provide decadal estimates of changes
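
    Statistical snow-day-fraction models of this general type are often fit as logistic curves of fraction versus monthly mean temperature. A minimal illustration on synthetic data (hypothetical functional form and parameters, not the Alaska station algorithms) might be:

```python
import numpy as np

def fit_snow_fraction(temps, fracs):
    """Fit f(T) = 1 / (1 + exp((T - t0) / s)) by coarse grid search.
    Snow-day fraction falls from 1 toward 0 as monthly mean temperature
    rises; t0 is the 50% point and s controls the transition width."""
    best = None
    for t0 in np.linspace(-10, 10, 81):      # candidate midpoints, 0.25 C apart
        for s in np.linspace(0.5, 5, 46):    # candidate widths, 0.1 C apart
            pred = 1 / (1 + np.exp((temps - t0) / s))
            sse = np.sum((fracs - pred) ** 2)
            if best is None or sse < best[0]:
                best = (sse, t0, s)
    return best[1], best[2]
```

    A real application would fit such a curve separately per region and season, as the abstract describes, and then apply it to projected monthly temperatures.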

  16. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

    Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent if reviewing fine anatomical structures such as ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered acceptably aligned.

  17. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  18. DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...

  19. 3D-design exploration of CNN algorithms

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    2011-05-01

    Multi-dimensional algorithms are hard to implement on classical platforms. Pipelining may exploit instruction-level parallelism, but not in the presence of simultaneous data; threads optimize only within the given restrictions. Tiled architectures do add a dimension to the solution space. With locally a large register store, data parallelism is handled, but only to a dimension. 3-D technologies are meant to add a dimension in the realization. Applied on the device level, it makes each computational node smaller. The interconnections become shorter and hence the network will be condensed. Such advantages will be easily lost at higher implementation levels unless 3-D technologies as multi-cores or chip stacking are also introduced. 3-D technologies scale in space, where (partial) reconfiguration scales in time. The optimal selection over the various implementation levels is algorithm dependent. The paper discusses such principles while applied on the scaling of cellular neural networks (CNN). It illustrates how stacking of reconfigurable chips supports many algorithmic requirements in a defect-insensitive manner. Further the paper explores the potential of chip stacking for multi-modal implementations in a reconfigurable approach to heterogeneous architectures for algorithm domains.

  20. High pressure humidification columns: Design equations, algorithm, and computer code

    SciTech Connect

    Enick, R.M.; Klara, S.M.; Marano, J.J.

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed tower, humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation, and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties, and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air, and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.

  1. Designing Domain-Specific HUMS Architectures: An Automated Approach

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Agarwal, Neha; Kumar, Pramod; Sundaram, Parthiban

    2004-01-01

    The HUMS automation system automates the design of HUMS architectures. The automated design process involves selection of solutions from a large space of designs as well as pure synthesis of designs. Hence the whole objective is to efficiently search for or synthesize designs or parts of designs in the database and to integrate them to form the entire system design. The automation system adopts two approaches in order to produce the designs: (a) a bottom-up approach and (b) a top-down approach. Both approaches are endowed with a suite of quantitative and qualitative techniques that enable (a) the selection of matching component instances, (b) the determination of design parameters, (c) the evaluation of candidate designs at component-level and at system-level, (d) the performance of cost-benefit analyses, (e) the performance of trade-off analyses, etc. In short, the automation system attempts to capitalize on the knowledge developed from years of experience in engineering, system design, and operation of HUMS systems in order to economically produce optimal and domain-specific designs.

  2. Algorithm Of Revitalization Programme Design For Housing Estates

    NASA Astrophysics Data System (ADS)

    Ostańska, Anna

    2015-09-01

    Demographic problems, the obsolescence of existing buildings, an unstable economy, and misunderstanding of the mechanisms that turn city quarters into areas in need of intervention result in the implementation of improvement measures that prove inadequate. The paper puts forward an algorithm for designing revitalization programmes for housing developments and presents its implementation. It also shows the effects of three-way diagnostic tests run periodically over 10 years, in correlation with the concept of settlement management.

  3. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  4. Design specifications for manufacturability of MCM-C multichip modules

    SciTech Connect

    Allen, C.; Blazek, R.; Desch, J.; Elarton, J.; Kautz, D.; Markley, D.; Morgenstern, H.; Stewart, R.; Warner, L.

    1995-06-01

    The scope of this document is to establish design guidelines for electronic circuitry packaged as multichip modules of the ceramic substrate variety, although many of these guidelines are applicable to other types of multichip modules. The guidelines begin with prerequisite information which must be developed between customer and designer of the multichip module. The core of the guidelines focuses on the many considerations that must be addressed during the multichip module design. The guidelines conclude with the resulting deliverables from the design which satisfy customer requirements and/or support the multichip module fabrication and testing processes. Considerable supporting information, checklists, and design constraints are captured in specific appendices and used as reference information in the main body text. Finally some real examples of multichip module design are presented.

  5. Use of Algorithm of Changes for Optimal Design of Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.

    2010-05-01

    For economic reasons, the optimal design of heat exchangers is required. Design of a heat exchanger is usually based on an iterative process. The design conditions, equipment geometries, and the heat transfer and friction factor correlations are all involved in the process. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost. The process is cumbersome, and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] for designing heat exchangers, with results that outperformed the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of a shell-tube heat exchanger [3]. This new method, the algorithm of changes, is based on the I Ching and was developed originally by the author. In the algorithm, the hexagram operations of the I Ching are generalized to the binary-string case, and an iterative procedure that imitates I Ching inference is defined. On the basis of [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (or optimized) variables. The cost of the heat exchanger was used as the objective function. Through the case study, the results show that the algorithm of changes is comparable to the GA method. Both methods can find the optimal solution in a short time. However, without interchanging information between binary strings, the algorithm of changes has an advantage over GA in parallel computation.

  6. Design Genetic Algorithm Optimization Education Software Based Fuzzy Controller for a Tricopter Fly Path Planning

    ERIC Educational Resources Information Center

    Tran, Huu-Khoa; Chiou, Juing-Shian; Peng, Shou-Tao

    2016-01-01

    In this paper, a Genetic Algorithm Optimization (GAO) education-software-based Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is designed. The generated flight trajectories integrate Scaling Factor (SF) fuzzy controller gains optimized by the GAO algorithm. The…

  7. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy.

    PubMed

    Schuemann, J; Dowdell, S; Grassberger, C; Min, C H; Paganetti, H

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculation methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2

  8. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculation methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be

  9. Specification, Design, and Analysis of Advanced HUMS Architectures

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    2004-01-01

    During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas. They are: (a) To improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) To improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) To evaluate the software architecture. 2) We have defined a new architectural language called HADL or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using the currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) Selection of solutions from a large space of designs; (b) Synthesis of designs. However, the automation process is not an absolute Artificial Intelligence (AI) approach though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. 
Since searching is adopted as the main technique, the challenges involved are: (a) To minimize the effort in searching the database where a very large number of possibilities exist; (b) To develop representations that could conveniently allow us to depict design knowledge evolved over many years; (c) To capture the required information that aid the

  10. Evaluation of a segmentation algorithm designed for an FPGA implementation

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Schönermark, Maria; Huber, Felix

    2013-10-01

    The present work has to be seen in the context of real-time on-board image evaluation of optical satellite data. With on-board image evaluation, more useful data can be acquired, the time to get requested information can be decreased, and new real-time applications are possible. Because of its relatively high processing power in comparison to its low power consumption, Field Programmable Gate Array (FPGA) technology has been chosen as an adequate hardware platform for image processing tasks. One fundamental part of image evaluation is image segmentation. It is a basic tool to extract spatial image information which is very important for many applications such as object detection. Therefore a special segmentation algorithm using the advantages of FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation evaluation is a difficult task. The most common way of evaluating the performance of a segmentation method is still subjective evaluation, in which human experts determine the quality of a segmentation. This approach is not compatible with our needs. The evaluation process has to provide a reasonable quality assessment, should be objective, easy to interpret, and simple to execute. To reach these requirements a so-called Segmentation Accuracy Equality norm (SA EQ) was created, which compares the difference of two segmentation results. It can be shown that this norm is suitable as a first quality measure. Due to its objectivity and simplicity the algorithm has been tested on a specially chosen synthetic test model. In this work the most important results of the quality assessment will be presented.

  11. Design of broadband omnidirectional antireflection coatings using ant colony algorithm.

    PubMed

    Guo, X; Zhou, H Y; Guo, S; Luan, X X; Cui, W K; Ma, Y F; Shi, L

    2014-06-30

    An optimization method based on the ant colony algorithm (ACA) is described for optimizing an antireflection (AR) coating system with broadband omnidirectional characteristics for silicon solar cells, incorporating the solar spectrum (AM1.5 radiation). This is the first time the ACA method has been used to optimize an AR coating system. In this paper, for the wavelength range from 400 nm to 1100 nm, the optimized three-layer AR coating system could provide an average reflectance of 2.98% for incident angles from 0° to 80° and 6.56% for incident angles from 0° to 90°. PMID:24978076

  12. Comparing State-of-the-Art Evolutionary Multi-Objective Algorithms for Long-Term Groundwater Monitoring Design

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Kollat, J. B.

    2005-12-01

    This study demonstrates the effectiveness of a modified version of Deb's Non-Dominated Sorted Genetic Algorithm II (NSGAII), which the authors have named the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (Epsilon-NSGAII), at solving a four objective long-term groundwater monitoring (LTM) design test case. The Epsilon-NSGAII incorporates prior theoretical competent evolutionary algorithm (EA) design concepts and epsilon-dominance archiving to improve the original NSGAII's efficiency, reliability, and ease-of-use. This algorithm eliminates much of the traditional trial-and-error parameterization associated with evolutionary multi-objective optimization (EMO) through epsilon-dominance archiving, dynamic population sizing, and automatic termination. The effectiveness and reliability of the new algorithm are compared to the original NSGAII as well as two other benchmark multi-objective evolutionary algorithms (MOEAs), the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (Epsilon-MOEA) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). These MOEAs have been selected because they have been demonstrated to be highly effective at solving numerous multi-objective problems. The results presented in this study indicate superior performance of the Epsilon-NSGAII in terms of the hypervolume indicator, unary Epsilon-indicator, and first-order empirical attainment function metrics. In addition, the runtime metric results indicate that the diversity and convergence dynamics of the Epsilon-NSGAII are competitive with or superior to those of the SPEA2, with both algorithms greatly outperforming the NSGAII and Epsilon-MOEA in terms of these metrics. The improvements in performance of the Epsilon-NSGAII over its parent algorithm the NSGAII demonstrate that the application of Epsilon-dominance archiving, dynamic population sizing with archive injection, and automatic termination greatly improve algorithm efficiency and reliability. In addition, the usability of
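
    The epsilon-dominance archiving that distinguishes these algorithms can be illustrated with a minimal sketch (simplified tie-breaking within a shared box; real implementations also compare distance to the box corner):

```python
import math

def eps_box(obj, eps):
    """Epsilon-box index of an objective vector (minimization assumed)."""
    return tuple(math.floor(v / eps) for v in obj)

def dominates(a, b):
    """Pareto dominance for minimization: no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_archive_insert(archive, candidate, eps=0.25):
    """Maintain an epsilon-dominance archive: at most one solution per
    epsilon-box, and no archived box may dominate another. Within a shared
    box the incumbent wins unless the candidate Pareto-dominates it."""
    cbox = eps_box(candidate, eps)
    for a in list(archive):
        abox = eps_box(a, eps)
        if abox == cbox:
            if dominates(candidate, a):
                archive.remove(a)
            else:
                return archive          # incumbent keeps the box
        elif dominates(abox, cbox):
            return archive              # candidate's box is dominated
        elif dominates(cbox, abox):
            archive.remove(a)           # candidate's box dominates this entry
    archive.append(candidate)
    return archive
```

    Bounding the archive at one point per box is what gives epsilon-dominance its resolution/diversity control and enables the dynamic population sizing the abstract mentions.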

  13. An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1999-01-01

    The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.

  14. INCORPORATING ENVIRONMENTAL AND ECONOMIC CONSIDERATIONS INTO PROCESS DESIGN: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...

  15. NASA software specification and evaluation system design, part 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A survey and analysis of the existing methods, tools, and techniques employed in the development of software are presented, along with recommendations for the construction of reliable software. Functional designs for the software specification language and the data base verifier are presented.

  16. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    PubMed Central

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-01-01

    The purpose of this study was to investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for 7 disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head & neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and Monte Carlo algorithms to obtain the average range differences (ARD) and root mean square deviation (RMSD) for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation (ADD) of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing Monte Carlo dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head & neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head & neck treatments. We conclude that currently used generic range uncertainty margins in proton therapy should be redefined site specific and that complex geometries may require a field specific
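    The site-specific recipes quoted above all take the form "a% of range + b mm" in water-equivalent distance. A trivial helper makes the arithmetic explicit; the numbers below come from the abstract, while the function name and example range are our own.

```python
# Apply a proton range-margin recipe of the form a% + b mm
# (water-equivalent millimetres).
def range_margin(range_mm, pct, fixed_mm):
    """Margin in mm for a field of the given water-equivalent range."""
    return range_mm * pct / 100.0 + fixed_mm

# e.g. a hypothetical 100 mm liver/prostate field with the 2.8% + 1.2 mm recipe:
liver_margin = range_margin(100.0, 2.8, 1.2)    # 4.0 mm
# and a 200 mm field with the generic 6.3% + 1.2 mm recipe:
generic_margin = range_margin(200.0, 6.3, 1.2)  # 13.8 mm
```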

  17. Experiences with the hydraulic design of the high specific speed Francis turbine

    NASA Astrophysics Data System (ADS)

    Obrovsky, J.; Zouhar, J.

    2014-03-01

    The high specific speed Francis turbine is still a suitable alternative for the refurbishment of older hydro power plants with lower heads and worse cavitation conditions. This paper introduces the design process for such a turbine, together with a comparison of results from homologous model tests performed in the hydraulic laboratory of ČKD Blansko Engineering. The turbine runner was designed using an optimization algorithm and considering the high specific speed hydraulic profile; that is, the hydraulic profiles of the spiral case, the distributor, and the draft tube were taken from a Kaplan turbine. The optimization ran as an automatic cycle and was based on a simplex optimization method as well as on a genetic algorithm. The number of blades is shown to be the parameter that changes the resulting specific speed of the turbine between ns = 425 and 455, together with the cavitation characteristics. Minimizing cavitation on the blade surface as well as on the inlet edge of the runner blade was taken into account during the design process. The results of CFD analyses as well as the model tests are presented in the paper.

  18. An effective algorithm for the generation of patient-specific Purkinje networks in computational electrocardiology

    NASA Astrophysics Data System (ADS)

    Palamara, Simone; Vergara, Christian; Faggiano, Elena; Nobile, Fabio

    2015-02-01

    The Purkinje network is responsible for the fast and coordinated distribution of the electrical impulse in the ventricle that triggers its contraction. Therefore, it is necessary to model its presence to obtain an accurate patient-specific model of the ventricular electrical activation. In this paper, we present an efficient algorithm for the generation of a patient-specific Purkinje network, driven by measures of the electrical activation acquired on the endocardium. The proposed method provides a correction of an initial network, generated by means of a fractal law, and it is based on the solution of Eikonal problems both in the muscle and in the Purkinje network. We present several numerical results both in an ideal geometry with synthetic data and in a real geometry with patient-specific clinical measures. These results highlight an improvement of the accuracy provided by the patient-specific Purkinje network with respect to the initial one. In particular, a cross-validation test shows an accuracy increase of 19% when only the 3% of the total points are used to generate the network, whereas an increment of 44% is observed when a random noise equal to 20% of the maximum value of the clinical data is added to the measures.

  19. A design guide and specification for small explosive containment structures

    SciTech Connect

    Marchand, K.A.; Cox, P.A.; Polcyn, M.A.

    1994-12-01

    The design of structural containments for testing small explosive devices requires the designer to consider the various aspects of the explosive loading, i.e., shock and gas or quasistatic pressure. Additionally, if the explosive charge has the potential of producing damaging fragments, provisions must be made to arrest the fragments. This may require that the explosive be packed in a fragment-attenuating material, which also will affect the loads predicted for containment response. Material also may be added just to attenuate shock, in the absence of fragments. Three charge weights are used in the design. The actual charge is used to determine a design fragment. Blast loads are determined for a "design charge", defined as 125% of the operational charge in the explosive device. No yielding is permitted at the design charge weight. Blast loads are also determined for an over-charge, defined as 200% of the operational charge in the explosive device. Yielding, but no failure, is permitted at this over-charge. This guide emphasizes the calculation of loads and fragments for which the containment must be designed. The designer has the option of using simplified or complex design-analysis methods. Examples in the guide use readily available single degree-of-freedom (sdof) methods, plus static methods for equivalent dynamic loads. These are the common methods for blast resistant design. Some discussion of more complex methods is included. Generally, the designer who chooses more complex methods must be fully knowledgeable in their use and limitations. Finally, newly fabricated containments initially must be proof tested to 125% of the operational load and then inspected at regular intervals. This specification provides guidance for design, proof testing, and inspection of small explosive containment structures.

  20. A hybrid algorithm for transonic airfoil and wing design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Smith, Leigh A.

    1987-01-01

    The present method for the design of transonic airfoils and wings employs a predictor/corrector approach in which an analysis code calculates the flowfield for an initial geometry, then modifies it on the basis of the difference between calculated and target pressures. This allows the design method to be straightforwardly coupled with any existing analysis code, as presently undertaken with several two- and three-dimensional potential flow codes. The results obtained indicate that the method is robust and accurate, even in the cases of airfoils with strongly supercritical flow and shocks. The design codes are noted to require computational resources typical of current pure-inverse methods.

  1. Vision-based vehicle detection and tracking algorithm design

    NASA Astrophysics Data System (ADS)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  2. The design of flux-corrected transport (FCT) algorithms on structured grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    2005-12-01

    A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow field; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order algorithms, in flux form, in the various regions of the flow field. In this dissertation, we describe a set of design principles that significantly enhance the accuracy and robustness of FCT algorithms by enhancing the accuracy and robustness of each of the three components individually. These principles include the use of very high order spatial operators in the design of the high order fluxes, the use of non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. We show via standard test problems the kind of algorithm performance one can expect if these design principles are adhered to. We give examples of applications of these design principles in several areas of physics. Finally, we compare the performance of these enhanced algorithms with that of other recent front-capturing methods.
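    The three components enumerated above can be illustrated for 1-D linear advection on a periodic grid: upwind as the low-order scheme, Lax-Wendroff as the high-order scheme, and Zalesak's limiter blending their fluxes so the corrected solution cannot leave its local bounds. This is a textbook skeleton for orientation, not the enhanced algorithms the paper develops.

```python
# One flux-corrected transport (FCT) step for u_t + a*u_x = 0 with
# Courant number c in (0, 1) on a periodic grid (fluxes scaled by dt/dx).
def fct_step(u, c):
    n = len(u)
    ip = lambda i: (i + 1) % n
    im = lambda i: (i - 1) % n
    # Component 1: low-order (upwind) flux at face i+1/2 -- monotone.
    f_lo = [c * u[i] for i in range(n)]
    # Component 2: high-order (Lax-Wendroff) flux at face i+1/2.
    f_hi = [0.5 * c * (u[i] + u[ip(i)]) - 0.5 * c * c * (u[ip(i)] - u[i])
            for i in range(n)]
    # Antidiffusive fluxes and the low-order transported/diffused solution.
    a = [f_hi[i] - f_lo[i] for i in range(n)]
    td = [u[i] - (f_lo[i] - f_lo[im(i)]) for i in range(n)]
    # Component 3: Zalesak's limiter -- cap fluxes so no new extrema appear.
    umax = [max(u[im(i)], u[i], u[ip(i)], td[im(i)], td[i], td[ip(i)])
            for i in range(n)]
    umin = [min(u[im(i)], u[i], u[ip(i)], td[im(i)], td[i], td[ip(i)])
            for i in range(n)]
    Rp, Rm = [0.0] * n, [0.0] * n
    for i in range(n):
        p_in = max(0.0, a[im(i)]) - min(0.0, a[i])   # antidiffusion into cell i
        p_out = max(0.0, a[i]) - min(0.0, a[im(i)])  # antidiffusion out of cell i
        Rp[i] = min(1.0, (umax[i] - td[i]) / p_in) if p_in > 0.0 else 0.0
        Rm[i] = min(1.0, (td[i] - umin[i]) / p_out) if p_out > 0.0 else 0.0
    C = [min(Rp[ip(i)], Rm[i]) if a[i] >= 0.0 else min(Rp[i], Rm[ip(i)])
         for i in range(n)]
    return [td[i] - (C[i] * a[i] - C[im(i)] * a[im(i)]) for i in range(n)]
```

The update is conservative by construction (each face flux is added to one cell and subtracted from its neighbour), and the limiter keeps a top-hat profile bounded between its initial extrema.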

  3. General Structure Design for Fast Image Processing Algorithms Based upon FPGA DSP Slice

    NASA Astrophysics Data System (ADS)

    Wasfy, Wael; Zheng, Hong

    Our goal in this paper is to increase the speed and accuracy of fast image processing algorithms that compute image intensity with low-level 3x3 kernels, which differ in kernel but share the same parallel calculation method. The FPGA is one of the fastest embedded platforms available for implementing such algorithms. By using the DSP slice module inside the FPGA, we gain the advantages of the DSP slice: speed, accuracy, a higher number of bits in calculations, and flexible arithmetic, with 48-bit accuracy in addition and 18 x 18-bit accuracy in multiplication. Using a higher number of bits during calculations yields higher accuracy than the same calculations with fewer bits, while keeping FPGA resource usage as low as the algorithm's needs allow is an equally important goal; the recommended design therefore uses as few DSP slices as possible. To validate the design, the Gaussian filter and Sobel-x edge detector image processing algorithms were chosen for implementation. We also compare against another design, described later in this paper, which uses at most 12-bit accuracy in its additions and multiplications, to demonstrate the improvements in accuracy and calculation speed.
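    The 3x3 kernel computation that the FPGA design parallelizes can be modeled in software as a plain nested-loop convolution; the Sobel-x kernel below is one of the two algorithms the paper implements (the Gaussian filter being the other), and this reference model stands in for the DSP-slice datapath rather than describing it.

```python
# Software reference model of a low-level 3x3 kernel pass (valid region only).
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve3x3(img, kernel):
    """Apply a 3x3 kernel to a 2-D list-of-lists image; output shrinks by 2."""
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0                       # the multiply-accumulate a DSP slice performs
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y - 1][x - 1] = acc
    return out
```

On hardware, the nine multiply-accumulates run in parallel per pixel; a software model like this is what such an implementation would be checked against.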

  4. Optimisation of the design of shell and double concentric tubes heat exchanger using the Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Baadache, Khireddine; Bougriou, Chérif

    2015-10-01

    This paper presents the use of a Genetic Algorithm in the sizing of the shell and double concentric tube heat exchanger, where the objective function is the total cost: the sum of the capital cost of the device and the operating cost. Techno-economic methods based on optimisation of heat exchanger sizing allow a device that satisfies the technical specification at the lowest possible operating and investment costs. The logarithmic mean temperature difference method was used for the calculation of the heat exchange area. The new heat exchanger is more profitable and more economic than the old one: the total cost decreased by about 13.16 %, which represents a lump sum of 7,250.8 euros. The design modifications and the use of the Genetic Algorithm for sizing also improve the compactness of the heat exchanger; the study showed that the heat transfer surface area per unit volume can increase to 340 m2/m3.
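    The LMTD sizing step named in the abstract reduces to Q = U · A · ΔT_lm, from which the required heat-exchange area follows. A minimal sketch, with generic symbols not tied to the paper's nomenclature:

```python
# Log-mean temperature difference (LMTD) sizing of a heat exchanger.
import math

def lmtd(dt1, dt2):
    """Log-mean of the two terminal temperature differences (K)."""
    if abs(dt1 - dt2) < 1e-12:
        return dt1                       # equal-ended limit
    return (dt1 - dt2) / math.log(dt1 / dt2)

def required_area(duty_w, u_w_per_m2k, dt1, dt2):
    """Heat-exchange area A (m2) from Q = U * A * LMTD."""
    return duty_w / (u_w_per_m2k * lmtd(dt1, dt2))
```

In an optimisation loop such as the paper's GA, this area feeds the capital-cost term while pressure drops feed the operating-cost term of the total-cost objective.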

  5. Design of Clinical Support Systems Using Integrated Genetic Algorithm and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Fu; Huang, Yung-Fa; Jiang, Xiaoyi; Hsu, Yuan-Nian; Lin, Hsuan-Hung

    Clinical decision support systems (CDSS) provide knowledge and specific information to clinicians, enhancing diagnostic efficiency and improving healthcare quality. An appropriate CDSS can greatly elevate patient safety, improve healthcare quality, and increase cost-effectiveness. The support vector machine (SVM) is believed to be superior to traditional statistical and neural network classifiers. However, it is critical to determine a suitable combination of SVM parameters with regard to classification performance. A genetic algorithm (GA) can find an optimal solution within an acceptable time, and is faster than a greedy algorithm with an exhaustive search strategy. By taking advantage of the GA's ability to quickly select salient features and adjust SVM parameters, a method using an integrated GA and SVM (IGS), which differs from the traditional method of using the GA for feature selection and the SVM for classification, was used to design CDSSs for prediction of successful ventilation weaning, diagnosis of patients with severe obstructive sleep apnea, and discrimination of different cell types from Pap smears. The results show that IGS is better than methods using the SVM alone or a linear discriminator.
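    The joint encoding such an integrated GA-SVM method implies, feature-selection bits evolved together with SVM parameters, can be sketched as below. The chromosome layout, parameter ranges, and the simple elitist loop are our illustrative assumptions; the SVM training that would supply the fitness (e.g. cross-validated accuracy) is left abstract.

```python
# Sketch of a chromosome that jointly encodes feature selection and
# SVM hyperparameters, evolved by a simple elitist GA.
import random

def random_chromosome(n_features):
    return {
        "features": [random.random() < 0.5 for _ in range(n_features)],
        "log2_C": random.uniform(-5, 15),      # SVM cost parameter (assumed range)
        "log2_gamma": random.uniform(-15, 3),  # RBF kernel width (assumed range)
    }

def crossover(a, b):
    """Uniform crossover on bits, blending on the continuous genes."""
    return {
        "features": [fa if random.random() < 0.5 else fb
                     for fa, fb in zip(a["features"], b["features"])],
        "log2_C": (a["log2_C"] + b["log2_C"]) / 2,
        "log2_gamma": (a["log2_gamma"] + b["log2_gamma"]) / 2,
    }

def evolve(fitness, n_features, pop_size=20, generations=30):
    """fitness: chromosome -> score (e.g. cross-validated SVM accuracy)."""
    pop = [random_chromosome(n_features) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # keep the best half
        pop = elite + [crossover(random.choice(elite), random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

Because features and hyperparameters evolve together, a feature subset is always scored with parameters tuned for it, which is the point of integrating the two searches.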

  6. Internal circulating fluidized bed incineration system and design algorithm.

    PubMed

    Tian, W D; Wei, X L; Li, J; Sheng, H Z

    2001-04-01

    The internal circulating fluidized bed (ICFB) system is characterized by fast combustion, low emissions, uniform bed temperature, and a controllable combustion process. It is a novel clean combustion system, especially suited to low-grade fuels such as municipal solid waste (MSW). Experimental ICFB systems with and without combustion were designed and set up in this work. A series of experiments was carried out to further understand the combustion process and the characteristics of several design parameters for MSW. Based on the results, a design routine for the ICFB system is suggested for the calculation of energy balance, airflow rate, heat transfer rate, and geometry arrangement. A test system with an ICFB combustor has been set up, and the test results show that the design of the ICFB system is successful. PMID:11590739

  7. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  8. Design of Protein Multi-specificity Using an Independent Sequence Search Reduces the Barrier to Low Energy Sequences.

    PubMed

    Sevy, Alexander M; Jacobs, Tim M; Crowe, James E; Meiler, Jens

    2015-07-01

    Computational protein design has found great success in engineering proteins for thermodynamic stability, binding specificity, or enzymatic activity in a 'single state' design (SSD) paradigm. Multi-specificity design (MSD), on the other hand, involves considering the stability of multiple protein states simultaneously. We have developed a novel MSD algorithm, which we refer to as REstrained CONvergence in multi-specificity design (RECON). The algorithm allows each state to adopt its own sequence throughout the design process rather than enforcing a single sequence on all states. Convergence to a single sequence is encouraged through an incrementally increasing convergence restraint for corresponding positions. Compared to MSD algorithms that enforce (constrain) an identical sequence on all states, the energy landscape is simplified, which accelerates the search drastically. As a result, RECON can readily be used in simulations with a flexible protein backbone. We have benchmarked RECON on two design tasks. First, we designed antibodies derived from a common germline gene against their diverse targets to assess recovery of the germline, polyspecific sequence. Second, we designed "promiscuous", polyspecific proteins against all binding partners and measured recovery of the native sequence. We show that RECON is able to efficiently recover native-like, biologically relevant sequences in this diverse set of protein complexes. PMID:26147100

  9. Design optimization of a high specific speed Francis turbine runner

    NASA Astrophysics Data System (ADS)

    Enomoto, Y.; Kurosawa, S.; Kawajiri, H.

    2012-11-01

    The Francis turbine is used in many hydroelectric power stations. This paper presents the development of hydraulic performance in a high specific speed Francis turbine runner. In order to achieve improvements in turbine efficiency throughout a wide operating range, a new runner design method which combines the latest Computational Fluid Dynamics (CFD) and a multi-objective optimization method with an existing design system was applied in this study. The validity of the new design system was evaluated by model performance tests. As a result, it was confirmed that the optimized runner presented higher efficiency compared with the originally designed runner. Besides optimization of the runner, the instability vibration which occurred at the high part load operating condition was investigated by model test and gas-liquid two-phase flow analysis. As a result, it was confirmed that the instability vibration was caused by an oval cross section whirl arising from recirculation flow near the runner cone wall.

  10. Analysis of novel low specific speed pump designs

    NASA Astrophysics Data System (ADS)

    Klas, R.; Pochylý, F.; Rudolf, P.

    2014-03-01

    Centrifugal pumps with very low specific speed present significant design challenges. Narrow blade channels, a large surface area of the hub and shroud discs relative to the blade area, and the presence of significant blade channel vortices are typical features linked with the difficulty of achieving head and efficiency requirements for such designs. This paper presents an investigation of two novel designs of very low specific speed impellers: an impeller having blades with very thick trailing edges, and an impeller with thick trailing edges and recirculating channels bored along the impeller circumference. Numerical simulations and experimental measurements were used to study the flow dynamics of these new designs. It was shown that thick trailing edges suppress local eddies in the blade channels and decrease energy dissipation due to excessive swirling. Furthermore, the recirculating channels increase the circumferential velocity component at the impeller outlet, thus increasing the specific energy, albeit adversely affecting the hydraulic efficiency. Analysis of the energy dissipation in the volute showed that the number of recirculating channels, their geometry, and their location all have a significant impact on the magnitude of dissipated energy and its distribution, which in turn influences the shape of the head curve and the stability of pump operation. Energy dissipation within the whole pump interior (blade channels, volute, rotor-stator gaps) was also studied.

  11. Design of a gradient-index beam shaping system via a genetic algorithm optimization method

    NASA Astrophysics Data System (ADS)

    Evans, Neal C.; Shealy, David L.

    2000-10-01

    Geometrical optics - the laws of reflection and refraction, ray tracing, conservation of energy within a bundle of rays, and the condition of constant optical path length - provides a foundation for design of laser beam shaping systems. This paper explores the use of machine learning techniques, concentrating on genetic algorithms, to design laser beam shaping systems using geometrical optics. Specifically, a three-element GRIN laser beam shaping system has been designed to expand and transform a Gaussian input beam profile into one with a uniform irradiance profile. Solution to this problem involves the constrained optimization of a merit function involving a mix of discrete and continuous parameters. The merit function involves terms that measure the deviation of the output beam diameter, divergence, and irradiance from target values. The continuous parameters include the distances between the lens elements, the thickness, and radii of the lens elements. The discrete parameters include the GRIN glass types from a manufacturer's database, the gradient direction of the GRIN elements (positive or negative), and the actual number of lens elements in the system (one to four).
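    A merit function of the kind described, combining deviations of output beam diameter, divergence, and irradiance uniformity from their targets, might look like the following weighted sum of squares. The weights, argument names, and the use of an RMS non-uniformity term are illustrative placeholders, not the authors' actual formulation.

```python
# Illustrative merit function for a beam-shaper design (lower is better).
def merit(diameter, divergence, irradiance_rms,
          target_diameter, target_divergence,
          w_d=1.0, w_v=1.0, w_i=1.0):
    """Penalize deviation from the target diameter and divergence, and
    residual non-uniformity of the output irradiance (RMS about uniform)."""
    return (w_d * (diameter - target_diameter) ** 2
            + w_v * (divergence - target_divergence) ** 2
            + w_i * irradiance_rms ** 2)
```

A genetic algorithm is a natural fit here because the chromosome mixes continuous genes (spacings, thicknesses, radii) with discrete ones (catalog glass type, gradient sign, element count), which gradient-based optimizers handle poorly.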

  12. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell has been designed and tested to deliver high capacity at a C/1.5 discharge rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet made at a discharge rate this high in the 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters, performance, and future test plans are described.

  13. Space tug thermal control. [design criteria and specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    It was determined that the space tug will require the capability to perform its mission within a broad range of thermal environments, with currently planned mission durations of up to seven days. An investigation was therefore conducted to define a thermal design for the forward and intertank compartments and the fuel cell heat rejection system that satisfies tug requirements for low-inclination geosynchronous deploy and retrieve missions. Passive concepts were demonstrated analytically for both the forward and intertank compartments, and a worst-case external heating environment was determined for use during the study. The thermal control system specifications and designs which resulted from the research are shown.

  14. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Irwin, Ryan W.; Tinker, Michael L.

    2005-02-01

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.
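    The penalty-function approach with scaling mentioned above can be sketched as a feasibility-weighted fitness: constraint violations are scaled and subtracted from the raw objective so that infeasible designs are ranked below feasible ones without being discarded outright. The function name and scaling constant here are illustrative, not from the study.

```python
# Illustrative penalty-function fitness for a constrained genetic algorithm.
def penalized_fitness(objective, violations, scale=10.0):
    """objective: raw figure of merit (higher is better).
    violations: non-negative constraint violations (0 = satisfied).
    The scaling ratio trades exploration of infeasible designs against
    pressure toward feasibility."""
    return objective - scale * sum(violations)
```

Tuning the scale is the computational-efficiency lever the abstract alludes to: too small and the GA lingers among infeasible designs, too large and it abandons promising regions near constraint boundaries.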

  15. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    SciTech Connect

    Irwin, Ryan W.; Tinker, Michael L.

    2005-02-06

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.

  16. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Irwin, Ryan W.; Tinker, Michael L.

    2005-01-01

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.

  17. Overlay measurement accuracy enhancement by design and algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Lee, Byongseog; Han, Sangjun; Kim, Myoungsoo; Kwon, Wontaik; Park, Sungki; Choi, DongSub; Lee, Dohwa; Jeon, Sanghuck; Lee, Kangsan; Itzkovich, Tal; Amir, Nuriel; Volkovich, Roie; Herzel, Eitan; Wagner, Mark; El Kodadi, Mohamed

    2015-03-01

    Advanced design nodes require more complex lithography techniques, such as double patterning, as well as advanced materials like hard masks. This poses new challenges for overlay metrology and process control. In this publication, several steps are taken to address these challenges, and accurate overlay metrology solutions are demonstrated for advanced memory devices.

  18. Computational model design specification for Phase 1 of the Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.

    1991-07-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions from nuclear operations at Hanford since their inception in 1944. The purpose of this report is to outline the basic algorithm and the necessary computer calculations to be used to calculate radiation doses to specific and hypothetical individuals in the vicinity of Hanford. The system design requirements, those things that must be accomplished, are defined. The system design specifications, the techniques by which those requirements are met, are outlined. Included are the basic equations, logic diagrams, and preliminary definitions of the nature of each input distribution. 4 refs., 10 figs., 9 tabs.
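    The dose chain such a specification outlines, environmental concentration times intake rate times exposure period times a dose conversion factor, can be sketched as a single product. The factor names and test values below are illustrative, not the HEDR equations or input distributions.

```python
# Schematic ingestion-pathway dose: concentration x intake x time x factor.
def committed_dose(concentration_bq_per_kg, intake_kg_per_day,
                   days, dose_factor_sv_per_bq):
    """Committed dose (Sv) from ingesting a contaminated medium for `days`."""
    return (concentration_bq_per_kg * intake_kg_per_day
            * days * dose_factor_sv_per_bq)
```

In a full reconstruction each factor would be a distribution rather than a point value, which is why the report specifies the nature of each input distribution.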

  19. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of ''consensus scoring'', i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  20. Dye laser amplifier including a specifically designed diffuser assembly

    DOEpatents

    Davin, James; Johnston, James P.

    1992-01-01

    A large (high flow rate) dye laser amplifier is disclosed herein, in which a continuously replenished supply of dye is excited by a first light beam, specifically a copper vapor laser beam, in order to amplify the intensity of a second, different light beam, specifically a dye beam, passing through the dye. This amplifier includes a dye cell defining a dye chamber through which a continuous stream of dye is caused to pass at a relatively high flow rate, and a specifically designed diffuser assembly for slowing down the flow of dye while, at the same time, assuring that the dye stream flows through the diffuser assembly in a stable manner.

  1. On Polymorphic Circuits and Their Design Using Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper introduces the concept of polymorphic electronics (polytronics) - referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead the change comes from modifications in the characteristics of devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality, thus polytronics may find uses in intelligence/security applications.

  2. A Computer Environment for Beginners' Learning of Sorting Algorithms: Design and Pilot Evaluation

    ERIC Educational Resources Information Center

    Kordaki, M.; Miatidis, M.; Kapsampelis, G.

    2008-01-01

    This paper presents the design, features and pilot evaluation study of a web-based environment--the SORTING environment--for the learning of sorting algorithms by secondary level education students. The design of this environment is based on modeling methodology, taking into account modern constructivist and social theories of learning while at…

  3. Sensitivity of snow density and specific surface area measured by microtomography to different image processing algorithms

    NASA Astrophysics Data System (ADS)

    Hagenmuller, Pascal; Matzl, Margret; Chambon, Guillaume; Schneebeli, Martin

    2016-05-01

    Microtomography can measure the X-ray attenuation coefficient in a 3-D volume of snow with a spatial resolution of a few microns. In order to extract quantitative characteristics of the microstructure, such as the specific surface area (SSA), from these data, the greyscale image first needs to be segmented into a binary image of ice and air. Different numerical algorithms can then be used to compute the surface area of the binary image. In this paper, we report on the effect of commonly used segmentation and surface-area computation techniques on the evaluation of density and specific surface area. The evaluation is based on a set of 38 X-ray tomographies of different snow samples without impregnation, scanned with effective voxel sizes of 10 and 18 μm. We found that different surface-area computation methods can induce relative variations of up to 5% in the density and SSA values. Regarding segmentation, similar results were obtained by sequential and energy-based approaches, provided the associated parameters were correctly chosen. The voxel size also appears to affect the values of density and SSA, but because the higher-resolution images also show a higher noise level, it was not possible to draw a definitive conclusion on this effect of resolution.
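    The pipeline the abstract describes (binary ice/air image, then density and surface-area computation) can be sketched with the crudest surface estimator, voxel-face counting. This is a minimal sketch, not the paper's algorithms: the function name and the pure-ice density are assumptions, and face counting systematically overestimates the area of tilted interfaces, which is exactly the kind of estimator-dependent bias behind the reported ~5% spread.

```python
import numpy as np

def density_and_ssa(ice, voxel_size, rho_ice=917.0):
    """Estimate snow density (kg/m^3) and SSA (m^2/kg) from a binary
    3-D voxel image (True = ice) by counting exposed ice/air voxel
    faces. Illustrative only; smoother estimators (e.g. marching
    cubes) reduce the face-counting area bias."""
    ice = np.asarray(ice, dtype=bool)
    density = ice.mean() * rho_ice            # ice volume fraction * ice density
    faces = 0
    for axis in range(3):                     # count ice/air transitions per axis
        a = np.swapaxes(ice, 0, axis)
        faces += np.count_nonzero(a[1:] != a[:-1])
    area = faces * voxel_size ** 2            # total interface area, m^2
    ice_mass = ice.sum() * voxel_size ** 3 * rho_ice
    ssa = area / ice_mass if ice_mass > 0 else 0.0
    return density, ssa
```

A single ice voxel exposes 6 faces, which fixes the expected SSA exactly and makes the estimator easy to sanity-check.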

  4. Algorithm Design on Network Game of Chinese Chess

    NASA Astrophysics Data System (ADS)

    Xianmei, Fang

    This paper describes the current situation of domestic network games and, against that background, studies the design of a multithreaded TCP client/server game of Chinese chess. Building on the basics of Java, it examines object-oriented programming with Java Swing and the methods and processes of network programming, including the use of sockets under Java Swing, and explains the overall procedure for writing a networked program in Java. The central question is how communication between a pair of machines is realized in the client/server (C/S) model. On this basis, the paper presents the data structures and basic algorithms of the network Chinese chess game, and shows how to design and implement its server and client. The online chess game is divided into the following modules: a server module, a client module, and a control module.

  5. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
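    The first model is described as a generalization of Hockney's approach. Hockney's classic two-parameter characterization, with an asymptotic rate r∞ and a half-performance length n½, can be sketched as follows; this is the standard textbook form, not the paper's generalized model:

```python
def hockney_time(n, r_inf, n_half):
    """Hockney model: time to move/process n words, given the
    asymptotic rate r_inf (words/s) and the vector length n_half at
    which half that rate is achieved. The implied startup cost is
    n_half / r_inf."""
    return (n + n_half) / r_inf

def effective_rate(n, r_inf, n_half):
    """Achieved rate for a transfer of n words."""
    return n / hockney_time(n, r_inf, n_half)
```

By construction, a transfer of exactly n_half words runs at half the asymptotic rate, which is the defining property of the parameter.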

  6. A novel method to design S-box based on chaotic map and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Wong, Kwok-Wo; Li, Changbing; Li, Yang

    2012-01-01

    The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing an S-box is transformed into a Traveling Salesman Problem, and a method for designing S-boxes based on chaos and a genetic algorithm is proposed. Since the proposed method makes full use of the traits of the chaotic map and the evolution process, a stronger S-box is obtained. The results of performance tests show that the presented S-box has good cryptographic properties, which justifies that the proposed algorithm is effective in generating strong S-boxes.
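    The abstract gives no implementation details, but the chaos half of such a method can be illustrated by ranking a chaotic trajectory to obtain a bijective 8-bit S-box. The logistic map, its parameters, and the function name here are assumptions, and the genetic-algorithm/TSP refinement stage of the Letter is omitted:

```python
import numpy as np

def chaotic_sbox(x0=0.3, mu=3.99, size=256):
    """Generate an 8-bit S-box (a permutation of 0..255) by ranking a
    logistic-map trajectory: output i is the rank of the i-th chaotic
    sample. Ranking guarantees bijectivity, a basic S-box requirement."""
    x, traj = x0, []
    for _ in range(size):
        x = mu * x * (1.0 - x)        # logistic map iteration
        traj.append(x)
    return [int(i) for i in np.argsort(traj)]

sbox = chaotic_sbox()
```

In the Letter's scheme a GA would then reorder candidate boxes like this one to improve nonlinearity and other cryptographic criteria.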

  7. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    Chang, Wei-Der

    2015-01-01

    This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm: an additional adjusting factor is introduced into the velocity updating formula in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with a modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different linear phase response design examples are illustrated, with the general PSO algorithm included for comparison. The obtained results show that the MPSO is superior to the general PSO for the phase response design of recursive all-pass digital filters. PMID:26366168
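    The shape of such a modified velocity update can be sketched as follows. The abstract does not specify the extra adjusting factor, so the c3 term (pulling toward the swarm mean) is a stand-in assumption; everything else is the standard PSO skeleton, demonstrated on a sphere objective rather than a filter phase-error objective:

```python
import random

def mpso(objective, dim, n_particles=20, iters=300,
         w=0.7, c1=1.4, c2=1.4, c3=0.3, vmax=2.0, seed=1):
    """Sketch of a modified PSO for minimization: inertia (w),
    cognitive (c1), social (c2), plus an extra adjusting term (c3,
    hypothetical) in the velocity formula. Velocities are clamped."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        mean = [sum(p[d] for p in pos) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                r1, r2, r3 = rng.random(), rng.random(), rng.random()
                v = (w * vel[i][d]
                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d])
                     + c3 * r3 * (mean[d] - pos[i][d]))  # extra adjusting term
                vel[i][d] = max(-vmax, min(vmax, v))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For the paper's problem, the particle would be the vector of all-pass coefficients and the objective a phase-response error measure.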

  8. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  9. Field Programmable Gate Array Based Parallel Strapdown Algorithm Design for Strapdown Inertial Navigation Systems

    PubMed Central

    Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua

    2011-01-01

    A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure in which all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Different from existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental angle and accelerometer incremental velocity samples, in order to improve system accuracy. Then, in order to implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm to meet the real-time and high-precision requirements of the system in a high-dynamic environment, relative to the existing implementation on the DSP platform. PMID:22164058
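    For orientation, the coning compensation the abstract refers to can be illustrated with the classical two-sample correction: the attitude rotation vector over an update interval is the sum of the gyro incremental angles plus a cross-product correction. This is the textbook two-sample formula (coefficient 2/3), not the paper's generalized single-speed structure:

```python
import numpy as np

def coning_update(dtheta_samples):
    """Two-sample coning compensation sketch: rotation vector
    phi = th1 + th2 + (2/3) th1 x th2, where th1, th2 are consecutive
    gyro incremental-angle vectors. The cross term corrects for
    non-commutativity of rotations (coning motion)."""
    th1, th2 = (np.asarray(s, dtype=float) for s in dtheta_samples)
    return th1 + th2 + (2.0 / 3.0) * np.cross(th1, th2)
```

When the two increments are parallel (no coning), the correction vanishes and the update reduces to a plain sum.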

  10. Field programmable gate array based parallel strapdown algorithm design for strapdown inertial navigation systems.

    PubMed

    Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua

    2011-01-01

    A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure in which all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Different from existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental angle and accelerometer incremental velocity samples, in order to improve system accuracy. Then, in order to implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm to meet the real-time and high-precision requirements of the system in a high-dynamic environment, relative to the existing implementation on the DSP platform. PMID:22164058

  11. An Annealing Algorithm for Designing Ligands from Receptor Structures.

    NASA Astrophysics Data System (ADS)

    Zielinski, Peter J.

    DE NOVO, a simulated annealing method for designing ligands, is described. At a given temperature, ligand fragments are randomly selected and randomly placed within the given receptor cavity, often replacing existing ligand fragments. For each new ligand fragment combination, bonded, nonbonded, polarization and solvation energies of the new ligand-receptor system are compared to the previous system. Acceptance or rejection of the new system is decided using the Boltzmann distribution. Thus, energetically unfavorable fragment switches are sometimes accepted, sacrificing immediate energy gains in the interest of finding the system with the globally minimum energy. By lowering the temperature, the rate of unfavorable switches decreases and energetically favorable combinations become difficult to change. The process is halted when the frequency of switches becomes too small. As a test of the method, DE NOVO predicted the positions of important ligand fragments for neuraminidase that are in accord with the natural ligand, sialic acid.
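    The annealing loop described here (random switches, Boltzmann acceptance, cooling until switches become rare) is the generic simulated-annealing scheme, which can be sketched as follows. The ligand-receptor energetics are replaced by an arbitrary `energy` callback; all names and the geometric cooling schedule are illustrative assumptions:

```python
import math
import random

def simulated_anneal(energy, neighbor, state, t0=1.0, t_min=1e-3,
                     alpha=0.95, steps_per_t=50, seed=0):
    """Generic simulated annealing with Boltzmann acceptance: worse
    moves are accepted with probability exp(-dE/T); T is lowered
    geometrically until unfavorable switches effectively stop."""
    rng = random.Random(seed)
    e = energy(state)
    best, best_e = state, e
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = neighbor(state, rng)          # e.g. a fragment switch
            de = energy(cand) - e
            if de <= 0 or rng.random() < math.exp(-de / t):
                state, e = cand, e + de
                if e < best_e:
                    best, best_e = state, e
        t *= alpha                               # cooling schedule
    return best, best_e
```

A toy usage: minimizing (x - 3)^2 over the integers with a ±1 neighborhood finds the global minimum despite occasional uphill moves early on.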

  12. Evolutionary algorithm for the neutrino factory front end design

    SciTech Connect

    Poklonskiy, Alexey A.; Neuffer, David; /Fermilab

    2009-01-01

    The Neutrino Factory is an important tool in the long-term neutrino physics program. Substantial effort is put in internationally to designing this facility in order to achieve the desired performance within the allotted budget. This accelerator is a secondary-beam machine: neutrinos are produced by the decay of muons. Muons, in turn, are produced by the decay of pions, which are generated when the target is hit by a beam of accelerated protons. Due to the physics of this process, extra conditioning of the pion beam coming from the target is needed in order to carry out the subsequent acceleration effectively. The subsystem of the Neutrino Factory that performs this conditioning is called the Front End; its main performance characteristic is the number of muons produced.

  13. Optimization of thin noise barrier designs using Evolutionary Algorithms and a Dual BEM Formulation

    NASA Astrophysics Data System (ADS)

    Toledo, R.; Aznárez, J. J.; Maeso, O.; Greiner, D.

    2015-01-01

    This work aims at assessing the acoustic efficiency of different thin noise barrier models. These designs frequently feature complex profiles, and their implementation in shape optimization processes may not always be easy in terms of determining their topological feasibility. A methodology is proposed to conduct both overall-shape and top-edge optimizations of thin cross-section acoustic barriers by idealizing them as profiles with null boundary thickness. This procedure is based on the maximization of the insertion loss of candidate profiles proposed by an evolutionary algorithm. The special nature of these sorts of barriers makes it necessary to implement a formulation complementary to the classical Boundary Element Method (BEM). Numerical simulations of the barriers' performance are conducted using a 2D Dual BEM code in eight different barrier configurations (covering overall-shape and top-edge configurations; spline-curved and polynomial-shaped designs; rigid and noise-absorbing boundary materials). While results are obtained using a specific receiver scheme, the influence of receiver location on the acoustic performance is addressed first. With the purpose of testing the methodology presented here, a numerical model validation on the basis of experimental results from a scale model test [34] is conducted. The results obtained show the usefulness of representing complex thin barrier configurations as null-boundary-thickness models.

  14. The GLAS Science Algorithm Software (GSAS) Detailed Design Document Version 6. Volume 16

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document describes the detailed design of the GLAS Science Algorithm Software (GSAS). The GSAS is used to create the ICESat GLAS standard data products. The National Snow and Ice Data Center (NSIDC) distributes these products. The document contains descriptions, flow charts, data flow diagrams, and structure charts for each major component of the GSAS. The purpose of this document is to present the detailed design of the GSAS. It is intended as a reference source to assist the maintenance programmer in making changes that fix or enhance the documented software.

  15. The Design of Flux-Corrected Transport (FCT) Algorithms for Structured Grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. This chapter confines itself to the design of FCT algorithms for structured grids, using a finite volume formalism, for this is the area with which the present author is most familiar. The reader will find excellent material on the design of FCT algorithms for unstructured grids, using both finite volume and finite element formalisms, in the chapters by Professors Löhner, Baum, Kuzmin, Turek, and Möller in the present volume.
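    The three FCT components (high-order flux, low-order flux, flux limiter) can be sketched in one dimension for linear advection on a periodic structured grid. This is a minimal illustration, not the chapter's optimized variants: donor-cell upwind serves as the low-order scheme, Lax-Wendroff as the high-order scheme, and a Zalesak-style limiter scales the antidiffusive face fluxes so no new extrema appear:

```python
import numpy as np

def fct_advect_step(u, c):
    """One FCT update for 1-D periodic advection at Courant number c
    (0 < c <= 1). Flux at face i-1/2 is stored at index i; the update
    u -= F_right - F_left is conservative by construction."""
    up = np.roll(u, 1)                                     # u_{i-1}
    f_low = c * up                                         # upwind flux (c > 0)
    f_high = 0.5 * c * (up + u) - 0.5 * c * c * (u - up)   # Lax-Wendroff flux
    a = f_high - f_low                                     # antidiffusive flux
    u_td = u - (np.roll(f_low, -1) - f_low)                # low-order solution
    u_max = np.maximum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
    u_min = np.minimum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
    p_in = np.maximum(a, 0.0) - np.minimum(np.roll(a, -1), 0.0)   # inflow to cell i
    p_out = np.maximum(np.roll(a, -1), 0.0) - np.minimum(a, 0.0)  # outflow from i
    eps = 1e-15
    r_plus = np.minimum(1.0, (u_max - u_td) / (p_in + eps))
    r_minus = np.minimum(1.0, (u_td - u_min) / (p_out + eps))
    # each face flux is limited by the stricter of its two adjacent cells
    cf = np.where(a >= 0.0,
                  np.minimum(r_plus, np.roll(r_minus, 1)),
                  np.minimum(np.roll(r_plus, 1), r_minus))
    fa = cf * a
    return u_td - (np.roll(fa, -1) - fa)
```

Advecting a square pulse shows the two guarantees the limiter buys: exact conservation of the total, and no overshoots beyond the initial bounds.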

  16. RNAiFOLD: a constraint programming algorithm for RNA inverse folding and molecular design.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-04-01

    Synthetic biology is a rapidly emerging discipline with long-term ramifications that range from single-molecule detection within cells to the creation of synthetic genomes and novel life forms. Truly phenomenal results have been obtained by pioneering groups--for instance, the combinatorial synthesis of genetic networks, genome synthesis using BioBricks, and hybridization chain reaction (HCR), in which stable DNA monomers assemble only upon exposure to a target DNA fragment, biomolecular self-assembly pathways, etc. Such work strongly suggests that nanotechnology and synthetic biology together seem poised to constitute the most transformative development of the 21st century. In this paper, we present a Constraint Programming (CP) approach to solve the RNA inverse folding problem. Given a target RNA secondary structure, we determine an RNA sequence which folds into the target structure; i.e. whose minimum free energy structure is the target structure. Our approach represents a step forward in RNA design--we produce the first complete RNA inverse folding approach which allows for the specification of a wide range of design constraints. We also introduce a Large Neighborhood Search approach which allows us to tackle larger instances at the cost of losing completeness, while retaining the advantages of meeting design constraints (motif, GC-content, etc.). Results demonstrate that our software, RNAiFold, performs as well or better than all state-of-the-art approaches; nevertheless, our approach is unique in terms of completeness, flexibility, and the support of various design constraints. The algorithms presented in this paper are publicly available via the interactive webserver http://bioinformatics.bc.edu/clotelab/RNAiFold; additionally, the source code can be downloaded from that site. PMID:23600819

  17. Epitope prediction algorithms for peptide-based vaccine design.

    PubMed

    Florea, Liliana; Halldórsson, Bjarni; Kohlbacher, Oliver; Schwartz, Russell; Hoffman, Stephen; Istrail, Sorin

    2003-01-01

    Peptide-based vaccines, in which small peptides derived from target proteins (epitopes) are used to provoke an immune reaction, have attracted considerable attention recently as a potential means both of treating infectious diseases and promoting the destruction of cancerous cells by a patient's own immune system. With the availability of large sequence databases and computers fast enough for rapid processing of large numbers of peptides, computer-aided design of peptide-based vaccines has emerged as a promising approach to screening among billions of possible immune-active peptides to find those likely to provoke an immune response to a particular cell type. In this paper, we describe the development of three novel classes of methods for the prediction problem. We present a quadratic programming approach that can be trained on quantitative as well as qualitative data. The second method uses linear programming to counteract the fact that our training data contains mostly positive examples. The third class of methods uses sequence profiles obtained by clustering known epitopes to score candidate peptides. By integrating these methods, using a simple voting heuristic, we achieve improved accuracy over the state of the art. PMID:16826643
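    The third class of methods, scoring candidates against sequence profiles built from clustered known epitopes, can be sketched as a log-odds position-specific scoring matrix. This is a generic PSSM sketch, not the paper's implementation; the uniform amino-acid background and unit pseudocounts are assumptions:

```python
import math

AA = "ACDEFGHIKLMNPQRSTVWY"

def build_profile(epitopes, pseudo=1.0):
    """Build a log-odds profile from equal-length known epitopes:
    per-position residue frequencies (with pseudocounts) over a
    uniform background of the 20 amino acids."""
    length = len(epitopes[0])
    bg = 1.0 / len(AA)
    profile = []
    for pos in range(length):
        counts = {a: pseudo for a in AA}
        for ep in epitopes:
            counts[ep[pos]] += 1
        total = sum(counts.values())
        profile.append({a: math.log((counts[a] / total) / bg) for a in AA})
    return profile

def score_peptide(profile, peptide):
    """Sum of per-position log-odds scores for a candidate peptide."""
    return sum(col[a] for col, a in zip(profile, peptide))
```

Candidates resembling the epitope cluster score above zero; unrelated sequences score below, giving a simple ranking for screening.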

  18. Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft

    PubMed Central

    Ning, Xin; Yuan, Jianping; Yue, Xiaokui

    2016-01-01

    A fractionated spacecraft is an innovative application of a distributive space system. To fully understand the impact of various uncertainties on its development, launch, and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability, and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe its concept, review the evaluation and optimal design methods of recent years, and propose the stochastic mission-cycle cost for comprehensive evaluation. We also establish models of the costs, such as module development, launch, and deployment, and of the impacts of their respective uncertainties. Finally, we carry out a Monte Carlo simulation of the complete mission-cycle costs of various configurations of the fractionated spacecraft under various uncertainties, and present and compare the probability density distribution and statistical characteristics of the stochastic mission-cycle cost under the two strategies of timed and non-timed module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that it can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions. PMID:26964755

  19. Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft.

    PubMed

    Ning, Xin; Yuan, Jianping; Yue, Xiaokui

    2016-01-01

    A fractionated spacecraft is an innovative application of a distributive space system. To fully understand the impact of various uncertainties on its development, launch, and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability, and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe its concept, review the evaluation and optimal design methods of recent years, and propose the stochastic mission-cycle cost for comprehensive evaluation. We also establish models of the costs, such as module development, launch, and deployment, and of the impacts of their respective uncertainties. Finally, we carry out a Monte Carlo simulation of the complete mission-cycle costs of various configurations of the fractionated spacecraft under various uncertainties, and present and compare the probability density distribution and statistical characteristics of the stochastic mission-cycle cost under the two strategies of timed and non-timed module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that it can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions. PMID:26964755

  20. Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft

    NASA Astrophysics Data System (ADS)

    Ning, Xin; Yuan, Jianping; Yue, Xiaokui

    2016-03-01

    A fractionated spacecraft is an innovative application of a distributive space system. To fully understand the impact of various uncertainties on its development, launch, and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability, and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe its concept, review the evaluation and optimal design methods of recent years, and propose the stochastic mission-cycle cost for comprehensive evaluation. We also establish models of the costs, such as module development, launch, and deployment, and of the impacts of their respective uncertainties. Finally, we carry out a Monte Carlo simulation of the complete mission-cycle costs of various configurations of the fractionated spacecraft under various uncertainties, and present and compare the probability density distribution and statistical characteristics of the stochastic mission-cycle cost under the two strategies of timed and non-timed module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that it can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions.

  1. Design of a blade stiffened composite panel by a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Nagendra, S.; Haftka, R. T.; Gurdal, Z.

    1993-01-01

    Genetic algorithms (GAs) readily handle discrete problems, and can be made to generate many optima, as is presently illustrated for the case of design for minimum-weight stiffened panels with buckling constraints. The GA discrete design procedure proved superior to extant alternatives for both stiffened panels with cutouts and without cutouts. High computational costs are, however, associated with this discrete design approach at the current level of its development.
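    The GA discrete design procedure can be sketched generically: discrete genes, tournament selection, one-point crossover, and per-gene mutation. The stiffened-panel weight objective and buckling constraints are not reproduced here; `fitness` stands for any penalty-augmented objective to minimize, and all names are illustrative:

```python
import random

def genetic_design(fitness, choices, n_genes, pop_size=30, gens=100,
                   p_mut=0.1, seed=0):
    """Minimal GA over discrete design variables (e.g. gauges from a
    catalog): tournament selection, one-point crossover, per-gene
    mutation, with the best-ever design tracked across generations."""
    rng = random.Random(seed)
    pop = [[rng.choice(choices) for _ in range(n_genes)]
           for _ in range(pop_size)]
    best = min(pop, key=fitness)
    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b
    for _ in range(gens):
        new = []
        while len(new) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_genes)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [rng.choice(choices) if rng.random() < p_mut else g
                     for g in child]                 # per-gene mutation
            new.append(child)
        pop = new
        gen_best = min(pop, key=fitness)
        if fitness(gen_best) < fitness(best):
            best = gen_best
    return best
```

Because genes stay in the discrete catalog throughout, no rounding of continuous optima is needed, which is the advantage the abstract highlights.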

  2. Design of a blade stiffened composite panel by a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Nagendra, S.; Haftka, R. T.; Gurdal, Z.

    1993-04-01

    Genetic algorithms (GAs) readily handle discrete problems, and can be made to generate many optima, as is presently illustrated for the case of design for minimum-weight stiffened panels with buckling constraints. The GA discrete design procedure proved superior to extant alternatives for both stiffened panels with cutouts and without cutouts. High computational costs are, however, associated with this discrete design approach at the current level of its development.

  3. Rational Design of Antirheumatic Prodrugs Specific for Sites of Inflammation

    PubMed Central

    Onuoha, Shimobi C.; Ferrari, Mathieu; Sblattero, Daniele

    2015-01-01

    Objective Biologic drugs, such as the anti–tumor necrosis factor (anti‐TNF) antibody adalimumab, have represented a breakthrough in the treatment of rheumatoid arthritis. Yet, concerns remain over their lack of efficacy in a sizable proportion of patients and their potential for systemic side effects such as infection. Improved biologic prodrugs specifically targeted to the site of inflammation have the potential to alleviate current concerns surrounding biologic anticytokine therapies. The purpose of this study was to design, construct, and evaluate in vitro and ex vivo the targeting and antiinflammatory capacity of activatable bispecific antibodies. Methods Activatable dual variable domain (aDVD) antibodies were designed and constructed to target intercellular adhesion molecule 1 (ICAM‐1), which is up‐regulated at sites of inflammation, and anti‐TNF antibodies (adalimumab and infliximab). These bispecific molecules included an external arm that targets ICAM‐1 and an internal arm that comprises the therapeutic domain of an anti‐TNF antibody. Both arms were linked to matrix metalloproteinase (MMP)–cleavable linkers. The constructs were tested for their ability to bind and neutralize both in vitro and ex vivo targets. Results Intact aDVD constructs demonstrated significantly reduced binding and anti‐TNF activity in the prodrug formulation as compared to the parent antibodies. Human synovial fluid and physiologic concentrations of MMP enzyme were capable of cleaving the external domain of the antibody, revealing a fully active molecule. Activated antibodies retained the same binding and anti‐TNF inhibitory capacities as the parent molecules. Conclusion The design of a biologic prodrug with enhanced specificity for sites of inflammation (synovium) and reduced specificity for off‐target TNF is described. This construct has the potential to form a platform technology that is capable of enhancing the therapeutic index of drugs for the treatment of

  4. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling and its successful recovery by means of additional activation procedures.

  5. Advanced Wet Tantalum Capacitors: Design, Specifications and Performance

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2016-01-01

Insertion of new types of commercial, high volumetric efficiency wet tantalum capacitors in space systems requires reassessment of the existing quality assurance approaches that were developed for capacitors manufactured to MIL-PRF-39006 requirements. A specific feature of wet electrolytic capacitors is that leakage currents flowing through the electrolyte can cause gas generation, resulting in a build-up of internal gas pressure and rupture of the case. The risk associated with excessive leakage currents and increased pressure is greater for high-value advanced wet tantalum capacitors, but it has not yet been properly evaluated. This presentation reviews the specifics of the design, performance, and potential reliability risks associated with advanced wet tantalum capacitors. Problems related to setting adequate requirements for DPA, leakage currents, hermeticity, stability at low and high temperatures, ripple currents for parts operating in vacuum, and random vibration testing are discussed. Recommendations for screening and qualification to reduce the risk of failures are suggested.

  6. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
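
    As background, the checksum encoding at the heart of ABFT can be illustrated with the classic Huang–Abraham scheme for matrix multiplication (a standard textbook example, not the matrix-based model this paper develops). A sketch in Python:

    ```python
    # Classic ABFT checksum scheme for C = A @ B: A is extended with a
    # column-checksum row, B with a row-checksum column; a single corrupted
    # element of the product is then located at the intersection of the
    # inconsistent row checksum and column checksum, and corrected.

    def matmul(A, B):
        n, m, p = len(A), len(B), len(B[0])
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
                for i in range(n)]

    def column_checksum(A):           # append a row of column sums
        return A + [[sum(col) for col in zip(*A)]]

    def row_checksum(B):              # append a column of row sums
        return [row + [sum(row)] for row in B]

    def locate_error(Cf):
        # Cf is a full checksum matrix: last row/column hold the expected sums
        n = len(Cf) - 1
        bad_row = [i for i in range(n)
                   if abs(sum(Cf[i][:n]) - Cf[i][n]) > 1e-9]
        bad_col = [j for j in range(n)
                   if abs(sum(Cf[i][j] for i in range(n)) - Cf[n][j]) > 1e-9]
        if len(bad_row) == 1 and len(bad_col) == 1:
            i, j = bad_row[0], bad_col[0]
            # correct the element from the (intact) row checksum
            Cf[i][j] = Cf[i][n] - sum(Cf[i][k] for k in range(n) if k != j)
            return (i, j)
        return None

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    Cf = matmul(column_checksum(A), row_checksum(B))  # full checksum product
    Cf[0][1] += 10                                    # inject a transient fault
    pos = locate_error(Cf)                            # detect and correct it
    ```

    The checking is concurrent in the sense that the checksum rows and columns are produced by the same multiplication that produces the data, so no separate recomputation is needed.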

  7. Modular Integrated Stackable Layers (MISL) 1.1 Design Specification. Design Guideline Document

    NASA Technical Reports Server (NTRS)

    Yim, Hester J.

    2012-01-01

This document establishes the design guidelines for the Modular Instrumentation Data Acquisition (MI-DAQ) system, utilizing several designs available in EV. The MI-DAQ provides options to customers depending on their system requirements, e.g., a 28 V interface power supply, a low-power battery-operated system, a low-power microcontroller, a higher-performance microcontroller, a USB interface, an Ethernet interface, wireless communication, various sensor interfaces, etc. Depending on the customer's requirements, functional boards can be stacked, from a power supply board at the bottom of the stack to user-interface boards at the top. The stacking of boards is accomplished through predefined, standardized power bus and data bus connections, which are described in this document along with other physical and electrical guidelines. The guideline also provides information for new design options. This specification is the product of a collaboration between NASA/JSC/EV and Texas A&M University. The goal of the collaboration is to open-source the specification and allow outside entities to design, build, and market modules that are compatible with it. NASA has designed and is using numerous modules that are compatible with this specification. A limited number of these modules will also be released as open-source designs to support the collaboration. The released designs are listed in the Applicable Documents.

  8. Nuclear Electric Vehicle Optimization Toolset (NEVOT): Integrated System Design Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg

    2003-01-01

    The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.

  9. Design and Implementation of an On-Chip Patient-Specific Closed-Loop Seizure Onset and Termination Detection System.

    PubMed

    Zhang, Chen; Bin Altaf, Muhammad Awais; Yoo, Jerald

    2016-07-01

This paper presents the design of an area- and energy-efficient closed-loop machine learning-based patient-specific seizure onset and termination detection algorithm, and its on-chip hardware implementation. Application- and scenario-based tradeoffs are compared and reviewed for the seizure detection and suppression algorithm and system, which comprises electroencephalography (EEG) data acquisition, feature extraction, classification, and stimulation. The support vector machine achieves a good tradeoff among power, area, patient specificity, latency, and classification accuracy for long-term monitoring of patients with limited training seizure patterns. Design challenges of EEG data acquisition on a multichannel wearable environment for a patch-type sensor are also discussed in detail. The dual-detector architecture incorporates two area-efficient linear support vector machine classifiers along with a weight-and-average algorithm to target high sensitivity and good specificity at once. On-chip implementation issues for patient-specific transcranial electrical stimulation are also discussed. The system design is verified using the CHB-MIT EEG database [1] with comprehensive measurement criteria, achieving high sensitivity and specificity of 95.1% and 96.2%, respectively, with a small latency of 1 s. It also achieves seizure onset and termination detection delays of 2.98 and 3.82 s, respectively, with a seizure length estimation error of 4.07 s. PMID:27093712
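
    The dual-detector "weight-and-average" stage can be sketched as follows. This is a toy illustration only: simple perceptrons stand in for the paper's area-efficient linear SVMs, and the feature data, oversampling trick, and weights are all invented for the example.

    ```python
    # Toy "weight-and-average" dual-detector: two linear classifiers are
    # trained with different class emphasis (one sensitivity-oriented, one
    # specificity-oriented); their real-valued outputs are averaged with
    # fixed weights before thresholding.
    import random

    random.seed(0)

    def train_perceptron(data, epochs=200):
        # simple perceptron stand-in for a linear SVM; data: [(features, label)]
        w = [0.0, 0.0]; b = 0.0
        for _ in range(epochs):
            for x, y in data:
                if y * (w[0]*x[0] + w[1]*x[1] + b) <= 0:
                    w = [w[0] + y*x[0], w[1] + y*x[1]]; b += y
        return w, b

    def decision(w, b, x):
        return w[0]*x[0] + w[1]*x[1] + b

    # linearly separable toy "feature" data: +1 = seizure, -1 = background
    pos = [([random.uniform(1, 2), random.uniform(1, 2)], 1) for _ in range(20)]
    neg = [([random.uniform(-2, -1), random.uniform(-2, -1)], -1) for _ in range(20)]
    data = pos + neg

    d1 = train_perceptron(pos * 3 + neg)   # positives oversampled: sensitivity
    d2 = train_perceptron(pos + neg * 3)   # negatives oversampled: specificity

    def classify(x, w_avg=(0.5, 0.5)):
        s = w_avg[0] * decision(*d1, x) + w_avg[1] * decision(*d2, x)
        return 1 if s > 0 else -1

    acc = sum(classify(x) == y for x, y in data) / len(data)
    ```

    Averaging the two decision values, rather than taking a hard vote, lets one detector's strong confidence outweigh the other's marginal disagreement.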

  10. The study on gear transmission multi-objective optimum design based on SQP algorithm

    NASA Astrophysics Data System (ADS)

    Li, Quancai; Qiao, Xuetao; Wu, Cuirong; Wang, Xingxing

    2011-12-01

Gear mechanisms are the most widely used transmission mechanisms; however, the traditional design method is complex and not accurate. Optimization design is an effective way to solve these problems when applied to the gear design method. Among optimization software, MATLAB has obvious advantages for engineering projects and numerical calculation. Taking a single gear transmission as an example, a mathematical model of the gear transmission system is established based on analysis of the objective function, selection of the design variables, and confirmation of the constraint conditions. The results show that multi-objective optimization design with the SQP algorithm in MATLAB is efficient, reliable, and simple.
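
    The abstract does not give the gear model's details, so as generic background, here is a minimal sketch of what one SQP-style iteration does: repeatedly solve the Newton-KKT system of the Lagrangian. The toy objective, constraint, and starting point below are invented for illustration, not taken from the paper.

    ```python
    # Minimal SQP-style iteration on a toy equality-constrained problem:
    #   minimize (x1-2)^2 + (x2-2)^2  subject to  x1^2 + x2^2 = 1.
    # Each iteration solves the Newton-KKT linear system for a step in the
    # variables and the Lagrange multiplier.

    def solve3(M, r):
        # Gauss-Jordan elimination with partial pivoting for a 3x3 system
        M = [row[:] + [ri] for row, ri in zip(M, r)]
        for c in range(3):
            p = max(range(c, 3), key=lambda i: abs(M[i][c]))
            M[c], M[p] = M[p], M[c]
            for i in range(3):
                if i != c:
                    f = M[i][c] / M[c][c]
                    M[i] = [a - f * b for a, b in zip(M[i], M[c])]
        return [M[i][3] / M[i][i] for i in range(3)]

    x1, x2, lam = 1.0, 0.0, 0.0
    for _ in range(30):
        g = x1 * x1 + x2 * x2 - 1.0                           # constraint value
        gL = [2*(x1 - 2) + lam*2*x1, 2*(x2 - 2) + lam*2*x2]   # grad of Lagrangian
        h = 2.0 + 2.0 * lam                                   # Lagrangian Hessian = h*I
        KKT = [[h,    0.0,  2*x1],
               [0.0,  h,    2*x2],
               [2*x1, 2*x2, 0.0]]
        d1, d2, dlam = solve3(KKT, [-gL[0], -gL[1], -g])
        x1, x2, lam = x1 + d1, x2 + d2, lam + dlam

    # the optimum lies at x1 = x2 = 1/sqrt(2) on the unit circle
    ```

    Each KKT solve is the "quadratic programming" subproblem of SQP specialized to one equality constraint; near the solution the iteration converges quadratically.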

  11. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools, enabling true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  12. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.

    PubMed

    Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente

    2015-01-01

Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion. To obtain a balanced solution, a parameter whose dispersion is large is given a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle the multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance on selecting the next hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt to the dynamic changes of the network conditions and topology effectively. PMID:26266412
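
    The dispersion-to-weight idea can be sketched in a few lines. This simplification replaces the paper's fuzzy inference with a plain weighted sum, and the parameter names and metric values are hypothetical; metrics are assumed pre-normalized so that larger is better.

    ```python
    # Sketch: each cross-layer parameter gets a dynamic weight inversely
    # related to its dispersion across the candidate relays, so that no
    # single widely spread-out metric dominates next-hop selection.
    import statistics

    candidates = {                      # candidate relay -> cross-layer metrics
        "A": {"residual_energy": 0.9, "link_quality": 0.5,  "delay": 0.4},
        "B": {"residual_energy": 0.5, "link_quality": 0.6,  "delay": 0.5},
        "C": {"residual_energy": 0.1, "link_quality": 0.55, "delay": 0.45},
    }
    params = ["residual_energy", "link_quality", "delay"]

    # dispersion of each parameter across the candidates
    disp = {p: statistics.pstdev(c[p] for c in candidates.values()) for p in params}

    # large dispersion -> small weight (and vice versa), normalized to sum to 1
    inv = {p: 1.0 / (d + 1e-9) for p, d in disp.items()}
    total = sum(inv.values())
    weights = {p: v / total for p, v in inv.items()}

    # weighted score per candidate; pick the best next hop
    score = {n: sum(weights[p] * m[p] for p in params)
             for n, m in candidates.items()}
    next_hop = max(score, key=score.get)
    ```

    In this toy data, residual energy is the most dispersed metric, so it is down-weighted and candidate "B", balanced across all three metrics, wins over "A", which is best only on the dispersed one.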

  13. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic

    PubMed Central

    Li, Ning; Martínez, José-Fernán; Díaz, Vicente Hernández

    2015-01-01

Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters’ dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion. To obtain a balanced solution, a parameter whose dispersion is large is given a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle the multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance on selecting the next hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt to the dynamic changes of the network conditions and topology effectively. PMID:26266412

  14. The Design of Flux-Corrected Transport (FCT) Algorithms For Structured Grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    A given flux-corrected transport (FCT) algorithm consists of three components: 1) a high order algorithm to which it reduces in smooth parts of the flow; 2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and 3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy.
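
    The three components can be seen together in the simplest setting, one-dimensional advection. The choices below (upwind as the low-order scheme, Lax-Wendroff as the high-order scheme, a Zalesak-style limiter) are standard textbook choices used here for illustration, not the optimized components the paper advocates.

    ```python
    # One FCT step for 1D advection q_t + u q_x = 0 (u > 0) on a periodic grid.

    def fct_step(q, c):
        """One FCT update; c = u*dt/dx is the Courant number (0 < c < 1)."""
        n = len(q)
        fl = [c * q[i] for i in range(n)]                   # upwind flux at i+1/2
        fh = [0.5 * c * (q[i] + q[(i+1) % n])
              - 0.5 * c * c * (q[(i+1) % n] - q[i])
              for i in range(n)]                            # Lax-Wendroff flux
        A = [fh[i] - fl[i] for i in range(n)]               # antidiffusive flux
        qtd = [q[i] - (fl[i] - fl[i-1]) for i in range(n)]  # low-order update

        R_plus, R_minus = [0.0] * n, [0.0] * n
        for i in range(n):
            nb = [q[i-1], q[i], q[(i+1) % n], qtd[i-1], qtd[i], qtd[(i+1) % n]]
            qmax, qmin = max(nb), min(nb)
            P_plus = max(A[i-1], 0.0) - min(A[i], 0.0)      # antidiffusion into cell i
            P_minus = max(A[i], 0.0) - min(A[i-1], 0.0)     # antidiffusion out of cell i
            R_plus[i] = min(1.0, (qmax - qtd[i]) / P_plus) if P_plus > 1e-15 else 0.0
            R_minus[i] = min(1.0, (qtd[i] - qmin) / P_minus) if P_minus > 1e-15 else 0.0

        # interface weights: the flux-limiting step
        C = [min(R_plus[(i+1) % n], R_minus[i]) if A[i] >= 0.0
             else min(R_plus[i], R_minus[(i+1) % n]) for i in range(n)]
        return [qtd[i] - (C[i] * A[i] - C[i-1] * A[i-1]) for i in range(n)]

    # advect a square wave: FCT stays conservative and creates no new extrema
    q = [1.0 if 20 <= i < 40 else 0.0 for i in range(100)]
    for _ in range(50):
        q = fct_step(q, 0.4)
    ```

    Because the correction is applied as fluxes, conservation is exact by construction; the limiter weights `C` are what the design choices in the paper aim to improve (e.g. non-clipping variants).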

  15. Validation of space/ground antenna control algorithms using a computer-aided design tool

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1995-01-01

    The validation of the algorithms for controlling the space-to-ground antenna subsystem for Space Station Alpha is an important step in assuring reliable communications. These algorithms have been developed and tested using a simulation environment based on a computer-aided design tool that can provide a time-based execution framework with variable environmental parameters. Our work this summer has involved the exploration of this environment and the documentation of the procedures used to validate these algorithms. We have installed a variety of tools in a laboratory of the Tracking and Communications division for reproducing the simulation experiments carried out on these algorithms to verify that they do meet their requirements for controlling the antenna systems. In this report, we describe the processes used in these simulations and our work in validating the tests used.

  16. The design and results of an algorithm for intelligent ground vehicles

    NASA Astrophysics Data System (ADS)

    Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.

    2010-01-01

    This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team is comprised of undergraduate computer science, engineering technology, marketing students, and one robotics faculty advisor. The team has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.

  17. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.

    PubMed

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
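
    The defect-weighted sampling idea can be illustrated with a deliberately crude stand-in for the defect: the number of target base pairs that are not complementary in the current sequence. (A real designer such as Enzymer scores the NUPACK ensemble defect, which also covers pseudoknots; the target pair list below is invented.)

    ```python
    # Toy defect-weighted sampling: mutations are drawn only from positions
    # that currently contribute to the defect, so effort concentrates where
    # the design is worst.
    import random

    random.seed(1)
    COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}
    target_pairs = [(0, 14), (1, 13), (2, 12), (5, 10), (6, 9)]  # hypothetical

    def defect(seq):
        # stand-in defect: count of non-complementary target pairs
        return sum(1 for i, j in target_pairs if COMP[seq[i]] != seq[j])

    seq = list("AAAAAAAAAAAAAAA")
    while defect(seq) > 0:
        # sample a defective pair (each contributes equally to this defect)
        i, j = random.choice([p for p in target_pairs
                              if COMP[seq[p[0]]] != seq[p[1]]])
        seq[j] = COMP[seq[i]]      # repair proposal; defect strictly decreases

    designed = "".join(seq)
    ```

    With a real ensemble-defect score the repair step would be a sampled mutation accepted or rejected after refolding, but the weighting principle is the same.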

  18. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  19. Cloning a neutral protease of Clostridium histolyticum, determining its substrate specificity, and designing a specific substrate.

    PubMed

    Maeda, Hiroshi; Nakagawa, Kanako; Murayama, Kazutaka; Goto, Masafumi; Watanabe, Kimiko; Takeuchi, Michio; Yamagata, Youhei

    2015-12-01

Islet transplantation is a prospective treatment for restoring normoglycemia in patients with type 1 diabetes. Islet isolation from pancreases by decomposition with proteolytic enzymes is necessary for transplantation. Two collagenases, collagenase class I (ColG) and collagenase class II (ColH), from Clostridium histolyticum have been used for islet isolation. Neutral proteases have been added to the collagenases for human islet isolation. A neutral protease from C. histolyticum (NP) and thermolysin from Bacillus thermoproteolyticus have been used for this purpose. Thermolysin is an extensively studied enzyme, but NP is not well known. We therefore cloned the gene encoding NP and constructed a Bacillus subtilis overexpression strain. The expressed enzyme was purified, and its substrate specificity was examined. We observed that the substrate specificity of NP was higher than that of thermolysin, and that the protein digestion activities of NP, as determined by colorimetric methods, were lower than those of thermolysin. It seems that decomposition using NP does not negatively affect islets during islet preparation from pancreases. Furthermore, we designed a novel substrate that allows the measurement of NP activity specifically in the enzyme mixture for islet preparation and the culture broth of C. histolyticum. The activity of NP can also be monitored during islet isolation. We hope the purified enzyme and this specific substrate contribute to the optimization of islet isolation from pancreases and that it leads to the success of islet transplantation and the improvement of the quality of life (QOL) for diabetic patients. PMID:26307443

  20. A drug-specific nanocarrier design for efficient anticancer therapy

    NASA Astrophysics Data System (ADS)

    Shi, Changying; Guo, Dandan; Xiao, Kai; Wang, Xu; Wang, Lili; Luo, Juntao

    2015-07-01

    The drug-loading properties of nanocarriers depend on the chemical structures and properties of their building blocks. Here we customize telodendrimers (linear dendritic copolymer) to design a nanocarrier with improved in vivo drug delivery characteristics. We do a virtual screen of a library of small molecules to identify the optimal building blocks for precise telodendrimer synthesis using peptide chemistry. With rationally designed telodendrimer architectures, we then optimize the drug-binding affinity of a nanocarrier by introducing an optimal drug-binding molecule (DBM) without sacrificing the stability of the nanocarrier. To validate the computational predictions, we synthesize a series of nanocarriers and evaluate systematically for doxorubicin delivery. Rhein-containing nanocarriers have sustained drug release, prolonged circulation, increased tolerated dose, reduced toxicity, effective tumour targeting and superior anticancer effects owing to favourable doxorubicin-binding affinity and improved nanoparticle stability. This study demonstrates the feasibility and versatility of the de novo design of telodendrimer nanocarriers for specific drug molecules, which is a promising approach to transform nanocarrier development for drug delivery.

  1. A drug-specific nanocarrier design for efficient anticancer therapy

    PubMed Central

    Shi, Changying; Guo, Dandan; Xiao, Kai; Wang, Xu; Wang, Lili; Luo, Juntao

    2015-01-01

    The drug-loading properties of nanocarriers depend on the chemical structures and properties of their building blocks. Here, we customize telodendrimers (linear-dendritic copolymer) to design a nanocarrier with improved in vivo drug delivery characteristics. We do a virtual screen of a library of small molecules to identify the optimal building blocks for precise telodendrimer synthesis using peptide chemistry. With rationally designed telodendrimer architectures, we then optimize the drug binding affinity of a nanocarrier by introducing an optimal drug-binding molecule (DBM) without sacrificing the stability of the nanocarrier. To validate the computational predictions, we synthesize a series of nanocarriers and evaluate systematically for doxorubicin delivery. Rhein-containing nanocarriers have sustained drug release, prolonged circulation, increased tolerated dose, reduced toxicity, effective tumor targeting and superior anticancer effects owing to favourable doxorubicin-binding affinity and improved nanoparticle stability. This study demonstrates the feasibility and versatility of the de novo design of telodendrimer nanocarriers for specific drug molecules, which is a promising approach to transform nanocarrier development for drug delivery. PMID:26158623

  2. General parameter relations for the Shinnar-Le Roux pulse design algorithm.

    PubMed

    Lee, Kuan J

    2007-06-01

The magnetization ripple amplitudes from a pulse designed by the Shinnar-Le Roux algorithm are a non-linear function of the Shinnar-Le Roux A and B polynomial ripples. In this paper, the method of Pauly et al. [J. Pauly, P. Le Roux, D. Nishimura, A. Macovski, Parameter relations for the Shinnar-Le Roux selective excitation pulse design algorithm, IEEE Transactions on Medical Imaging 10 (1991) 56-65.] has been extended to derive more general parameter relations. These relations can be used for cases outside the five classes considered by Pauly et al., in particular excitation pulses for flip angles that are not small or 90 degrees. Use of the new relations, together with an iterative procedure to obtain polynomials with the specified ripples from the Parks-McClellan algorithm, is shown to give simulated slice profiles that have the desired ripple amplitudes. PMID:17408999
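
    The non-linearity that the paper generalizes can be seen from the standard SLR magnetization mappings; the relations below are quoted from the broader SLR literature as background, not from this paper.

    ```latex
    % Standard SLR relations (hard-pulse approximation, Cayley--Klein
    % parameters \alpha, \beta with |\alpha|^2 + |\beta|^2 = 1):
    \begin{align}
      M_{xy} &= 2\,\alpha^{*}\beta  && \text{(excitation)} \\
      M_{z}  &= 1 - 2\,|\beta|^{2}  && \text{(inversion / saturation)}
    \end{align}
    % Example of the non-linear ripple mapping for inversion: in the
    % passband, \beta = 1 - \epsilon gives
    \begin{equation}
      M_{z} = 1 - 2(1-\epsilon)^{2} \approx -1 + 4\epsilon ,
    \end{equation}
    % roughly four times the polynomial ripple, while in the stopband
    % \beta = \epsilon gives only the quadratically small ripple
    \begin{equation}
      M_{z} = 1 - 2\epsilon^{2}.
    \end{equation}
    ```

    Since each pulse class maps the B-polynomial ripple onto magnetization ripple differently, each needs its own conversion, which is what the generalized parameter relations supply for flip angles between the small-tip and 90-degree cases.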

  3. A Dynamic Programming Algorithm for Optimal Design of Tidal Power Plants

    NASA Astrophysics Data System (ADS)

    Nag, B.

    2013-03-01

    A dynamic programming algorithm is proposed and demonstrated on a test case to determine the optimum operating schedule of a barrage tidal power plant to maximize the energy generation over a tidal cycle. Since consecutive sets of high and low tides can be predicted accurately for any tidal power plant site, this algorithm can be used to calculate the annual energy generation for different technical configurations of the plant. Thus an optimal choice of a tidal power plant design can be made from amongst different design configurations yielding the least cost of energy generation. Since this algorithm determines the optimal time of operation of sluice gate opening and turbine gates opening to maximize energy generation over a tidal cycle, it can also be used to obtain the annual schedule of operation of a tidal power plant and the minute-to-minute energy generation, for dissemination amongst power distribution utilities.
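
    The dynamic-programming idea can be sketched on a toy barrage model. All plant numbers, the tide profile, and the three-action model (hold / sluice / generate) below are invented for illustration; they are not the paper's test case.

    ```python
    # DP sketch: state = discretized basin level, actions = hold / sluice /
    # generate, reward = energy proportional to head when generating.
    # Backward induction yields the maximum energy obtainable over the cycle.
    import math

    T = 48                                            # half-hour steps over ~24 h
    tide = [3.0 + 3.0 * math.sin(2 * math.pi * t / 24.9) for t in range(T)]
    levels = [0.5 * k for k in range(13)]             # basin level grid, 0..6 m
    MIN_HEAD, FLOW, K = 1.0, 1.0, 1.0                 # toy plant parameters

    def transitions(level, t):
        """Yield (next_level, energy) for each feasible action."""
        h = level - tide[t]                           # head (basin minus sea)
        yield level, 0.0                              # hold (gates closed)
        step = max(-FLOW, min(FLOW, -h))              # sluice: drift toward tide
        yield min(6.0, max(0.0, level + step)), 0.0
        if abs(h) >= MIN_HEAD:                        # generate on ebb or flood
            d = -FLOW if h > 0 else FLOW
            yield min(6.0, max(0.0, level + d)), K * abs(h)

    def snap(x):                                      # snap to the level grid
        return min(levels, key=lambda v: abs(v - x))

    # backward induction: V[t][level] = best energy obtainable from (t, level)
    V = {T: {v: 0.0 for v in levels}}
    for t in range(T - 1, -1, -1):
        V[t] = {v: max(e + V[t + 1][snap(nv)] for nv, e in transitions(v, t))
                for v in levels}

    best = V[0][snap(3.0)]                            # start at mid basin level
    ```

    Because the tide sequence is known in advance, the same backward sweep can be rerun for any plant configuration, which is how the algorithm supports comparing design alternatives on annual energy yield.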

  4. Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven

    2010-05-01

Population growth and the progressive extension of urbanization in different parts of Iran cause an increasing demand for primary needs. Water is the most important natural need for human life. Meeting this need requires the design and construction of water distribution networks, which incur enormous costs on the country's budget. Any reduction in these costs enables more members of society to benefit at least cost. Investments by municipal councils therefore need to maximize benefits or minimize expenditures, and to achieve this the engineering design depends on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of the water distribution network of Mahabad City (north-west Iran). By designing two models and comparing the resulting costs, the abilities of the GA were determined. The GA-based model could find optimum pipe diameters that reduce the design costs of the network. Results show that water distribution network design using the genetic algorithm could reduce project costs by at least 7% in comparison to the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
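
    The GA formulation can be sketched as follows. All costs, lengths, and the hydraulic loss model are toy numbers invented for the example, not the Mahabad network; a feasible all-largest-diameter design is seeded so elitism guarantees a feasible answer.

    ```python
    # GA sketch for pipe sizing: a chromosome assigns each pipe a discrete
    # diameter; fitness = capital cost plus a heavy penalty when the toy
    # head-loss limit is exceeded. Lower fitness is better.
    import random

    random.seed(3)
    DIAMS = [0.1, 0.15, 0.2, 0.25, 0.3]          # available diameters (m)
    COST = {0.1: 40, 0.15: 70, 0.2: 110, 0.25: 160, 0.3: 220}  # cost/m (toy)
    LENGTHS = [300, 500, 400, 200]               # pipe lengths (m)
    H_MAX = 30.0                                 # allowed total head loss (m)

    def headloss(diams):                         # toy loss ~ L / d^5
        return sum(3e-5 * L / d ** 5 for L, d in zip(LENGTHS, diams))

    def cost(diams):
        return sum(COST[d] * L for L, d in zip(LENGTHS, diams))

    def fitness(diams):
        return cost(diams) + 1e6 * max(0.0, headloss(diams) - H_MAX)

    pop = [[random.choice(DIAMS) for _ in LENGTHS] for _ in range(29)]
    pop.append([max(DIAMS)] * len(LENGTHS))      # seed a known-feasible design
    for _ in range(200):
        pop.sort(key=fitness)
        elite = pop[:10]                         # elitism: best survive intact
        children = []
        while len(children) < 20:
            a, b = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < 0.3:            # mutation
                child[random.randrange(len(child))] = random.choice(DIAMS)
            children.append(child)
        pop = elite + children

    best = min(pop, key=fitness)
    ```

    The penalty converts the hydraulic constraint into the fitness so standard GA operators apply unchanged; elitism ensures the best design found never worsens across generations.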

  5. Genetic algorithms in conceptual design of a light-weight, low-noise, tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Wells, Valana L.

    1996-01-01

    This report outlines research accomplishments in the area of using genetic algorithms (GA) for the design and optimization of rotorcraft. It discusses the genetic algorithm as a search and optimization tool, outlines a procedure for using the GA in the conceptual design of helicopters, and applies the GA method to the acoustic design of rotors.

  6. Development of the Standardized DOE Spent Nuclear Fuel Canister Design and Preliminary Design Specification

    SciTech Connect

    A. G. Ware; D. K. Morton; N. L. Smith; S. D. Snow; T. E. Rahl

    1999-08-01

The Department of Energy (DOE) has developed a set of standard canisters for the handling, interim storage, transportation, and disposal in the national repository of DOE spent nuclear fuel (SNF). The Department's National Spent Nuclear Fuel Program (NSNFP) and the Office of Civilian Radioactive Waste Management (OCRWM) worked together, along with DOE sites, to develop the canister design. The standardized canister had to accommodate DOE SNF in a variety of potential storage and transportation systems and also be acceptable to the repository, based on current and anticipated future requirements. Since specific design details regarding storage, transportation, and repository disposal of DOE SNF are not yet finalized, the NSNFP recognized that it was necessary to specify a complete DOE SNF canister design. This design had to be flexible enough to be incorporated into various storage and transportation systems and yet standardized so that the canister would be acceptable to the repository for disposal. This paper discusses the efforts taken to gain DOE complex consensus, the reasons for various design decisions, the steps taken to demonstrate the robustness of the proposed canister design, and other insights associated with the development of the standardized DOE SNF canister design and the preliminary design specification.

  7. Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data

    PubMed Central

    2012-01-01

Background This paper addresses the prediction of the free energy of binding of a drug candidate with enzyme InhA associated with Mycobacterium tuberculosis. This problem is found within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that could be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, especially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. Results The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. Conclusions We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for the prediction of the free energy from the binding of a drug candidate with a flexible-receptor. PMID:23171000

  8. Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.

    2015-07-01

The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique called Craziness based Particle Swarm Optimization (CRPSO) is proposed. CRPSO is very simple in concept, easy to implement and computationally efficient, with two main advantages: it has fast, near-global convergence, and it uses robust control parameters. The performance of PSO depends on its control parameters and may be affected by premature convergence and stagnation problems. To overcome these problems the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, the sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with the real-coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO-based design results are also compared with PSPICE-based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
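The velocity update described above can be sketched as follows; the parameter names and default values here are illustrative assumptions, not taken from the paper:

```python
import random

def crpso_velocity(v_prev, p_best, g_best, x, w=0.7, c1=1.5, c2=1.5,
                   p_craziness=0.3, v_craziness=0.05):
    """One CRPSO-style velocity update for a single scalar dimension.

    Models the 'sudden direction change' of a bird/fish with:
      - a direction-reversal factor applied to the previous velocity, and
      - a 'craziness' velocity injected with a predefined probability.
    """
    # Direction-reversal factor: +1 or -1, chosen at random
    sign1 = random.choice([1.0, -1.0])
    v = (w * sign1 * v_prev
         + c1 * random.random() * (p_best - x)
         + c2 * random.random() * (g_best - x))
    # With probability p_craziness, add a craziness velocity scaled by a
    # second direction-reversal factor to maintain particle diversity.
    if random.random() < p_craziness:
        sign2 = random.choice([1.0, -1.0])
        v += sign2 * v_craziness
    return v
```

The position update then proceeds as in standard PSO, `x += v`.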

  9. A new adaptive merging and growing algorithm for designing artificial neural networks.

    PubMed

    Islam, Md Monirul; Sattar, Md Abdus; Amin, Md Faijul; Yao, Xin; Murase, Kazuyuki

    2009-06-01

This paper presents a new algorithm, called adaptive merging and growing algorithm (AMGA), for designing artificial neural networks (ANNs). This algorithm merges and adds hidden neurons during the training process of ANNs. The merge operation introduced in AMGA is a kind of mixed-mode operation, which is equivalent to pruning two neurons and adding one neuron. Unlike most previous studies, AMGA puts emphasis on autonomous functioning in the design process of ANNs. This is the main reason why AMGA uses an adaptive rather than a predefined, fixed strategy in designing ANNs. The adaptive strategy merges or adds hidden neurons based on the learning ability of hidden neurons or the training progress of ANNs. In order to reduce the amount of retraining after modifying ANN architectures, AMGA prunes hidden neurons by merging correlated hidden neurons and adds hidden neurons by splitting existing hidden neurons. The proposed AMGA has been tested on a number of benchmark problems in machine learning and ANNs, including the breast cancer, Australian credit card assessment, diabetes, gene, glass, heart, iris, and thyroid problems. The experimental results show that AMGA can design compact ANN architectures with good generalization ability compared to other algorithms. PMID:19203888
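The merge step described above can be sketched as follows, assuming correlation of hidden-layer activations as the merging criterion (a simplification of the idea, not the authors' exact procedure):

```python
import numpy as np

def most_correlated_pair(hidden_acts):
    """Select the pair of hidden neurons with the highest |correlation|.

    hidden_acts: (n_samples, n_hidden) matrix of hidden-layer activations.
    Returns (i, j) with i < j -- the candidates for merging.
    """
    corr = np.corrcoef(hidden_acts, rowvar=False)
    n = corr.shape[0]
    best, pair = -1.0, (0, 1)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > best:
                best, pair = abs(corr[i, j]), (i, j)
    return pair

def merge_neurons(w_in, w_out, i, j):
    """Merge neuron j into neuron i: average incoming weights, sum
    outgoing weights, then delete j. Pruning two neurons and adding one
    merged neuron is realized as this single merge operation."""
    w_in, w_out = w_in.copy(), w_out.copy()
    w_in[:, i] = 0.5 * (w_in[:, i] + w_in[:, j])   # averaged fan-in
    w_out[i, :] = w_out[i, :] + w_out[j, :]        # summed fan-out
    return np.delete(w_in, j, axis=1), np.delete(w_out, j, axis=0)
```

Retraining after such a merge is cheap because the merged neuron approximately preserves the contribution of the pruned pair.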

  10. The Impact of Critical Thinking and Logico-Mathematical Intelligence on Algorithmic Design Skills

    ERIC Educational Resources Information Center

    Korkmaz, Ozgen

    2012-01-01

    The present study aims to reveal the impact of students' critical thinking and logico-mathematical intelligence levels of students on their algorithm design skills. This research was a descriptive study and carried out by survey methods. The sample consisted of 45 first-year educational faculty undergraduate students. The data was collected by…

  11. A homotopy algorithm for synthesizing robust controllers for flexible structures via the maximum entropy design equations

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen

    1990-01-01

    One well known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.

  12. Multidisciplinary Design, Analysis, and Optimization Tool Development using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2008-01-01

Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center to automate the analysis and design process by leveraging existing tools such as NASTRAN, ZAERO and CFD codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This paper describes current approaches, recent results, and challenges for MDAO as demonstrated by our experience with the Ikhana fire pod design.

  13. Stochastic sensors designed for assessment of biomarkers specific to obesity.

    PubMed

    Cioates Negut, Catalina; Stefan-van Staden, Raluca-Ioana; Ungureanu, Eleonora-Mihaela; Udeanu, Denisa Ioana

    2016-09-01

    Two stochastic sensors based on the following oleamides: 1-adamantyloleamide and N,N-dimethyl-N-(2-oleylamidoethyl)amine physically immobilized on graphite paste were designed. The sensors were able to determine simultaneously from the whole blood of Wistar rats three biomarkers specific to obesity: leptin, interleukin-6 (IL-6) and plasminogen activator inhibitor 1 (PAI-1). The whole blood samples were obtained from Wistar rats treated with oleoylethanolamide (OEA), (Z)-N-[(1S)-2-hidroxy-1-(phenylmethyl) ethyl]-9octadecenamide (OLA), and with the aqueous solution of 1% Tween 80 used as solvent for oleamides formulations (control samples). The proposed sensors were very sensitive and reliable for the assay of obesity biomarkers in whole blood of rats. PMID:27288757

  14. An overview of field-specific designs of microbial EOR

    SciTech Connect

    Robertson, E.P.; Bala, G.A.; Fox, S.L.; Jackson, J.D.; Thomas, C.P.

    1995-12-31

The selection and design of an MEOR process for application in a specific field involves geological, reservoir, and biological characterization. Microbially mediated oil recovery mechanisms (biogenic gas, biopolymers, and biosurfactants) are defined by the types of microorganisms used. The engineering and biological character of a given reservoir must be understood to correctly select a microbial system to enhance oil recovery. This paper discusses the methods used to evaluate three fields with distinct characteristics and production problems for the applicability of MEOR technology. Reservoir characteristics and laboratory results indicated that MEOR would not be applicable in two of the three fields considered. The development of a microbial oil recovery process for the third field appeared promising. Development of a bacterial consortium capable of producing the desired metabolites was initiated, and field isolates were characterized.

  15. Novel Designs for Application Specific MEMS Pressure Sensors

    PubMed Central

    Fragiacomo, Giulio; Reck, Kasper; Lorenzen, Lasse; Thomsen, Erik V.

    2010-01-01

    In the framework of developing innovative microfabricated pressure sensors, we present here three designs based on different readout principles, each one tailored for a specific application. A touch mode capacitive pressure sensor with high sensitivity (14 pF/bar), low temperature dependence and high capacitive output signal (more than 100 pF) is depicted. An optical pressure sensor intrinsically immune to electromagnetic interference, with large pressure range (0–350 bar) and a sensitivity of 1 pm/bar is presented. Finally, a resonating wireless pressure sensor power source free with a sensitivity of 650 KHz/mmHg is described. These sensors will be related with their applications in harsh environment, distributed systems and medical environment, respectively. For many aspects, commercially available sensors, which in vast majority are piezoresistive, are not suited for the applications proposed. PMID:22163425

  16. An overview of field specific designs of microbial EOR

    SciTech Connect

    Robertson, E.P.; Bala, G.A.; Fox, S.L.; Jackson, J.D.; Thomas, C.P.

    1995-12-01

    The selection and design of a microbial enhanced oil recovery (MEOR) process for application in a specific field involves geological, reservoir, and biological characterization. Microbially mediated oil recovery mechanisms (biogenic gas, biopolymers, and biosurfactants) are defined by the types of microorganisms used. The engineering and biological character of a given reservoir must be understood to correctly select a microbial system to enhance oil recovery. The objective of this paper is to discuss the methods used to evaluate three fields with distinct characteristics and production problems for the applicability of MEOR technology. Reservoir characteristics and laboratory results indicated that MEOR would not be applicable in two of the three fields considered. The development of a microbial oil recovery process for the third field appeared promising. Development of a bacterial consortium capable of producing the desired metabolites was initiated and field isolates were characterized.

  17. DITDOS: A set of design specifications for distributed data inventories

    NASA Technical Reports Server (NTRS)

    King, T. A.; Walker, R. J.; Joy, S. P.

    1995-01-01

The analysis of space science data often requires researchers to work with many different types of data. For instance, correlative analysis can require data from multiple instruments on a single spacecraft, multiple spacecraft, and ground-based data. Typically, data from each source are available in a different format and have been written on a different type of computer, and so much effort must be spent to read the data and convert it to the computer and format that the researchers use in their analysis. The large and ever-growing amount of data and the large investment by the scientific community in software that requires a specific data format make using standard data formats impractical. A format-independent approach to accessing and analyzing disparate data is key to being able to deliver data to a diverse community in a timely fashion. The system in use at the Planetary Plasma Interactions (PPI) node of the NASA Planetary Data System (PDS) is based on the object-oriented Distributed Inventory Tracking and Data Ordering Specification (DITDOS), which describes data inventories in a storage-independent way. The specifications have been designed to make it possible to build DITDOS-compliant inventories that can exist on portable media such as CD-ROMs. The portable media can be moved within a system, or from system to system, and still be used without modification. Several applications have been developed to work with DITDOS-compliant data holdings. One is a windows-based client/server application, which helps guide the user in the selection of data. A user can select a database, then a data set, then a specific data file, and then either order the data and receive it immediately if it is online or request that it be brought online if it is not. A user can also view data by any of the supported methods. DITDOS makes it possible to use already existing applications for data-specific actions, and this is done whenever possible. Another application is a stand

  18. The design of a parallel adaptive paving all-quadrilateral meshing algorithm

    SciTech Connect

    Tautges, T.J.; Lober, R.R.; Vaughan, C.

    1995-08-01

Adaptive finite element analysis demands a great deal of computational resources, and as such is most appropriately solved in a massively parallel computer environment. This analysis will require other parallel algorithms before it can fully utilize MP computers, one of which is parallel adaptive meshing. A version of the paving algorithm is being designed which operates in parallel but which also retains the robustness and other desirable features present in the serial algorithm. Adaptive paving in a production mode is demonstrated using a Babuska-Rheinboldt error estimator on a classic linearly elastic plate problem. The design of the parallel paving algorithm is described, and is based on the decomposition of a surface into "virtual" surfaces. The topology of the virtual surface boundaries is defined using mesh entities (mesh nodes and edges) so as to allow movement of these boundaries with smoothing and other operations. This arrangement allows the use of the standard paving algorithm on subdomain interiors, after the negotiation of the boundary mesh.

  19. Application-specific coarse-grained reconfigurable array: architecture and design methodology

    NASA Astrophysics Data System (ADS)

    Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu

    2015-06-01

Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. Numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploring different levels of parallelism. However, a difference remains between the CGRA and the application-specific integrated circuit (ASIC). Some application domains, such as software-defined radios (SDRs), require flexibility as performance demands increase. More effective CGRA architectures are expected to be developed. Customisation of a CGRA according to its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented. A mapping algorithm based on ant colony optimisation is provided. Experimental results on the SDR target domain show that compared with other ordinary and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for given applications.

  20. Computational design of the affinity and specificity of a therapeutic T cell receptor.

    PubMed

    Pierce, Brian G; Hellman, Lance M; Hossain, Moushumi; Singh, Nishant K; Vander Kooi, Craig W; Weng, Zhiping; Baker, Brian M

    2014-02-01

    T cell receptors (TCRs) are key to antigen-specific immunity and are increasingly being explored as therapeutics, most visibly in cancer immunotherapy. As TCRs typically possess only low-to-moderate affinity for their peptide/MHC (pMHC) ligands, there is a recognized need to develop affinity-enhanced TCR variants. Previous in vitro engineering efforts have yielded remarkable improvements in TCR affinity, yet concerns exist about the maintenance of peptide specificity and the biological impacts of ultra-high affinity. As opposed to in vitro engineering, computational design can directly address these issues, in theory permitting the rational control of peptide specificity together with relatively controlled increments in affinity. Here we explored the efficacy of computational design with the clinically relevant TCR DMF5, which recognizes nonameric and decameric epitopes from the melanoma-associated Melan-A/MART-1 protein presented by the class I MHC HLA-A2. We tested multiple mutations selected by flexible and rigid modeling protocols, assessed impacts on affinity and specificity, and utilized the data to examine and improve algorithmic performance. We identified multiple mutations that improved binding affinity, and characterized the structure, affinity, and binding kinetics of a previously reported double mutant that exhibits an impressive 400-fold affinity improvement for the decameric pMHC ligand without detectable binding to non-cognate ligands. The structure of this high affinity mutant indicated very little conformational consequences and emphasized the high fidelity of our modeling procedure. Overall, our work showcases the capability of computational design to generate TCRs with improved pMHC affinities while explicitly accounting for peptide specificity, as well as its potential for generating TCRs with customized antigen targeting capabilities. PMID:24550723

  1. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
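The idea of ordering coupled design processes with a genetic algorithm can be sketched as follows; the cost function (counting feedback couplings, each of which forces an iterative subcycle) and the GA operators are illustrative assumptions, not DeMAID's actual implementation:

```python
import random

def feedback_cost(order, couplings):
    """Count couplings that flow 'backwards' (from a later process to an
    earlier one); each such coupling forces design-cycle iteration."""
    pos = {p: k for k, p in enumerate(order)}
    return sum(1 for src, dst in couplings if pos[src] > pos[dst])

def ga_order(processes, couplings, pop_size=30, generations=200, seed=0):
    """Toy genetic algorithm over process orderings: elitist selection
    plus swap mutation on permutations."""
    rng = random.Random(seed)
    pop = [rng.sample(processes, len(processes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: feedback_cost(o, couplings))
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: feedback_cost(o, couplings))
```

A real cost model would weight each feedback coupling by its time and cost impact rather than simply counting, but the search structure is the same.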

  2. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA), and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements as well as the existing features of the original version of DeMAID are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  3. Sizing of complex structure by the integration of several different optimal design algorithms

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1974-01-01

Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.

  4. Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design

    NASA Technical Reports Server (NTRS)

    Whorton, Mark; Buschek, Harald; Calise, Anthony J.

    1996-01-01

    Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.

  5. A firefly algorithm for solving competitive location-design problem: a case study

    NASA Astrophysics Data System (ADS)

    Sadjadi, Seyed Jafar; Ashtiani, Milad Gorji; Ramezanian, Reza; Makui, Ahmad

    2016-07-01

This paper aims at determining the optimal number of new facilities, as well as their optimal locations and design levels, under a budget constraint in a competitive environment, using a novel hybrid continuous and discrete firefly algorithm. A real-world application of locating new chain stores in the city of Tehran, Iran, is used and the results are analyzed. In addition, several examples have been solved to evaluate the efficiency of the proposed model and algorithm. The results demonstrate that the proposed method provides good-quality results for the test problems.

  6. Use of the particle swarm optimization algorithm for second order design of levelling networks

    NASA Astrophysics Data System (ADS)

    Yetkin, Mevlut; Inal, Cevat; Yigit, Cemal Ozer

    2009-08-01

    The weight problem in geodetic networks can be dealt with as an optimization procedure. This classic problem of geodetic network optimization is also known as second-order design. The basic principles of geodetic network optimization are reviewed. Then the particle swarm optimization (PSO) algorithm is applied to a geodetic levelling network in order to solve the second-order design problem. PSO, which is an iterative-stochastic search algorithm in swarm intelligence, emulates the collective behaviour of bird flocking, fish schooling or bee swarming, to converge probabilistically to the global optimum. Furthermore, it is a powerful method because it is easy to implement and computationally efficient. Second-order design of a geodetic levelling network using PSO yields a practically realizable solution. It is also suitable for non-linear matrix functions that are very often encountered in geodetic network optimization. The fundamentals of the method and a numeric example are given.
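A generic global-best PSO of the kind described above can be sketched as follows (a textbook sketch applied to a plain function minimization, not the authors' code; the parameter values are common defaults):

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100, seed=1,
                 w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimization.

    Each particle keeps its personal best; the swarm shares a global
    best; velocities blend inertia, cognitive, and social terms."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f
```

For second-order design, `f` would be a penalty built from the weight matrix and the criterion matrix of the levelling network; here any objective function works.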

  7. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  8. A general design algorithm for low optical loss adiabatic connections in waveguides.

    PubMed

    Chen, Tong; Lee, Hansuek; Li, Jiang; Vahala, Kerry J

    2012-09-24

    Single-mode waveguide designs frequently support higher order transverse modes, usually as a consequence of process limitations such as lithography. In these systems, it is important to minimize coupling to higher-order modes so that the system nonetheless behaves single mode. We propose a variational approach to design adiabatic waveguide connections with minimal intermodal coupling. An application of this algorithm in designing the "S-bend" of a whispering-gallery spiral waveguide is demonstrated with approximately 0.05 dB insertion loss. Compared to other approaches, our algorithm requires less fabrication resolution and is able to minimize the transition loss over a broadband spectrum. The method can be applied to a wide range of turns and connections and has the advantage of handling connections with arbitrary boundary conditions. PMID:23037432

  9. Design and Implementation of IIR Algorithms for Control of Longitudinal Coupled-Bunch Instabilities

    SciTech Connect

    Teytelman, Dmitry

    2000-05-16

The recent installation of third-harmonic RF cavities at the Advanced Light Source has raised instability growth rates, and also caused tune shifts (coherent and incoherent) of more than an octave over the required range of beam currents and energies. The larger growth rates and tune shifts have rendered control by the original bandpass FIR feedback algorithms unreliable. In this paper the authors describe an implementation of an IIR feedback algorithm with more flexible response tailoring. A cascade of up to 6 second-order IIR sections (12 poles and 12 zeros) was implemented in the DSPs of the longitudinal feedback system. Filter design has been formulated as an optimization problem and solved using constrained optimization methods. These IIR filters provided 2.4 times the control bandwidth as compared to the original FIR designs. Here the authors demonstrate the performance of the designed filters using transient diagnostic measurements from ALS and DAFNE.
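The cascade of second-order IIR sections described above can be sketched as follows; the coefficients of each section are left to the designer (e.g., chosen by constrained optimization, as in the paper), and this direct-form-I sketch is illustrative rather than the DSP implementation:

```python
def biquad_cascade(x, sections):
    """Filter a signal through a cascade of second-order IIR (biquad)
    sections in direct form I.

    Each section is (b0, b1, b2, a1, a2) with a0 normalised to 1, i.e.
        y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2].
    Up to six such sections give the 12-pole/12-zero response described
    in the abstract above."""
    y = list(x)
    for b0, b1, b2, a1, a2 in sections:
        x1 = x2 = y1 = y2 = 0.0   # per-section delay-line state
        out = []
        for xn in y:
            yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
            x2, x1 = x1, xn
            y2, y1 = y1, yn
            out.append(yn)
        y = out                   # output of this section feeds the next
    return y
```

Because the sections cascade, the overall transfer function is the product of the individual biquad responses, which is what makes per-section pole/zero placement a convenient design parameterisation.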

  10. Family-Specific Degenerate Primer Design: A Tool to Design Consensus Degenerated Oligonucleotides

    PubMed Central

    Goñi, Sandra Elizabeth; Lozano, Mario Enrique

    2013-01-01

Designing degenerate PCR primers for templates of unknown nucleotide sequence may be a very difficult task. In this paper, we present a new method to design degenerate primers, implemented in family-specific degenerate primer design (FAS-DPD) computer software, for which the starting point is a multiple alignment of related amino acids or nucleotide sequences. To assess their efficiency, four different genome collections were used, covering a wide range of genomic lengths: Arenavirus (10 × 10⁴ nucleotides), Baculovirus (0.9 × 10⁵ to 1.8 × 10⁵ bp), Lactobacillus sp. (1 × 10⁶ to 2 × 10⁶ bp), and Pseudomonas sp. (4 × 10⁶ to 7 × 10⁶ bp). In each case, FAS-DPD designed primers were tested computationally to measure specificity. Designed primers for Arenavirus and Baculovirus were tested experimentally. The method presented here is useful for designing degenerate primers on collections of related protein sequences, allowing detection of new family members. PMID:23533783
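The core idea of consensus degenerate primer design, collapsing each alignment column into an IUPAC degenerate base, can be sketched as follows (an illustration of the general approach, not FAS-DPD's actual scoring):

```python
# IUPAC ambiguity codes for every non-empty subset of DNA bases
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C",
    frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y",
    frozenset("GC"): "S", frozenset("AT"): "W",
    frozenset("GT"): "K", frozenset("AC"): "M",
    frozenset("CGT"): "B", frozenset("AGT"): "D",
    frozenset("ACT"): "H", frozenset("ACG"): "V",
    frozenset("ACGT"): "N",
}

def degenerate_consensus(alignment):
    """Collapse each column of a nucleotide multiple alignment into the
    IUPAC degenerate base covering all observed bases, and report the
    total degeneracy (product of per-column base counts), which governs
    how dilute each individual primer species is in the mix."""
    consensus, degeneracy = [], 1
    for column in zip(*alignment):
        bases = frozenset(column)
        consensus.append(IUPAC[bases])
        degeneracy *= len(bases)
    return "".join(consensus), degeneracy
```

A practical designer would then slide a window over the consensus and pick regions whose degeneracy stays low enough for efficient PCR.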

  11. A new iterative Fourier transform algorithm for optimal design in holographic optical tweezers

    NASA Astrophysics Data System (ADS)

    Memmolo, P.; Miccio, L.; Merola, F.; Ferraro, P.; Netti, P. A.

    2012-06-01

We propose a new Iterative Fourier Transform Algorithm (IFTA) capable of suppressing ghost traps and noise in Holographic Optical Tweezers (HOT), maintaining a high diffraction efficiency in a computational time comparable with that of other iterative algorithms. The process consists of planning a suitable ideal target for the optical tweezers as input to the classical IFTA, and we show that we are able to design up to 4 real traps, in the field of view imaged by the microscope objective, using an IFTA built on fictitious phasors located in strategic positions in the Fourier plane. The effectiveness of the proposed algorithm is evaluated for both numerical and optical reconstructions and compared with the other techniques known in the literature.
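The classical IFTA that such work builds on is the Gerchberg-Saxton-style iteration sketched below (the textbook algorithm, not the authors' modified version with fictitious phasors):

```python
import numpy as np

def ifta_hologram(target_intensity, iterations=30, seed=0):
    """Basic iterative Fourier-transform algorithm: find a phase-only
    hologram whose far field approximates a target intensity pattern.

    Alternates between the hologram (pupil) plane, where a phase-only
    constraint is imposed, and the trap (Fourier) plane, where the
    target amplitude is imposed while the computed phase is kept."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    # Start from a random phase at the hologram plane
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(iterations):
        far = np.fft.fft2(field)
        # Impose the target amplitude in the trap plane, keep the phase
        far = target_amp * np.exp(1j * np.angle(far))
        field = np.fft.ifft2(far)
        # Phase-only constraint at the hologram plane
        field = np.exp(1j * np.angle(field))
    return np.angle(field)  # the phase mask to display on the SLM
```

Ghost traps arise precisely because this plain iteration leaves the phase in the trap plane unconstrained, which is what modified targets and fictitious phasors are designed to tame.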

  12. International multidimensional authenticity specification (IMAS) algorithm for detection of commercial pomegranate juice adulteration.

    PubMed

    Zhang, Yanjun; Krueger, Dana; Durst, Robert; Lee, Rupo; Wang, David; Seeram, Navindra; Heber, David

    2009-03-25

The pomegranate fruit ( Punica granatum ) has become an international high-value crop for the production of commercial pomegranate juice (PJ). The perceived consumer value of PJ is due in large part to its potential health benefits based on a significant body of medical research conducted with authentic PJ. To establish criteria for authenticating PJ, a new International Multidimensional Authenticity Specifications (IMAS) algorithm was developed through consideration of existing databases and comprehensive chemical characterization of 45 commercial juice samples from 23 different manufacturers in the United States. In addition to analysis of commercial juice samples obtained in the United States, data from other analyses of pomegranate juice and fruits, including samples from Iran, Turkey, Azerbaijan, Syria, India, and China, were considered in developing this protocol. There is universal agreement that the presence of a highly constant group of six anthocyanins together with punicalagins characterizes the polyphenols in PJ. At a total sugar concentration of 16 degrees Brix, PJ contains characteristic sugars including mannitol at >0.3 g/100 mL. Ratios of glucose to mannitol of 4-15 and of glucose to fructose of 0.8-1.0 are also characteristic of PJ. In addition, no sucrose should be present because of isomerase activity during commercial processing. A stable carbon isotope ratio, determined by stable isotope ratio mass spectrometry, of greater than -25 per thousand assures that there is no added corn or cane sugar in PJ. Sorbitol was present at <0.025 g/100 mL; maltose and tartaric acid were not detected. The presence of the amino acid proline at >25 mg/L is indicative of added grape products. Malic acid at >0.1 g/100 mL indicates adulteration with apple, pear, grape, cherry, plum, or aronia juice. Other adulteration methods include the addition of highly concentrated aronia, blueberry, or blackberry juices or natural grape pigments to poor-quality juices to imitate the color of pomegranate juice, which results in
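The thresholds quoted above lend themselves to a simple rule-based screen; the function below encodes them directly (the field names are hypothetical, and this is a simplified illustration, not the full multidimensional IMAS protocol):

```python
def check_pj_authenticity(sample):
    """Screen a 16 degrees Brix juice sample against the authenticity
    criteria quoted in the abstract above. `sample` is a dict of
    measured values; returns a list of failed checks (an empty list
    means the sample is consistent with authentic PJ on these axes)."""
    failures = []
    if sample["mannitol_g_per_100ml"] <= 0.3:
        failures.append("mannitol should exceed 0.3 g/100 mL")
    ratio_gm = sample["glucose_g_per_100ml"] / sample["mannitol_g_per_100ml"]
    if not 4 <= ratio_gm <= 15:
        failures.append("glucose/mannitol ratio outside 4-15")
    ratio_gf = sample["glucose_g_per_100ml"] / sample["fructose_g_per_100ml"]
    if not 0.8 <= ratio_gf <= 1.0:
        failures.append("glucose/fructose ratio outside 0.8-1.0")
    if sample["sucrose_g_per_100ml"] > 0:
        failures.append("sucrose present (should be absent)")
    if sample["proline_mg_per_l"] > 25:
        failures.append("proline > 25 mg/L suggests added grape products")
    if sample["malic_acid_g_per_100ml"] > 0.1:
        failures.append("malic acid > 0.1 g/100 mL suggests other fruit juice")
    return failures
```

The real protocol is multidimensional, combining these markers with anthocyanin profiles and isotope ratios rather than treating each threshold independently.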

  13. The multi-disciplinary design study: A life cycle cost algorithm

    NASA Technical Reports Server (NTRS)

    Harding, R. R.; Pichi, F. J.

    1988-01-01

    The approach and results of a Life Cycle Cost (LCC) analysis of the Space Station Solar Dynamic Power Subsystem (SDPS) including gimbal pointing and power output performance are documented. The Multi-Discipline Design Tool (MDDT) computer program developed during the 1986 study has been modified to include the design, performance, and cost algorithms for the SDPS as described. As with the Space Station structural and control subsystems, the LCC of the SDPS can be computed within the MDDT program as a function of the engineering design variables. Two simple examples of MDDT's capability to evaluate cost sensitivity and design based on LCC are included. MDDT was designed to accept NASA's IMAT computer program data as input so that IMAT's detailed structural and controls design capability can be assessed with expected system LCC as computed by MDDT. No changes to IMAT were required. Detailed knowledge of IMAT is not required to perform the LCC analyses as the interface with IMAT is noninteractive.

  14. Space Station Cathode Design, Performance, and Operating Specifications

    NASA Technical Reports Server (NTRS)

    Patterson, Michael J.; Verhey, Timothy R.; Soulas, George; Zakany, James

    1998-01-01

    A plasma contactor system was baselined for the International Space Station (ISS) to eliminate/mitigate damaging interactions with the space environment. The system represents a dual-use technology which is a direct outgrowth of the NASA electric propulsion program and, in particular, the technology development efforts on ion thruster systems. The plasma contactor includes a hollow cathode assembly (HCA), a power electronics unit, and a xenon gas feed system. Under a pre-flight development program, these subsystems were taken to the level of maturity appropriate for transfer to U.S. industry for final development. NASA's Lewis Research Center was subsequently requested by ISS to manufacture and deliver the engineering model, qualification model, and flight HCA units. To date, multiple units have been built. One cathode has demonstrated approximately 28,000 hours lifetime, two development unit HCAs have demonstrated over 10,000 hours lifetime, and one development unit HCA has demonstrated more than 32,000 ignitions. All 8 flight HCAs have been manufactured, acceptance tested, and are ready for delivery to the flight contractor. This paper discusses the requirements, mechanical design, performance, operating specifications, and schedule for the plasma contactor flight HCAs.

  15. Measurement of Spray Drift with a Specifically Designed Lidar System.

    PubMed

    Gregorio, Eduard; Torrent, Xavier; Planas de Martí, Santiago; Solanelles, Francesc; Sanz, Ricardo; Rocadenbosch, Francesc; Masip, Joan; Ribes-Dasi, Manel; Rosell-Polo, Joan R

    2016-01-01

    Field measurements of spray drift are usually carried out by passive collectors and tracers. However, these methods are labour- and time-intensive and only provide point- and time-integrated measurements. Unlike these methods, the light detection and ranging (lidar) technique allows real-time measurements, obtaining information with temporal and spatial resolution. Recently, the authors have developed the first eye-safe lidar system specifically designed for spray drift monitoring. This prototype is based on a 1534 nm erbium-doped glass laser and an 80 mm diameter telescope, has scanning capability, and is easily transportable. This paper presents the results of the first experimental campaign carried out with this instrument. High coefficients of determination (R² > 0.85) were observed by comparing lidar measurements of the spray drift with those obtained by horizontal collectors. Furthermore, the lidar system allowed an assessment of the drift reduction potential (DRP) when comparing low-drift nozzles with standard ones, resulting in a DRP of 57% (preliminary result) for the tested nozzles. The lidar system was also used for monitoring the evolution of the spray flux over the canopy and to generate 2-D images of these plumes. The developed instrument is an advantageous alternative to passive collectors and opens the possibility of new methods for field measurement of spray drift. PMID:27070613

  17. Digital IIR Filters Design Using Differential Evolution Algorithm with a Controllable Probabilistic Population Size

    PubMed Central

    Zhu, Wu; Fang, Jian-an; Tang, Yang; Zhang, Wenbing; Du, Wei

    2012-01-01

    Design of a digital infinite-impulse-response (IIR) filter is the process of synthesizing and implementing a recursive filter network so that a set of prescribed excitations results in a set of desired responses. However, the error surface of IIR filters is usually nonlinear and multimodal. In order to find the true global minimum, an improved differential evolution (DE) algorithm is proposed for digital IIR filter design in this paper. The suggested algorithm is a DE variant with a controllable probabilistic population size (CPDE). It considers convergence speed and computational cost simultaneously by nonperiodically increasing or decreasing the number of individuals according to fitness diversity. In addition, we discuss some important aspects of IIR filter design, such as the cost function value, the influence of (noise) perturbations, the convergence rate and success percentage, and parameter measurement. Simulation results show that the presented algorithm is viable and comparable. Compared with six existing state-of-the-art digital IIR filter design methods in numerical experiments, CPDE is relatively more promising and competitive. PMID:22808191
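
    As background to the DE variant above, the core DE/rand/1/bin loop can be sketched in a few lines. This plain sketch does not reproduce the paper's controllable probabilistic population-size mechanism; it only shows the mutation/crossover/selection structure that CPDE builds on, applied to an arbitrary cost function.

```python
import random

def de_minimize(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Plain DE/rand/1/bin sketch: mutate with a scaled difference vector,
    binomially cross over, keep the trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```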

  18. Design of State-Space-Based Control Algorithms for Wind Turbine Speed Regulation: Preprint

    SciTech Connect

    Wright, A.; Balas, M.

    2002-01-01

    Control can improve the performance of wind turbines by enhancing energy capture and reducing dynamic loads. At the National Renewable Energy Laboratory, we are beginning to design control algorithms for regulation of turbine speed and power using state-space control designs. In this paper, we describe the design of such a control algorithm for regulation of rotor speed in full-load operation (region 3) for a two-bladed wind turbine. We base our control design on simple linear models of a turbine, which contain rotor and generator rotation, drivetrain torsion, and rotor flap degrees of freedom (first mode only). We account for wind-speed fluctuations using disturbance-accommodating control. We show the capability of these control schemes to stabilize the modeled turbine modes via pole placement while using state estimation to reduce the number of turbine measurements that are needed for these control algorithms. We incorporate these controllers into the FAST-AD code and show simulation results for various conditions. Finally, we report conclusions to this work and outline future studies.

  19. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point during ascent, a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used: either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight; the SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics, and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  20. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using the CSA outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and a faster convergence rate). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
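
    The general CSA is easy to sketch. The following minimal implementation, with Mantegna-style Levy-flight steps, is an illustration of the generic method under stated assumptions, not the paper's FD-IIR design code; the WLS fitness is abstracted into an arbitrary callable f.

```python
import math
import random

def cuckoo_search(f, bounds, n_nests=15, pa=0.25, iters=400, seed=7):
    """Generic cuckoo search sketch: Levy-flight candidate generation,
    greedy replacement of a random nest, abandonment of the worst nests."""
    rng = random.Random(seed)
    dim = len(bounds)
    beta = 1.5
    # Mantegna's algorithm for Levy-stable step sizes
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    def levy():
        return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        best = nests[min(range(n_nests), key=fit.__getitem__)]
        # new cuckoo via a Levy flight biased by the distance to the best nest
        i = rng.randrange(n_nests)
        cand = clip([nests[i][j] + 0.01 * levy() * (nests[i][j] - best[j])
                     for j in range(dim)])
        j = rng.randrange(n_nests)
        fc = f(cand)
        if fc < fit[j]:
            nests[j], fit[j] = cand, fc
        # abandon a fraction pa of the worst nests and rebuild them randomly
        for k in sorted(range(n_nests), key=fit.__getitem__)[-max(1, int(pa * n_nests)):]:
            nests[k] = [rng.uniform(lo, hi) for lo, hi in bounds]
            fit[k] = f(nests[k])
    b = min(range(n_nests), key=fit.__getitem__)
    return nests[b], fit[b]
```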

  1. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  2. Interactive evolutionary computation with minimum fitness evaluation requirement and offline algorithm design.

    PubMed

    Ishibuchi, Hisao; Sudo, Takahiko; Nojima, Yusuke

    2016-01-01

    In interactive evolutionary computation (IEC), each solution is evaluated by a human user. Usually the total number of examined solutions is very small. In some applications such as hearing aid design and music composition, only a single solution can be evaluated at a time by a human user. Moreover, accurate and precise numerical evaluation is difficult. Based on these considerations, we formulated an IEC model with the minimum requirement for fitness evaluation ability of human users under the following assumptions: They can evaluate only a single solution at a time, they can memorize only a single previous solution they have just evaluated, their evaluation result on the current solution is whether it is better than the previous one or not, and the best solution among the evaluated ones should be identified after a pre-specified number of evaluations. In this paper, we first explain our IEC model in detail. Next we propose a ([Formula: see text])ES-style algorithm for our IEC model. Then we propose an offline meta-level approach to automated algorithm design for our IEC model. The main feature of our approach is the use of a different mechanism (e.g., mutation, crossover, random initialization) to generate each solution to be evaluated. Through computational experiments on test problems, our approach is compared with the ([Formula: see text])ES-style algorithm where a solution generation mechanism is pre-specified and fixed throughout the execution of the algorithm. PMID:27026888
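
    The minimal-feedback model above maps naturally onto a (1+1)-ES-style loop in which the only information used is a better/worse comparison against the single remembered previous solution. A sketch, with the human user modeled as a comparison callback (names and the Gaussian mutation are illustrative assumptions):

```python
import random

def one_plus_one_es(better_than, init, n_evals, sigma=0.3, seed=3):
    """(1+1)-ES-style search for the IEC model above: better_than(cand,
    incumbent) stands in for the user's better/worse judgement, the only
    feedback the model allows; the incumbent is the best solution so far."""
    rng = random.Random(seed)
    current = list(init)
    for _ in range(n_evals):
        cand = [x + rng.gauss(0, sigma) for x in current]  # mutate
        if better_than(cand, current):                     # user's verdict
            current = cand
    return current
```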

  3. Thermal design of spiral heat exchangers and heat pipes through global best algorithm

    NASA Astrophysics Data System (ADS)

    Turgut, Oğuz Emrah; Çoban, Mustafa Turhan

    2016-07-01

    This study deals with global best algorithm based thermal design of spiral heat exchangers and heat pipes. Spiral heat exchangers are devices which are highly efficient in extremely dirty and fouling process duties. The spiral geometry inherent in the design maintains high heat transfer coefficients while avoiding the hazardous effects of fouling and uneven fluid distribution in the channels. Heat pipes have wide usage in industry. Thanks to the two-phase cycle on which they operate, they can transfer a high amount of heat with a negligible temperature gradient. In this work, a new stochastic optimization method, the global best algorithm (GBA), is applied to multi-objective optimization of spiral heat exchangers as well as single-objective optimization of heat pipes. The global best algorithm is easy to implement, derivative-free, and can be reliably applied to any optimization problem. Case studies taken from the literature are solved by the proposed algorithm, and the results reported in the literature are compared with those acquired by GBA. The comparisons reveal that GBA attains better results than the literature studies in terms of solution accuracy and efficiency.

  4. SU-E-T-305: Study of the Eclipse Electron Monte Carlo Algorithm for Patient Specific MU Calculations

    SciTech Connect

    Wang, X; Qi, S; Agazaryan, N; DeMarco, J

    2014-06-01

    Purpose: To evaluate the Eclipse electron Monte Carlo (eMC) algorithm based on patient-specific monitor unit (MU) calculations, and to propose a new factor which quantitatively predicts the discrepancy in MUs between the eMC algorithm and hand calculations. Methods: Electron treatments were planned for 61 patients on Eclipse (Version 10.0) using the eMC algorithm for Varian TrueBeam linear accelerators. For each patient, the same treatment beam angle was kept for a point dose calculation at dmax performed under the reference condition, which used an open beam with a 15×15 cm2 cone and 100 cm SSD. A patient-specific correction factor (PCF) was obtained by taking the ratio of this point dose to the calibration dose, which is 1 cGy per MU delivered at dmax. The hand calculation results were corrected by the PCFs and compared with the MUs from the treatment plans. Results: The MUs from the treatment plans were on average (7.1±6.1)% higher than the hand calculations. The average MU difference between the corrected hand calculations and the eMC treatment plans was (0.07±3.48)%. A correlation coefficient of 0.8 was found between (1-PCF) and the percentage difference between the treatment plans and hand calculations. Most outliers were treatment plans with small beam openings (< 4 cm) and low-energy beams (6 and 9 MeV). Conclusion: For CT-based patient treatment plans, the eMC algorithm tends to generate larger MUs than hand calculations. Caution should be taken for eMC patient plans with small field sizes and low-energy beams. We hypothesize that the PCF reflects the influence of patient surface curvature and tissue inhomogeneity on the patient-specific percent depth dose (PDD) curve and MU calculations in the eMC algorithm.
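
    The PCF described above is the measured reference-condition point dose divided by the calibration dose of 1 cGy per MU. A minimal sketch of the bookkeeping; the form of the corrected hand calculation is an illustrative assumption, not spelled out in the abstract:

```python
def patient_correction_factor(point_dose_cGy, mu_delivered):
    """PCF: ratio of the reference-condition point dose at dmax to the
    calibration dose (1 cGy per MU delivered at dmax)."""
    calibration_dose_cGy = 1.0 * mu_delivered  # 1 cGy per MU
    return point_dose_cGy / calibration_dose_cGy

def corrected_hand_mu(prescription_cGy, output_cGy_per_MU, pcf):
    """Hand-calculated MU corrected by the PCF (hypothetical formula,
    for illustration only)."""
    return prescription_cGy / (output_cGy_per_MU * pcf)
```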

  5. Optimization of high speed pipelining in FPGA-based FIR filter design using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Botella, Guillermo; Romero, David E. T.; Kumm, Martin

    2012-06-01

    This paper compares FPGA-based fully pipelined multiplierless FIR filter design options. Comparisons of distributed arithmetic (DA), common sub-expression (CSE) sharing, and n-dimensional reduced adder graph (RAG-n) multiplierless filter design methods in terms of size, speed, and A*T product are provided. Since DA designs are table-based and CSE/RAG-n designs are adder-based, FPGA synthesis design data are used for a realistic comparison. Superior results from a genetic-algorithm-based optimization of pipeline registers and non-output fundamental coefficients are shown. FIR filters (posted as open source by Kastner et al.) with lengths from 6 to 151 coefficients are used.

  6. Multi-Objective Optimal Design of Switch Reluctance Motors Using Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehran; Rashidi, Farzan

    In this paper a design methodology based on a multi-objective genetic algorithm (MOGA) is presented to design switched reluctance motors with multiple conflicting objectives such as efficiency, power factor, full-load torque, full-load current, specified dimensions, weight of copper and iron, and manufacturing cost. The optimally designed motor is compared with an industrial motor having the same ratings. Results verify that the proposed method gives better performance for multi-objective optimization problems. The results of the optimal design show a reduction in the specified dimensions, weight, and manufacturing cost, and an improvement in the power factor, full-load torque, and efficiency of the motor. A major advantage of the method is its quite short response time in obtaining the optimal design.

  7. Development and benefit analysis of a sector design algorithm for terminal dynamic airspace configuration

    NASA Astrophysics Data System (ADS)

    Sciandra, Vincent

    performance of the algorithm-generated sectors to that of the current sectors for a variety of configurations and scenarios. The effect of dynamic airspace configuration will then be tested by observing the effect of the update rate on the algorithm-generated sector results. Finally, the algorithm will be used with simulated data, whose evaluation will show the ability of the sector design algorithm to meet the objectives of the NextGen system. Upon validation, the algorithm may be successfully incorporated into a larger Terminal Flow Algorithm, developed by our partners at Mosaic ATM, as the final step in the TDAC process.

  8. Design And Implementation Of A Multi-Sensor Fusion Algorithm On A Hypercube Computer Architecture

    NASA Astrophysics Data System (ADS)

    Glover, Charles W.

    1990-03-01

    was obtained. This paper will also discuss the design of a completely parallel MSI algorithm.

  9. A mission-oriented orbit design method of remote sensing satellite for region monitoring mission based on evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Zhang, Jing; Yao, Huang

    2015-12-01

    Remote sensing satellites play an increasingly prominent role in environmental monitoring and disaster rescue. Taking advantage of near-identical illumination conditions over a given place and of global coverage, most of these satellites operate in sun-synchronous orbits. However, this inevitably brings some problems; the most significant is that the temporal resolution of a sun-synchronous satellite cannot satisfy the demands of specific region-monitoring missions. To overcome these disadvantages, two methods are exploited: the first is to build a satellite constellation containing multiple sun-synchronous satellites, as the CHARTER mechanism has done; the second is to design a non-predetermined orbit based on the concrete mission demand. An effective method for remote sensing satellite orbit design based on a multi-objective evolutionary algorithm is presented in this paper. The orbit design problem is converted into a multi-objective optimization problem, and a fast and elitist multi-objective genetic algorithm is utilized to solve it. First, the demands of the mission are transformed into multiple objective functions, and the six orbital elements of the satellite are taken as genes in the design space; then a simulated evolution is performed. An optimal solution can be obtained after a specified number of generations via the evolution operations (selection, crossover, and mutation). To examine the validity of the proposed method, a case study is introduced: orbit design of an optical satellite for regional disaster monitoring, in which the mission demands include minimizing the average revisit time interval, among other objectives. The simulation results show that the solution obtained by our method for this mission meets the users' demands. We conclude that the method presented in this paper is efficient for remote sensing orbit design.
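
    At the core of the fast, elitist multi-objective genetic algorithm referred to above (NSGA-II-style selection) is Pareto dominance. A minimal sketch for minimization problems:

```python
def dominates(a, b):
    """Pareto dominance (minimization): a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a set of objective vectors: the elitist
    selection core of NSGA-II-style algorithms."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```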

  10. Development of an algorithm to provide awareness in choosing study designs for inclusion in systematic reviews of healthcare interventions: a method study

    PubMed Central

    Peinemann, Frank; Kleijnen, Jos

    2015-01-01

    Objectives To develop an algorithm that aims to provide guidance and awareness for choosing multiple study designs in systematic reviews of healthcare interventions. Design Method study: (1) to summarise the literature base on the topic; (2) to apply the integration of various study types in systematic reviews; (3) to devise decision points and outline a pragmatic decision tree; (4) to check the plausibility of the algorithm by backtracking its pathways in four systematic reviews. Results (1) The results of our systematic review of the published literature have already been published. (2) We recaptured the experience from our four previously conducted systematic reviews that required the integration of various study types. (3) We chose length of follow-up (long, short), frequency of events (rare, frequent) and type of outcome (death, disease, discomfort, disability, dissatisfaction) as decision points, and aligned the study design labels according to the Cochrane Handbook. We also considered practical or ethical concerns and the problem of unavailable high-quality evidence. While applying the algorithm, disease-specific circumstances and the aims of interventions should be considered. (4) We confirmed the plausibility of the pathways of the algorithm. Conclusions We propose that the algorithm can assist in bringing the seminal features of a systematic review with multiple study designs to the attention of anyone who is planning to conduct a systematic review. It aims to increase awareness, and we think that it may reduce the time burden on review authors and may contribute to the production of a higher quality review. PMID:26289450

  11. Attribute Index and Uniform Design Based Multiobjective Association Rule Mining with Evolutionary Algorithm

    PubMed Central

    Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the index of each attribute. All the metric values needed to evaluate an association rule then require no further database scans, but are acquired solely from the attribute indices. The paper treats association rule mining as a multi-objective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents an algorithm for attribute-index and uniform-design based multi-objective association rule mining with an evolutionary algorithm, abbreviated IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance, and it can significantly reduce the number of comparisons and the time consumption. PMID:23766683
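
    The attribute index strategy can be sketched directly: one database pass builds an item-to-transaction-id map, after which rule support and confidence reduce to set intersections. Function and variable names below are illustrative, not taken from IUARMMEA:

```python
def build_attribute_index(transactions):
    """Single scan of the database: map each attribute (item) to the set
    of transaction ids that contain it."""
    index = {}
    for tid, items in enumerate(transactions):
        for item in items:
            index.setdefault(item, set()).add(tid)
    return index

def rule_metrics(index, antecedent, consequent, n_transactions):
    """Support and confidence of antecedent -> consequent computed purely
    from the index, with no further database scans."""
    cover_a = set.intersection(*(index.get(i, set()) for i in antecedent))
    cover_ab = cover_a & set.intersection(*(index.get(i, set()) for i in consequent))
    support = len(cover_ab) / n_transactions
    confidence = len(cover_ab) / len(cover_a) if cover_a else 0.0
    return support, confidence
```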

  12. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
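
    For small candidate sets, the maximal-information criterion described above can be evaluated by exhaustive search, which is also how the GA's optimum was verified in the small-scale test case. A sketch under stated assumptions (the sensitivity matrix is taken as precomputed, e.g. from the POD-reduced model; names are illustrative):

```python
from itertools import combinations

def best_design(sensitivity, candidate_wells, k):
    """Exhaustive search over k-well observation designs maximizing the
    sum of squared sensitivities, where sensitivity[w][p] is the
    sensitivity of the head at well w to unknown pumping parameter p."""
    def score(design):
        return sum(s * s for w in design for s in sensitivity[w])
    return max(combinations(candidate_wells, k), key=score)
```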

  13. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.

  14. An algorithm to design finite field multipliers using a self-dual normal basis

    NASA Technical Reports Server (NTRS)

    Wang, Charles C.

    1989-01-01

    The concept of using a self-dual normal basis to design the Massey-Omura finite-field multiplier is presented. An algorithm is given to locate a self-dual normal basis for GF(2m) for odd m. A method to construct the product function for designing the Massey-Omura multiplier is developed. It is shown that the construction of the product function based on a self-dual basis is simpler than that based on an arbitrary normal basis.

  15. Automated IDEF3 and IDEF4 systems design specification document

    NASA Technical Reports Server (NTRS)

    Friel, Patricia Griffith; Blinn, Thomas M.

    1989-01-01

    The current design is presented for the automated IDEF3 and IDEF4 tools. The philosophy behind the tool designs is described, as well as a conceptual view of the interacting components of the two tools. Finally, a detailed description is presented of the existing designs for the tools, using IDEF3 process descriptions and IDEF4 diagrams. In preparing these designs, the IDEF3 and IDEF4 methodologies were very effective in defining the structure and operation of the tools. The experience of designing systems in this fashion was very valuable and led to future systems being designed in this way. However, the number of IDEF3 and IDEF4 diagrams that were produced on a Macintosh for this document attests to the need for an automated tool to simplify the design process.

  16. Optimal Multitrial Prediction Combination and Subject-Specific Adaptation for Minimal Training Brain Switch Designs.

    PubMed

    Spyrou, Loukianos; Blokland, Yvonne; Farquhar, Jason; Bruhn, Jorgen

    2016-06-01

    Brain-Computer Interface (BCI) systems are traditionally designed by taking into account user-specific data to enable practical use. More recently, subject-independent (SI) classification algorithms have been developed which bypass subject-specific adaptation and enable rapid use of the system. A brain switch is a particular BCI system in which the system is required to distinguish between two separate mental tasks corresponding to the on and off commands of a switch. Such applications require a low false positive rate (FPR) while having an acceptable response time (RT) until the switch is activated. In this work, we develop a methodology that produces optimal brain switch behavior through subject-specific (SS) adaptation of (a) a multitrial prediction combination model and (b) an SI classification model. We propose a statistical model for combining classifier predictions that enables optimal FPR calibration through a short calibration session. We trained an SI classifier on a training synchronous dataset and tested our method on separate holdout synchronous and asynchronous brain switch experiments. Although our SI model obtained similar performance on the training and holdout datasets (86% and 85% for the synchronous and 69% and 66% for the asynchronous experiments), the between-subject FPR and TPR variability was high (up to 62%). The short calibration session was then employed to alleviate that problem and provide decision thresholds that achieve, when possible, a target FPR of 1% with good accuracy for both datasets. PMID:26529768
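
    The calibration step can be illustrated with a simple quantile rule for picking a decision threshold that yields a target FPR on calibration-session scores of the "off" (negative) class. This is a hedged sketch only; the paper's statistical model for combining multitrial predictions is more elaborate.

```python
def calibrate_threshold(negative_scores, target_fpr):
    """Return a threshold such that roughly target_fpr of the negative
    calibration scores exceed it (simple empirical quantile rule)."""
    s = sorted(negative_scores)
    k = int(round((1 - target_fpr) * len(s))) - 1
    k = max(0, min(len(s) - 1, k))
    return s[k]
```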

  17. Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini

    2015-03-01

    Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT, using hybrid multi-atlas registration, active contours, and knowledge-based region separation. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta and aortic root are located by multi-atlas registration followed by active contours refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications from these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001, volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30, volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate, and automated quantification of CAC scores from non-contrast CT is feasible.
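
    The global score referred to here follows the standard Agatston convention: each detected lesion contributes its area times a density weight set by its peak attenuation. A simplified per-lesion sketch (the real score is accumulated slice by slice with minimum-area rules; the lesion list below is hypothetical and assumed to come from the detection steps described above):

```python
def agatston_weight(peak_hu):
    """Standard Agatston density factor for a calcified lesion."""
    if peak_hu < 130:
        return 0      # below the calcium attenuation threshold
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """Sum of per-lesion area (mm^2) times density weight over the
    detected lesions of a territory (or the whole heart)."""
    return sum(area * agatston_weight(peak_hu) for area, peak_hu in lesions)

# hypothetical detected lesions: (area in mm^2, peak attenuation in HU)
score = agatston_score([(4.0, 250), (2.5, 410)])
```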

  18. Direction-of-arrival interferometer array design using genetic algorithm with fuzzy logic

    NASA Astrophysics Data System (ADS)

    Straatveit, S. Nils

    2004-04-01

    Design of interferometer arrays for radio frequency direction of arrival estimation involves optimizing conflicting requirements. For example, high resolution conflicts with low cost. Lower-level requirements also invoke lower-level design issues, such as ambiguity in the direction of arrival angle. A more efficient array design process is described here, which uses a genetic algorithm with a growing genome and fuzzy logic scoring. Extensive simulation software is also needed. Simulation starts with randomized small array configurations. These are then evaluated against the fitness functions, with results scored using fuzzy logic. The best-fit members of the population are combined to produce the next generation. A mutation function introduces slight randomness in some genomes. Finally, if the overall population scores well, the size of the genome is increased until the final genome size is consistent with the desired array resolution requirement. The genetic algorithm design process described here produced a number of array designs. The results indicate discrete stages or steps in the optimization and an interesting trade-off of lower resolution for greater accuracy.
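
    The pairing of a genetic algorithm with fuzzy-logic scoring can be illustrated with a toy linear-array problem: each genome bit switches an array element on, resolution improves with aperture, cost grows with element count, and the fuzzy score is the min of the two membership values, so a design must do acceptably on every requirement. This is a simplified stand-in with a fixed genome length (not the paper's growing genome) and invented fitness terms:

```python
import random

def fitness(genome):
    """Fuzzy-logic scoring of a linear array: membership for resolution
    (larger aperture is better) and for cost (fewer elements is better)
    are combined with min(), a fuzzy AND of the requirements."""
    elems = [i for i, bit in enumerate(genome) if bit]
    if len(elems) < 2:
        return 0.0
    mu_res = (elems[-1] - elems[0]) / (len(genome) - 1)  # aperture in [0, 1]
    mu_cost = max(0.0, (16 - len(elems)) / 16)           # cheaper is better
    return min(mu_res, mu_cost)

def evolve(n_bits=32, pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # best-fit half survives
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                    # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(n_bits)] ^= 1        # slight mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```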

  19. Designing an Algorithm to Preserve Privacy for Medical Record Linkage With Error-Prone Data

    PubMed Central

    Pal, Doyel; Chen, Tingting; Khethavath, Praveen

    2014-01-01

    Background Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In record linkage, protecting the patients’ privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are restricted to textual data. Objective To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. Methods To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, which allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. Results We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulation of the Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other’s database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm.
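
    Stripped of the cryptography, the matching rule of an error-tolerant linker reduces to a distance threshold over the numerical attributes; the protocol's job is to evaluate that comparison without either party revealing its records. A plain-text sketch of just the matching rule (the fields and threshold are illustrative assumptions):

```python
import math

def link_records(rec_a, rec_b, threshold):
    """Match two numerical records when their Euclidean distance is
    below the threshold, so small errors such as typos still link.
    (The system described above evaluates this comparison under
    encryption; it is shown in the clear here purely for illustration.)"""
    return math.dist(rec_a, rec_b) < threshold

# e.g. (systolic BP, weight in kg), with a one-digit typo in one field
same_patient = link_records((120.0, 70.0), (121.0, 70.0), threshold=2.0)
different = link_records((120.0, 70.0), (150.0, 82.0), threshold=2.0)
```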

  20. Trading accuracy for speed: A quantitative comparison of search algorithms in protein sequence design.

    PubMed

    Voigt, C A; Gordon, D B; Mayo, S L

    2000-06-01

    Finding the minimum energy amino acid side-chain conformation is a fundamental problem in both homology modeling and protein design. To address this issue, numerous computational algorithms have been proposed. However, there have been few quantitative comparisons between methods and there is very little general understanding of the types of problems that are appropriate for each algorithm. Here, we study four common search techniques: Monte Carlo (MC) and Monte Carlo plus quench (MCQ); genetic algorithms (GA); self-consistent mean field (SCMF); and dead-end elimination (DEE). Both SCMF and DEE are deterministic, and if DEE converges, it is guaranteed that its solution is the global minimum energy conformation (GMEC). This provides a means to compare the accuracy of SCMF and the stochastic methods. For the side-chain placement calculations, we find that DEE rapidly converges to the GMEC in all the test cases. The other algorithms converge on significantly incorrect solutions; the average fraction of incorrect rotamers for SCMF is 0.12, GA 0.09, and MCQ 0.05. For the protein design calculations, design positions are progressively added to the side-chain placement calculation until the time required for DEE diverges sharply. As the complexity of the problem increases, the accuracy of each method is determined so that the results can be extrapolated into the region where DEE is no longer tractable. We find that both SCMF and MCQ perform reasonably well on core calculations (fraction amino acids incorrect is SCMF 0.07, MCQ 0.04), but fail considerably on the boundary (SCMF 0.28, MCQ 0.32) and surface calculations (SCMF 0.37, MCQ 0.44). PMID:10835284
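
    The guarantee mentioned for DEE comes from its elimination criterion: rotamer r at position i can be discarded when some competitor t beats it in the worst case over all neighbours, i.e. E(i_r) - E(i_t) + sum_j min_s [E(i_r, j_s) - E(i_t, j_s)] > 0. One pass of this simple (Desmet-style) criterion might look like the following sketch, with an assumed data layout of per-position self energies and pairwise tables for i < j:

```python
def dee_pass(E_self, E_pair):
    """One sweep of simple dead-end elimination. E_self[i][r] is the
    self energy of rotamer r at position i; E_pair[(i, j)][r][s] (for
    i < j) is the pair energy. Returns the surviving rotamer sets."""
    n = len(E_self)
    alive = [set(range(len(rs))) for rs in E_self]

    def pair(i, r, j, s):
        return E_pair[(i, j)][r][s] if i < j else E_pair[(j, i)][s][r]

    for i in range(n):
        for r in list(alive[i]):
            for t in alive[i]:
                if t == r:
                    continue
                # worst-case margin of r over competitor t
                margin = E_self[i][r] - E_self[i][t]
                margin += sum(
                    min(pair(i, r, j, s) - pair(i, t, j, s) for s in alive[j])
                    for j in range(n) if j != i
                )
                if margin > 0:        # t dominates r in every context
                    alive[i].discard(r)
                    break
    return alive

# two positions, two rotamers each; rotamer 1 at position 0 is dead-ending
alive = dee_pass(
    [[0.0, 5.0], [0.0, 0.0]],
    {(0, 1): [[0.0, 0.0], [0.0, 0.0]]},
)
```

    Rotamers eliminated this way provably cannot belong to the GMEC, which is why DEE, when it converges, is exact.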

  1. Transportation network with fluctuating input/output designed by the bio-inspired Physarum algorithm.

    PubMed

    Watanabe, Shin; Takamatsu, Atsuko

    2014-01-01

    In this paper, we propose designing transportation network topology and traffic distribution under fluctuating conditions using a bio-inspired algorithm. The algorithm is inspired by the adaptive behavior observed in an amoeba-like organism, plasmodial slime mold, more formally known as the plasmodium of Physarum polycephalum. This organism forms a transportation network to distribute its protoplasm, the fluidic contents of its cell, throughout its large cell body. In this process, the diameter of the transportation tubes adapts to the flux of the protoplasm. The Physarum algorithm, which mimics this adaptive behavior, has been widely applied to complex problems, such as maze solving and designing the topology of railroad grids, under static conditions. However, in most situations, environmental conditions fluctuate; for example, in power grids, the consumption of electric power shows daily, weekly, and annual periodicity depending on the lifestyles or the business needs of the individual consumers. This paper studies the design of network topology and traffic distribution with oscillatory input and output traffic flows. The network topology proposed by the Physarum algorithm is controlled by a parameter of the adaptation process of the tubes. We observe various rich topologies such as complete mesh, partial mesh, Y-shaped, and V-shaped networks depending on this adaptation parameter and evaluate them on the basis of three performance functions: loss, cost, and vulnerability. Our results indicate that consideration of the oscillatory conditions and the phase-lags in the multiple outputs of the network is important: The building and/or maintenance cost of the network can be reduced by introducing the oscillating condition, and when the phase-lag among the outputs is large, the transportation loss can also be reduced. We use stability analysis to reveal how the system exhibits various topologies depending on the parameter. PMID:24586616
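
    The core of the Physarum algorithm is the tube-adaptation rule dD/dt = f(|Q|) - a*D: a tube's conductivity grows with the flux it carries and decays otherwise. Even a two-tube sketch under a static load reproduces the characteristic winner-take-all behaviour (the parameters and the linear choice of f are illustrative; the paper varies the adaptation parameter and drives the terminals with oscillating flows):

```python
def simulate_physarum(lengths, total_flow=1.0, decay=1.0, dt=0.01, steps=5000):
    """Two parallel tubes between a source and a sink carrying a fixed
    total flow. Each conductivity adapts as dD/dt = |Q| - decay * D,
    so the tube along the shorter path captures all the flow."""
    D = [1.0, 1.0]
    for _ in range(steps):
        g = [D[i] / lengths[i] for i in range(2)]   # tube conductances
        dp = total_flow / (g[0] + g[1])             # pressure drop
        Q = [gi * dp for gi in g]                   # flux through each tube
        D = [D[i] + dt * (abs(Q[i]) - decay * D[i]) for i in range(2)]
    return D

D = simulate_physarum(lengths=[1.0, 2.0])   # tube 0 is the shorter path
```

    At steady state the shorter tube carries the whole flow and the longer tube's conductivity has decayed to nearly zero, the maze-solving behaviour of the model.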

  2. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, toll, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced reference image and video quality solutions to detect and classify such faults.
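
    The ordering concern can be made concrete with a small dispatch sketch: run the checks whose faults can mask or mimic others first (a failed download before obstruction, obstruction before focus), so a single root cause is reported per frame. The statistics, keys, and thresholds below are hypothetical placeholders, not the paper's actual metrics:

```python
def diagnose(stats):
    """Ordered fault checks for one captured frame. Checks whose
    faults can mimic others run first, so one root cause is reported."""
    if stats["bytes"] == 0:
        return "download_error"          # nothing was transferred
    if not 10 <= stats["mean_luma"] <= 245:
        return "exposure_issue"          # grossly under/over-exposed
    if stats["edge_density"] < 0.01:
        return "obstruction"             # near-featureless scene
    if stats["sharpness"] < 0.2:
        return "focus_drift"
    return "ok"
```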

  3. Design and Evaluation of a Dynamic Programming Flight Routing Algorithm Using the Convective Weather Avoidance Model

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Grabbe, Shon; Mukherjee, Avijit

    2010-01-01

    The optimization of traffic flows in congested airspace with varying convective weather is a challenging problem. One approach is to generate shortest routes between origins and destinations while meeting airspace capacity constraints in the presence of uncertainties, such as weather and airspace demand. This study focuses on the development of an optimal flight path search algorithm that optimizes national airspace system throughput and efficiency in the presence of uncertainties. The algorithm is based on dynamic programming and utilizes the predicted probability that an aircraft will deviate around convective weather. It is shown that the running time of the algorithm increases linearly with the total number of links between all stages. The optimal routes minimize a combination of fuel cost and expected cost of route deviation due to convective weather. They are considered as alternatives to the set of coded departure routes which are predefined by the FAA to reroute pre-departure flights around weather or air traffic constraints. A formula, which calculates the predicted probability of deviation from a given flight path, is also derived. The predicted probability of deviation is calculated for all path candidates. Routes with the best (lowest) predicted probability of deviation are selected as optimal. The predicted probability of deviation serves as a computable measure of reliability in pre-departure rerouting. The algorithm can also be extended to automatically adjust its design parameters to satisfy the desired level of reliability.
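
    The stage-by-stage structure can be sketched as a trellis DP whose link cost adds fuel burn to the expected cost of deviating around weather (deviation probability times a penalty); running time is linear in the number of links, as stated. The waypoints, costs, and penalty below are invented for illustration:

```python
def best_route(stages, link_cost):
    """Shortest path through a trellis of waypoint stages.
    Node labels must be unique; the work done is one link_cost
    evaluation per link between consecutive stages."""
    cost = {n: 0.0 for n in stages[0]}
    back = {}
    for s in range(1, len(stages)):
        new_cost = {}
        for v in stages[s]:
            best_u = min(stages[s - 1], key=lambda u: cost[u] + link_cost(u, v))
            new_cost[v] = cost[best_u] + link_cost(best_u, v)
            back[v] = best_u
        cost = new_cost
    end = min(stages[-1], key=cost.get)
    route = [end]
    while route[-1] in back:
        route.append(back[route[-1]])
    return cost[end], route[::-1]

# invented example: fuel cost plus expected deviation cost per link
fuel = {("ORIG", "A"): 10, ("ORIG", "B"): 12, ("A", "DEST"): 10, ("B", "DEST"): 10}
p_dev = {("ORIG", "A"): 0.5}          # route via A crosses convective weather
PENALTY = 20                          # expected extra cost if deviated

def link_cost(u, v):
    return fuel[(u, v)] + p_dev.get((u, v), 0.0) * PENALTY

c, route = best_route([["ORIG"], ["A", "B"], ["DEST"]], link_cost)
```

    The cheaper nominal route via A loses once its expected deviation cost is added, so the DP picks the weather-avoiding route via B.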

  4. Algorithm architecture co-design for ultra low-power image sensor

    NASA Astrophysics Data System (ADS)

    Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.

    2012-03-01

    In a context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence, but also with a very low power consumption. With a steady camera, motion detection algorithms based on background estimation to find regions in movement are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects to be detected, we propose an original mixed-mode architecture developed through an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized in order to implement motion detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive has made it possible to implement the algorithms in a very compact way. Thereby, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed, with a power estimation of 1.8 mW.
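
    Delta modulation as a calculation primitive means the background estimate moves by one fixed step toward each new frame, which needs only a comparison and an increment per macropixel. A scalar sketch over a row of macropixels (step size and motion threshold are illustrative assumptions):

```python
def update_background(bg, frame, step=1):
    """Delta-modulation update: each background macropixel moves one
    fixed step toward the new frame, a comparison plus an increment
    per pixel (a good fit for an in-sensor SIMD datapath)."""
    return [b + step if f > b else b - step if f < b else b
            for b, f in zip(bg, frame)]

def motion_mask(bg, frame, thresh=10):
    """Flag macropixels that differ from the slowly adapting background."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100, 100]        # current background estimate (2 macropixels)
frame = [100, 160]     # new frame: second macropixel changed abruptly
bg = update_background(bg, frame)
mask = motion_mask(bg, frame)
```

    Because the background adapts by only one step per frame, an abrupt change stays flagged as motion for many frames before the estimate catches up.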

  5. Design and simulation of imaging algorithm for Fresnel telescopy imaging system

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-yu; Liu, Li-ren; Yan, Ai-min; Sun, Jian-feng; Dai, En-wen; Li, Bing

    2011-06-01

    Fresnel telescopy (short for Fresnel telescopy full-aperture synthesized imaging ladar) is a new high-resolution active laser imaging technique. This technique is a variant of Fourier telescopy and optical scanning holography, which uses Fresnel zone plates to scan the target. Compared with synthetic aperture imaging ladar (SAIL), Fresnel telescopy avoids the problems of time synchronization and space synchronization, which decreases the technical difficulty. In the one-dimensional (1D) scanning operational mode for a moving target, after time-to-space transformation, the spatial distribution of the sampling data is non-uniform because of the relative motion between the target and the scanning beam. However, as we use the fast Fourier transform (FFT) in the subsequent matched-filtering imaging algorithm, the distribution of the data should be regular and uniform. We use resampling interpolation to transform the data into a two-dimensional (2D) uniform distribution, and the accuracy of the resampling interpolation process mainly affects the reconstruction results. Imaging algorithms with different resampling interpolation algorithms have been analyzed, and computer simulations are also given. We obtain good reconstruction results for the target, which proves that the designed imaging algorithm for the Fresnel telescopy imaging system is effective. This work is found to have substantial practical value and offers significant benefit for high-resolution Fresnel telescopy laser imaging ladar systems.
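
    The resampling step can be illustrated in 1D: scan samples arrive on a non-uniform spatial grid and are interpolated onto a uniform grid so the matched filter can use the FFT. Real systems would use higher-order interpolation kernels; `numpy.interp` (linear) is just the simplest sketch:

```python
import numpy as np

def resample_uniform(x_nonuniform, values, n):
    """Interpolate samples taken at non-uniform positions onto a
    uniform n-point grid spanning the same interval."""
    x_uniform = np.linspace(x_nonuniform[0], x_nonuniform[-1], n)
    return x_uniform, np.interp(x_uniform, x_nonuniform, values)

# samples of f(x) = x taken at irregular scan positions
grid, vals = resample_uniform([0.0, 1.0, 3.0], [0.0, 1.0, 3.0], n=4)
```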

  6. The design and hardware implementation of a low-power real-time seizure detection algorithm

    NASA Astrophysics Data System (ADS)

    Raghunathan, Shriram; Gupta, Sumeet K.; Ward, Matthew P.; Worth, Robert M.; Roy, Kaushik; Irazoqui, Pedro P.

    2009-10-01

    Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 ± 0.02% and 88.9 ± 0.01% (mean ± SEα = 0.05), respectively, on untrained data with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage from simulations on the MIT 180 nm SOI process.
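
    An event-based detector in this spirit can be reduced to counting suprathreshold excursions of the field potential in a sliding window and firing when the count exceeds a calibrated limit. The amplitude and count thresholds below are placeholders for the per-subject values such an algorithm would derive from recordings:

```python
def detect_seizure(samples, amp_thresh, count_thresh, window):
    """Event-based detection sketch: count suprathreshold excursions in
    a sliding window; return the sample index at which detection fires,
    or None if it never does."""
    events = [abs(s) > amp_thresh for s in samples]
    for start in range(len(samples) - window + 1):
        if sum(events[start:start + window]) >= count_thresh:
            return start
    return None

# synthetic trace: quiet baseline, then a high-amplitude burst
burst = [0.0] * 100 + [5.0, -5.0] * 20 + [0.0] * 50
onset = detect_seizure(burst, amp_thresh=1.0, count_thresh=10, window=20)
```

    A hardware version replaces the window sum with a running counter, which is what keeps the power budget in the nanowatt range.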

  7. Design of aspherical surfaces for panoramic imagers using multi-populations genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Li-Ping; Liang, Zhong-zhu; Jin, Chun-Shui

    2009-05-01

    A design method for aspherical surfaces in a panoramic imaging system with two mirrors, using a multi-population genetic algorithm (MPGA), is proposed. Astigmatism induced by the mirrors may significantly compromise image resolution. To solve this problem, we derived an algebraic expression for astigmatism in the panoramic imager based on the generalized Coddington equation and the theory of geometric optics. Then, we propose an optimization process for mirror profile design to eliminate astigmatism and provide a purposely-designed projection formula with the aid of the MPGA. A series of polynomial expressions for the aspherical surfaces are obtained and the procedures of the design are presented. In order to facilitate ray tracing and aberration calculation, an even-asphere surface model is obtained by using a hybrid scheme combining the MPGA and damped least squares. Finally, a prototype of the catadioptric panoramic imager has been developed and a panoramic ring image is obtained.

  8. An algorithm to design finite field multipliers using a self-dual normal basis

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1987-01-01

    Finite field multiplication is central to the implementation of some error-correcting coders. Massey and Omura have presented a revolutionary design for multiplication in a finite field. In their design, a normal basis is utilized to represent the elements of the field. The concept of using a self-dual normal basis to design the Massey-Omura finite field multiplier is presented. Presented first is an algorithm to locate a self-dual normal basis for GF(2 sup m) for odd m. Then a method to construct the product function for designing the Massey-Omura multiplier is developed. It is shown that the construction of the product function based on a self-dual basis is simpler than that based on an arbitrary normal basis.

  9. Genetic algorithm based design optimization of a permanent magnet brushless dc motor

    NASA Astrophysics Data System (ADS)

    Upadhyay, P. R.; Rajagopal, K. R.

    2005-05-01

    Genetic algorithm (GA) based design optimization of a permanent magnet brushless dc motor is presented in this paper. A 70 W, 350 rpm ceiling fan motor with a radial-field configuration is designed by considering the efficiency as the objective function. Temperature rise and motor weight are the constraints, and the slot electric loading, magnet fraction, slot fraction, airgap, and airgap flux density are the design variables. The efficiency and the phase inductance of the motor designed using the developed CAD program are improved by using the GA based optimization technique: from 84.75% and 5.55 mH to 86.06% and 2.4 mH, respectively.

  10. Valuing the Child Health Utility 9D: Using profile case best worst scaling methods to develop a new adolescent specific scoring algorithm.

    PubMed

    Ratcliffe, Julie; Huynh, Elisabeth; Chen, Gang; Stevens, Katherine; Swait, Joffre; Brazier, John; Sawyer, Michael; Roberts, Rachel; Flynn, Terry

    2016-05-01

    In contrast to the recent proliferation of studies incorporating ordinal methods to generate health state values from adults, to date relatively few studies have utilised ordinal methods to generate health state values from adolescents. This paper reports upon a study to apply profile case best worst scaling methods to derive a new adolescent-specific scoring algorithm for the Child Health Utility 9D (CHU9D), a generic preference-based instrument that has been specifically designed for the estimation of quality adjusted life years for the economic evaluation of health care treatment and preventive programs targeted at young people. A survey was developed for administration in an on-line format in which consenting community-based Australian adolescents aged 11-17 years (N = 1982) indicated the best and worst features of a series of 10 health states derived from the CHU9D descriptive system. The data were analyzed using latent class conditional logit models to estimate values (part-worth utilities) for each level of the nine attributes relating to the CHU9D. A marginal utility matrix was then estimated to generate an adolescent-specific scoring algorithm on the full health = 1 and dead = 0 scale required for the calculation of QALYs. It was evident that different decision processes were being used in the best and worst choices. Whilst respondents appeared readily able to choose 'best' attribute levels for the CHU9D health states, a large amount of random variability and indeed different decision rules were evident for the choice of 'worst' attribute levels, to the extent that the best and worst data should not be pooled from the statistical perspective. The optimal adolescent-specific scoring algorithm was therefore derived using data obtained from the best choices only. The study provides important insights into the use of profile case best worst scaling methods to generate health state values with adolescent populations. PMID:27060541
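
    The counting analogue of best-worst scaling gives the intuition behind the part-worths: for each attribute level, (best picks - worst picks) normalised by appearances approximates its relative utility. A tiny sketch with invented labels (the study itself fits latent-class conditional logit models; counting analysis is the standard quick check):

```python
from collections import Counter

def best_worst_scores(tasks):
    """tasks: (best, worst, shown) per choice task, where `shown` lists
    the attribute levels presented. Returns best-minus-worst counts
    normalised by appearances, a quick proxy for relative utility."""
    best = Counter(b for b, _, _ in tasks)
    worst = Counter(w for _, w, _ in tasks)
    appear = Counter(x for _, _, shown in tasks for x in shown)
    return {x: (best[x] - worst[x]) / appear[x] for x in appear}

tasks = [
    ("no pain", "very sad", ("no pain", "a bit worried", "very sad")),
    ("no pain", "a bit worried", ("no pain", "a bit worried", "very sad")),
    ("a bit worried", "very sad", ("no pain", "a bit worried", "very sad")),
]
scores = best_worst_scores(tasks)
```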

  11. An Optimal Design Methodology of Tapered Roller Bearings Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Tiwari, Rajiv; Sunil, Kumar K.; Reddy, R. S.

    2012-03-01

    In the design of tapered roller bearings, long life is one of the most important criteria. The design of bearings has to satisfy constraints of geometry and strength, while operating at its rated speed. An optimal design methodology is needed to achieve this objective (i.e., the maximization of the fatigue life). The fatigue life is directly proportional to the dynamic capacity; hence, for the present case, the latter has been chosen as the objective function. It has been optimized by using a constrained nonlinear formulation with real-coded genetic algorithms. Design variables for the bearing include four geometrical parameters: the bearing pitch diameter, the diameter of the roller, the effective length of the roller, and the number of rollers. These directly affect the dynamic capacity of tapered roller bearings. In addition to these, another five design constraint constants are included, which indirectly affect the basic dynamic capacity of tapered roller bearings. The five design constraint constants have been given bounds based on parametric studies through initial optimization runs. There is good agreement between the optimized and standard bearings with respect to the basic dynamic capacity. A convergence study has been carried out to ensure the global optimum point in the design. A sensitivity analysis of various design parameters, using the Monte Carlo simulation method, has been performed to see changes in the dynamic capacity. Illustrations show that none of the geometric design parameters has an adverse effect on the dynamic capacity.

  12. Design specifications for manufacturability of MCM-C multichip modules

    SciTech Connect

    Blazek, R.; Desch, J.; Kautz, D.; Morgenstern, H.

    1996-10-01

    A comprehensive guide for ceramic-based multichip modules (MCMs) has been developed by AlliedSignal Federal Manufacturing & Technologies (FM&T) to provide manufacturability information for its customers about how MCM designs can be affected by existing process and equipment capabilities. This guide extends beyond a listing of design rules by providing information about design layout, low-temperature cofired ceramic (LTCC) substrate fabrication, MCM assembly, and electrical testing. Electrical, mechanical packaging, environmental, and producibility issues are reviewed. Examples of three MCM designs are shown in the form of packaging cross-sectional views, LTCC substrate layer allocations, and overall MCM photographs. The guide has proven to be an effective tool for enhancing communications between MCM designers and manufacturers and producing a microcircuit that meets design requirements within the limitations of process capabilities.

  13. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings existing in the WTM model are discussed, and a tearing approach as well as an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonableness and effectiveness. PMID:25431584

  14. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings existing in the WTM model are discussed, and a tearing approach as well as an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonableness and effectiveness. PMID:25431584

  15. A High Precision Position Sensor Design and Its Signal Processing Algorithm for a Maglev Train

    PubMed Central

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run. PMID:22778582
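
    The behaviour of a tracking differentiator can be shown with a linear second-order variant: one state tracks the input, the other settles at a smoothed estimate of its derivative, and the speed factor r trades tracking lag against noise rejection. The sensor described here uses a nonlinear discrete-time TD derived from optimal control; this linear sketch only illustrates the two states' roles:

```python
def tracking_differentiator(signal, h=0.001, r=100.0):
    """Linear second-order tracking differentiator: x1 tracks the
    input v and x2 converges to a smoothed estimate of its derivative."""
    x1, x2 = signal[0], 0.0
    states = []
    for v in signal:
        x1, x2 = (x1 + h * x2,
                  x2 + h * (-r * r * (x1 - v) - 2.0 * r * x2))
        states.append((x1, x2))
    return states

# ramp input v(t) = 5t: the derivative estimate should settle near 5
sig = [5.0 * k * 0.001 for k in range(4000)]
est = tracking_differentiator(sig)
```

    For a ramp, x2 converges to the true slope while x1 tracks with a constant lag of about 2/r times the slope, the delay constant that a compensation step would correct.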

  16. A high precision position sensor design and its signal processing algorithm for a maglev train.

    PubMed

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run. PMID:22778582

  17. A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks

    PubMed Central

    Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza

    2015-01-01

    Local access networks (LAN) are commonly used as communication infrastructures which meet the demand of a set of users in the local environment. Usually these networks consist of several LAN segments connected by bridges. The bi-level topological LAN design problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a minimum response time network with minimum connection cost. Therefore, the decision of optimally assigning users to clusters is made by the leader, and the follower makes the decision of connecting all the clusters while forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a local access network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower’s problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach. PMID:26102502

  18. Electro-optic Q-switch driver design specifics

    NASA Astrophysics Data System (ADS)

    Melnikov, Konstantin

    2010-07-01

    Different schematic designs of Q-Switch Drivers for Pockels Cell based optical arrangement are considered. Schematic solutions of Q-Switch driver design are analyzed. Marx Bank based Generator and High Voltage Switch Schematics are compared. Parameters of constructed Q-Switch Drivers are presented.

  20. A requirements specification for a software design support system

    NASA Technical Reports Server (NTRS)

    Noonan, Robert E.

    1988-01-01

    Most existing software design systems (SDSS) support the use of only a single design methodology. A good SDSS should support a wide variety of design methods and languages, including structured design, object-oriented design, and finite state machines. It might seem that a multiparadigm SDSS would be expensive in both time and money to construct. However, it is proposed that instead an extensible SDSS be constructed that directly implements only minimal database and graphical facilities. In particular, it should not directly implement tools to facilitate language definition and analysis. It is believed that such a system could be rapidly developed and put into limited production use, with the experience gained used to refine and evolve the system over time.

  1. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  2. Binary particle swarm optimization algorithm assisted to design of plasmonic nanospheres sensor

    NASA Astrophysics Data System (ADS)

    Kaboli, Milad; Akhlaghi, Majid; Shahmirzaee, Hossein

    2016-04-01

    In this study, a coherent perfect absorption (CPA)-type sensor based on plasmonic nanoparticles is proposed. It consists of an array of plasmonic nanospheres on top of a quartz substrate. Refractive index changes above the sensor surface, due to the appearance of a gas or the absorption of biomolecules, can be detected by measuring the resulting spectral shifts of the absorption coefficient. Since the CPA efficiency depends strongly on the number and locations of the plasmonic nanoparticles, a binary particle swarm optimization (BPSO) algorithm is used to design an optimized array of plasmonic nanospheres. The optimized structure should maximize the absorption coefficient at a single frequency. In the BPSO algorithm, a swarm of candidate solutions, each encoded as a matrix with binary entries, controls the nanospheres in the array: a '1' denotes the presence of a nanosphere and a '0' its absence. The sensor can be used for sensing both gases and low refractive index materials in an aqueous environment.
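
A generic binary PSO with the usual sigmoid transfer function can be sketched as follows; a toy one-max objective stands in for the electromagnetic absorption simulation, which cannot be reproduced here (all coefficients are conventional defaults, not the paper's settings):

```python
import numpy as np

def bpso(fitness, n_bits, n_particles=30, n_iter=200, seed=1):
    """Binary PSO: each particle is a 0/1 vector; velocities are mapped
    to bit probabilities through a sigmoid transfer function."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest_f.argmax()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    w, c1, c2, v_max = 0.7, 1.5, 1.5, 6.0
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_bits))
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    -v_max, v_max)
        # Sample new bit vectors from the sigmoid of the velocities.
        x = (rng.random((n_particles, n_bits)) < 1 / (1 + np.exp(-v))).astype(int)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if f.max() > gbest_f:
            gbest, gbest_f = x[f.argmax()].copy(), f.max()
    return gbest, gbest_f

# Toy objective: maximize the number of '1' entries (a stand-in for the
# absorption coefficient returned by an electromagnetic solver).
best, best_f = bpso(lambda p: int(p.sum()), n_bits=10)
```

In the sensor-design setting, the fitness call would run a field simulation of the nanosphere array encoded by the bit matrix and return the absorption coefficient at the target frequency.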

  3. Low PMEPR OFDM Radar Waveform Design Using the Iterative Least Squares Algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Tianyao; Zhao, Tong

    2015-11-01

    This letter considers waveform design of orthogonal frequency division multiplexing (OFDM) signal for radar applications, and aims at mitigating the envelope fluctuation in OFDM. A novel method is proposed to reduce the peak-to-mean envelope power ratio (PMEPR), which is commonly used to evaluate the fluctuation. The proposed method is based on the tone reservation approach, in which some bits or subcarriers of OFDM are allocated for decreasing PMEPR. We introduce the coefficient of variation of envelopes (CVE) as the cost function for waveform optimization, and develop an iterative least squares algorithm. Minimizing CVE leads to distinct PMEPR reduction, and it is guaranteed that the cost function monotonically decreases by applying the iterative algorithm. Simulations demonstrate that the envelope is significantly smoothed by the proposed method.
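
PMEPR itself is straightforward to compute from an oversampled OFDM envelope; a minimal sketch (the tone-reservation optimization and the letter's exact CVE definition are not reproduced; the coefficient-of-variation below is the standard one, used here as an assumed surrogate):

```python
import numpy as np

def ofdm_envelope(symbols, oversample=4):
    """Oversampled complex baseband OFDM envelope via a zero-padded
    IFFT (simple zero-padding; no spectral centering)."""
    n = len(symbols)
    spec = np.zeros(n * oversample, dtype=complex)
    spec[:n] = symbols
    return np.fft.ifft(spec)

def pmepr(envelope):
    """Peak-to-mean envelope power ratio."""
    p = np.abs(envelope) ** 2
    return p.max() / p.mean()

def cve(envelope):
    """Coefficient of variation of the envelope magnitude (a smooth
    surrogate for PMEPR, in the spirit of the letter's cost function)."""
    a = np.abs(envelope)
    return a.std() / a.mean()

# Worst case: identical symbols add coherently, so PMEPR equals the
# subcarrier count N (here N = 8).
worst = pmepr(ofdm_envelope(np.ones(8)))
```

A waveform optimizer would adjust the reserved subcarriers to drive `cve` down, which in turn reduces `pmepr`.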

  4. 78 FR 28258 - mPower\\TM\\ Design-Specific Review Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ... COMMISSION mPower\\TM\\ Design-Specific Review Standard AGENCY: Nuclear Regulatory Commission. ACTION: Design-Specific Review Standard (DSRS) for the mPower\\TM\\ Design; request for comment. SUMMARY: The U.S. Nuclear... the mPower\\TM\\ design (mPower\\TM\\ DSRS). The purpose of the mPower\\TM\\ DSRS is to more fully...

  5. 78 FR 52804 - mPower\\TM\\ Design-Specific Review Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-26

    ... COMMISSION mPower\\TM\\ Design-Specific Review Standard AGENCY: Nuclear Regulatory Commission. ACTION: Design-Specific Review Standard (DSRS) for the mPower\\TM\\ Design; re-opening of comment period. SUMMARY: On May 14... for the mPower\\TM\\ design (mPower\\TM\\ DSRS). The purpose of the mPower\\TM\\ DSRS is to more...

  6. NASA software specification and evaluation system design, part 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The research to develop methods for reducing the effort expended in software development and verification is reported. The development of a formal software requirements methodology, a formal specifications language, a programming language, a language preprocessor, and code analysis tools is discussed.

  7. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... cause formation of static electricity. (e) A monitor must be designed to operate in each plane that forms an angle of 22.5° with the plane of its normal operating position. (f) Each monitor must...

  8. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    PubMed

    Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart

    2016-01-01

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. The
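
The locus-averaged Shannon entropy criterion is easy to illustrate; the sketch below uses hypothetical positions and allele frequencies and a simple one-best-SNP-per-window rule, not the full MOLO objective:

```python
import numpy as np

def locus_entropy(p):
    """Shannon entropy (bits) of a biallelic SNP with allele frequency p;
    maximal (1 bit) at p = 0.5, i.e. the most informative locus."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def select_snps(pos, freq, n_select):
    """Pick the highest-entropy SNP inside each of n_select uniform
    windows, approximating 'evenly spaced and highly informative'."""
    ent = locus_entropy(freq)
    edges = np.linspace(pos.min(), pos.max(), n_select + 1)
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.flatnonzero((pos >= lo) & (pos <= hi))
        if idx.size:
            chosen.append(idx[ent[idx].argmax()])
    return np.array(chosen)

# Hypothetical chromosome: 100 SNP positions with random frequencies.
rng = np.random.default_rng(3)
pos = np.arange(100.0)
freq = rng.uniform(0.05, 0.5, size=100)
chosen = select_snps(pos, freq, n_select=5)
```

The MOLO algorithm additionally adjusts for distribution uniformity, supports haplotype-averaged entropy, and handles obligatory SNPs per chromosome, none of which this toy sketch attempts.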

  9. Computational thermodynamics, Gaussian processes and genetic algorithms: combined tools to design new alloys

    NASA Astrophysics Data System (ADS)

    Tancret, F.

    2013-06-01

    A new alloy design procedure is proposed, combining in a single computational tool several modelling and predictive techniques that have already been used and assessed in the field of materials science and alloy design: a genetic algorithm is used to optimize the alloy composition for target properties and performance on the basis of the prediction of mechanical properties (estimated by Gaussian process regression of data on existing alloys) and of microstructural constitution, stability and processability (evaluated by computational thermodynamics). These tools are integrated in a single Matlab programme. An example is given for the design of a new nickel-base superalloy for future power plant applications (such as the ultra-supercritical (USC) coal-fired plant or the high-temperature gas-cooled nuclear reactor (HTGCR or HTGR)), where the selection criteria include cost, oxidation and creep resistance around 750 °C, long-term stability at service temperature, forgeability, weldability, etc.

  10. New Detection Systems of Bacteria Using Highly Selective Media Designed by SMART: Selective Medium-Design Algorithm Restricted by Two Constraints

    PubMed Central

    Kawanishi, Takeshi; Shiraishi, Takuya; Okano, Yukari; Sugawara, Kyoko; Hashimoto, Masayoshi; Maejima, Kensaku; Komatsu, Ken; Kakizawa, Shigeyuki; Yamaji, Yasuyuki; Hamamoto, Hiroshi; Oshima, Kenro; Namba, Shigetou

    2011-01-01

    Culturing is an indispensable technique in microbiological research, and culturing with selective media has played a crucial role in the detection of pathogenic microorganisms and the isolation of commercially useful microorganisms from environmental samples. Although numerous selective media have been developed in empirical studies, unintended microorganisms often grow on such media probably due to the enormous numbers of microorganisms in the environment. Here, we present a novel strategy for designing highly selective media based on two selective agents, a carbon source and antimicrobials. We named our strategy SMART for highly Selective Medium-design Algorithm Restricted by Two constraints. To test whether the SMART method is applicable to a wide range of microorganisms, we developed selective media for Burkholderia glumae, Acidovorax avenae, Pectobacterium carotovorum, Ralstonia solanacearum, and Xanthomonas campestris. The series of media developed by SMART specifically allowed growth of the targeted bacteria. Because these selective media exhibited high specificity for growth of the target bacteria compared to established selective media, we applied three notable detection technologies: paper-based, flow cytometry-based, and color change-based detection systems for target bacteria species. SMART facilitates not only the development of novel techniques for detecting specific bacteria, but also our understanding of the ecology and epidemiology of the targeted bacteria. PMID:21304596

  11. Hydraulic design of a low-specific speed Francis runner for a hydraulic cooling tower

    NASA Astrophysics Data System (ADS)

    Ruan, H.; Luo, X. Q.; Liao, W. L.; Zhao, Y. P.

    2012-11-01

    The air blower in a cooling tower is normally driven by an electric motor, and the electric energy consumed by the motor is tremendous. The energy remaining at the outlet of the cooling cycle is considerable and can be used to drive a hydraulic turbine and consequently to rotate the air blower. The purpose of this project is to recycle energy, lower energy consumption and reduce pollutant discharge. First, a second-order polynomial is proposed to describe the blade setting angle distribution law along the meridional streamline in the streamline equation. The runner is designed by the point-to-point integration method with a specific blade setting angle distribution. Three ultra-low-specific-speed Francis runners with different wrap angles are obtained by this method. Second, based on CFD numerical simulations, the effects of the blade setting angle distribution on the pressure coefficient distribution and relative efficiency are analyzed. Finally, taking the blade inlet and outlet angles and the control coefficients of the blade setting angle distribution law as design variables, and efficiency and minimum pressure as objective functions, a multi-objective optimization of the ultra-low-specific-speed Francis runner is carried out using the NSGA-II algorithm. The results show that the optimal runner has higher efficiency and better cavitation performance.

  12. Reprogramming homing endonuclease specificity through computational design and directed evolution.

    PubMed

    Thyme, Summer B; Boissel, Sandrine J S; Arshiya Quadri, S; Nolan, Tony; Baker, Dean A; Park, Rachel U; Kusak, Lara; Ashworth, Justin; Baker, David

    2014-02-01

    Homing endonucleases (HEs) can be used to induce targeted genome modification to reduce the fitness of pathogen vectors such as the malaria-transmitting Anopheles gambiae and to correct deleterious mutations in genetic diseases. We describe the creation of an extensive set of HE variants with novel DNA cleavage specificities using an integrated experimental and computational approach. Using computational modeling and an improved selection strategy, which optimizes specificity in addition to activity, we engineered an endonuclease to cleave in a gene associated with Anopheles sterility and another to cleave near a mutation that causes pyruvate kinase deficiency. In the course of this work we observed unanticipated context-dependence between bases which will need to be mechanistically understood for reprogramming of specificity to succeed more generally. PMID:24270794

  13. Adaptive filter design based on the LMS algorithm for delay elimination in TCR/FC compensators.

    PubMed

    Hooshmand, Rahmat Allah; Torabian Esfahani, Mahdi

    2011-04-01

    Thyristor controlled reactor with fixed capacitor (TCR/FC) compensators can compensate reactive power and improve power quality phenomena. Delay in the response of such compensators degrades their performance. In this paper, a new method based on adaptive filters (AF) is proposed in order to eliminate the delay and speed up the response of the TCR compensator. The adaptive filters are designed using the least mean square (LMS) algorithm. In this design, band-pass LC filters are used instead of fixed capacitors. To evaluate the filter, a TCR/FC compensator was used for the nonlinear, time-varying loads of electric arc furnaces (EAFs). These loads cause power quality phenomena in the supplying system, such as voltage fluctuation and flicker, odd and even harmonics, and unbalance in voltage and current. The design was implemented in a realistic system model of a steel complex. The simulation results show that applying the proposed control in the TCR/FC compensator efficiently eliminates the delay in the response and improves the performance of the compensator in the power system. PMID:21193194
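
The LMS update at the heart of such an adaptive filter is compact; a generic system-identification sketch (not the TCR/FC controller itself; the plant coefficients, step size and tap count below are arbitrary illustrative choices):

```python
import numpy as np

def lms_filter(x, d, n_taps, mu):
    """Adaptive FIR filter trained with the least-mean-square rule
    w <- w + 2*mu*e[k]*u[k], where e[k] is the instantaneous error
    between the desired signal d and the filter output."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # [x[k], x[k-1], ...]
        e[k] = d[k] - w @ u                 # error vs. desired signal
        w += 2 * mu * e[k] * u              # stochastic-gradient update
    return w, e

# Identify an unknown 2-tap plant h = [0.5, -0.3] from white-noise input.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = 0.5 * x + np.concatenate(([0.0], -0.3 * x[:-1]))
w, e = lms_filter(x, d, n_taps=2, mu=0.05)
```

After convergence the tap weights approach the plant coefficients and the error signal decays toward zero, which is the mechanism the compensator exploits to cancel its response delay.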

  14. Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the storage tank to the external tank is one of the most important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components: the storage tank filled with cryogenic fluid at a very low temperature, the long pipe line connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters useful for design purposes are the predicted pre-chill time, loading time, amount of fuel lost, maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is a phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is also tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of the numerical modeling towards the design of such a system. The students first have to become familiar with the physical process, the related mathematics and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.

  15. Formal design specification of a Processor Interface Unit

    NASA Technical Reports Server (NTRS)

    Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.

    1992-01-01

    This report describes work to formally specify the requirements and design of a processor interface unit (PIU), a single-chip subsystem providing memory-interface, bus-interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. The need for high-quality design assurance in such applications is an undisputed fact, given the disastrous consequences that even a single design flaw can produce. Thus, the further development and application of formal methods to fault-tolerant systems is of critical importance as these systems see increasing use in modern society.

  16. Reliability Optimization Design for Contact Springs of AC Contactors Based on Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen

    The paper illustrates the procedure of reliability optimization modeling for contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. A method that changes crossover and mutation rates at different times in the AGA can effectively avoid premature convergence, and experimental tests are performed after optimization. The experimental result shows that the mass of each optimized spring is reduced by 16.2%, while the reliability increases to 99.9% from 94.5%. The experimental result verifies the correctness and feasibility of this reliability optimization designing method.
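
One classic way to vary the rates over the run is the Srinivas-Patnaik adaptive scheme, in which crossover and mutation probabilities depend on how close an individual's fitness is to the population best; the paper's own schedule may differ, and the gains below are conventional placeholders:

```python
def adaptive_rates(f, f_avg, f_max, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Srinivas-Patnaik style adaptive GA rates: individuals near the
    best fitness f_max are disrupted little, preserving good solutions,
    while below-average individuals keep high crossover/mutation rates
    to counter premature convergence. Returns (p_crossover, p_mutation)."""
    if f_max == f_avg:                 # converged population: force exploration
        return k3, k4
    pc = k1 * (f_max - f) / (f_max - f_avg) if f >= f_avg else k3
    pm = k2 * (f_max - f) / (f_max - f_avg) if f >= f_avg else k4
    return pc, pm
```

A GA main loop would call this per individual (or per parent pair) each generation instead of using fixed rates.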

  17. Binary TLBO algorithm assisted for designing plasmonic nano bi-pyramids-based absorption coefficient

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Majid; Emami, Farzin; Nozhat, Najmeh

    2014-07-01

    A new efficient binary optimization method based on the Teaching-Learning-Based Optimization (TLBO) algorithm is proposed to design an array of plasmonic nano bi-pyramids that achieves a maximum absorption coefficient spectrum. In binary TLBO, a group of learners consisting of a matrix with binary entries controls the presence ('1') or absence ('0') of nanoparticles in the array. Simulation results show that the absorption coefficient strongly depends on the localized positions of the plasmonic nanoparticles, and that non-periodic structures give a more suitable response in terms of absorption coefficient. This approach is useful in optical applications such as solar cells and plasmonic nano antennas.

  18. Specifications for a COM Catalog Designed for Government Documents.

    ERIC Educational Resources Information Center

    Copeland, Nora S.; And Others

    Prepared in MARC format in accordance with the Ohio College Library Center (OCLC) standards, these specifications were developed at Colorado State University to catalog a group of government publications not listed in the Monthly Catalog of United States Publications. The resulting microfiche catalog produced through the OCLC Cataloging Subsystem…

  19. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established, based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  20. As-built design specification for segment map (Sgmap) program

    NASA Technical Reports Server (NTRS)

    Tompkins, M. A. (Principal Investigator)

    1981-01-01

    The segment map program (SGMAP), which is part of the CLASFYT package, is described in detail. This program is designed to output symbolic maps or numerical dumps from LANDSAT cluster/classification files or aircraft ground truth/processed ground truth files which are in 'universal' format.

  1. Requirement Specifications for a Design and Verification Unit.

    ERIC Educational Resources Information Center

    Pelton, Warren G.; And Others

    A research and development activity to introduce new and improved education and training technology into Bureau of Medicine and Surgery training is recommended. The activity, called a design and verification unit, would be administered by the Education and Training Sciences Department. Initial research and development are centered on the…

  2. Multi-Stage Hybrid Rocket Conceptual Design for Micro-Satellites Launch using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yosuke; Kitagawa, Koki; Nakamiya, Masaki; Kanazaki, Masahiro; Shimada, Toru

    The multi-objective genetic algorithm (MOGA) is applied to the multi-disciplinary conceptual design problem for a three-stage launch vehicle (LV) with a hybrid rocket engine (HRE). MOGA is an optimization tool used for multi-objective problems. The parallel coordinate plot (PCP), which is a data mining method, is employed in the post-process in MOGA for design knowledge discovery. A rocket that can deliver observing micro-satellites to the sun-synchronous orbit (SSO) is designed. It consists of an oxidizer tank containing liquid oxidizer, a combustion chamber containing solid fuel, a pressurizing tank and a nozzle. The objective functions considered in this study are to minimize the total mass of the rocket and to maximize the ratio of the payload mass to the total mass. To calculate the thrust and the engine size, the regression rate is estimated based on an empirical model for a paraffin (FT-0070) propellant. Several non-dominated solutions are obtained using MOGA, and design knowledge is discovered for the present hybrid rocket design problem using a PCP analysis. As a result, substantial knowledge on the design of an LV with an HRE is obtained for use in space transportation.

  3. The extended PP1 toolkit: designed to create specificity

    PubMed Central

    Bollen, Mathieu; Peti, Wolfgang; Ragusa, Michael J.; Beullens, Monique

    2011-01-01

    Protein Ser/Thr phosphatase-1 (PP1) catalyzes the majority of eukaryotic protein dephosphorylation reactions in a highly regulated and selective manner. Recent studies have identified an unusually diversified PP1 interactome with the properties of a regulatory toolkit. PP1-interacting proteins (PIPs) function as targeting subunits, substrates and/or inhibitors. As targeting subunits, PIPs contribute to substrate selection by bringing PP1 into the vicinity of specific substrates and by modulating substrate specificity via additional substrate docking sites or blocking substrate-binding channels. Many of the nearly 200 established mammalian PIPs are predicted to be intrinsically disordered, a property that facilitates their binding to a large surface area of PP1 via multiple docking motifs. These novel insights offer perspectives for the therapeutic targeting of PP1 by interfering with the binding of PIPs or substrates. PMID:20399103

  4. Effect of Selection of Design Parameters on the Optimization of a Horizontal Axis Wind Turbine via Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Alpman, Emre

    2014-06-01

    The effect of the choice of twist angle and chord length distributions on wind turbine blade design was investigated by performing aerodynamic optimization of a two-bladed, stall-regulated horizontal axis wind turbine. Twist angle and chord length distributions were defined using Bezier curves with 3, 5, 7 and 9 control points uniformly distributed along the span. Optimizations performed using a micro-genetic algorithm with populations of 5, 10, 15 and 20 individuals showed that the number of control points clearly affected the outcome of the process; however, the effects differed with population size. The results also showed the superiority of the micro-genetic algorithm over a standard genetic algorithm for the selected population sizes. Optimizations were also performed using a macroevolutionary algorithm, and the resulting best blade design was compared with that yielded by the micro-genetic algorithm.
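
Representing the twist or chord distribution with a Bezier curve means each candidate blade is just a short vector of control points for the optimizer to tune; a minimal De Casteljau evaluator (the control values below are illustrative, not the paper's):

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve at scalar parameter t in [0, 1] by
    De Casteljau's recurrence (repeated linear interpolation)."""
    p = np.asarray(ctrl, dtype=float)
    while len(p) > 1:
        p = (1 - t) * p[:-1] + t * p[1:]
    return p[0]

# Hypothetical twist distribution: (span fraction, twist angle in deg)
# from 5 control points; the curve interpolates the two endpoints.
ctrl = np.array([[0.0, 20.0], [0.25, 12.0], [0.5, 8.0],
                 [0.75, 5.0], [1.0, 3.0]])
root = bezier(ctrl, 0.0)   # -> [0.0, 20.0]
tip = bezier(ctrl, 1.0)    # -> [1.0, 3.0]
```

The genetic algorithm then needs only 3 to 9 ordinate values per distribution as its chromosome, rather than a twist value at every blade section.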

  5. Optimal design of viscous damper connectors for adjacent structures using genetic algorithm and Nelder-Mead algorithm

    NASA Astrophysics Data System (ADS)

    Bigdeli, Kasra; Hare, Warren; Tesfamariam, Solomon

    2012-04-01

    Passive dampers can be used to connect two adjacent structures in order to mitigate earthquake-induced pounding damage. Theoretical and experimental studies have confirmed the efficiency and applicability of various connecting devices, such as viscous dampers and MR dampers. However, few studies have employed optimization methods to find the optimal mechanical properties of the dampers, and in most of them the dampers are assumed to be uniform. In this study, we optimize the damping coefficients of viscous dampers in the general case of non-uniform coefficients. Since the derivatives of the objective function with respect to the damping coefficients are not known, a heuristic search method, the genetic algorithm, is employed. Each structure is modeled as a multi-degree-of-freedom dynamic system consisting of lumped masses, linear springs and dampers. The dynamic behavior of the structures is examined through simulations in the frequency domain, with a pseudo-excitation based on the Kanai-Tajimi spectrum as ground acceleration. The optimization results show that relaxing the uniform-damping-coefficient assumption yields a significant improvement in coupling effectiveness. To investigate the efficiency of the genetic algorithm, its solution quality and solution time are compared with those of the Nelder-Mead algorithm.
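
For a low-dimensional damping-coefficient search, the Nelder-Mead simplex method needs no gradients; a compact simplified sketch (standard reflection/expansion coefficients, inside contraction only, demonstrated on a toy convex quadratic rather than the structural model):

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=500):
    """Derivative-free Nelder-Mead minimization (simplified variant:
    reflection, expansion, inside contraction, shrink)."""
    n = len(x0)
    simplex = [np.array(x0, dtype=float)]
    for i in range(n):                        # initial right-angle simplex
        p = np.array(x0, dtype=float)
        p[i] += step
        simplex.append(p)
    fvals = [f(p) for p in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)             # best first, worst last
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:        # simplex has collapsed
            break
        c = np.mean(simplex[:-1], axis=0)     # centroid of all but worst
        xr = c + (c - simplex[-1])            # reflection
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[0]:
            xe = c + 2.0 * (c - simplex[-1])  # expansion
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        else:
            xc = c + 0.5 * (simplex[-1] - c)  # inside contraction
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                             # shrink towards the best point
                for i in range(1, n + 1):
                    simplex[i] = simplex[0] + 0.5 * (simplex[i] - simplex[0])
                    fvals[i] = f(simplex[i])
    i = int(np.argmin(fvals))
    return simplex[i], fvals[i]

# Toy stand-in for the coupled-structure response surface.
x_best, f_best = nelder_mead(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                             [0.0, 0.0])
```

In the damper study, `f` would be the frequency-domain response metric of the coupled structures as a function of the vector of damping coefficients.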

  6. Septa design for a prostate specific PET camera

    SciTech Connect

    Qi, Jinyi; Huber, Jennifer S.; Huesman, Ronald H.; Moses, William W.; Derenzo, Stephen E.; Budinger, Thomas F.

    2003-11-15

    The recent development of new prostate tracers has motivated us to build a low cost PET camera optimized to image the prostate. Coincidence imaging of positron emitters is achieved using a pair of external curved detector banks. The bottom bank is fixed below the patient bed, and the top bank moves upward for patient access and downward for maximum sensitivity. In this paper, we study the design of septa for the prostate camera using Monte Carlo simulations. The system performance is measured by the detectability of a prostate lesion. We have studied 17 septa configurations. The results show that the design of septa has a large impact on the lesion detection at a given activity concentration. Significant differences are also observed between the lesion detectability and the conventional noise equivalent count (NEC) performance, indicating that the NEC is not appropriate for the detection task.

  7. A review of design issues specific to hypersonic flight vehicles

    NASA Astrophysics Data System (ADS)

    Sziroczak, D.; Smith, H.

    2016-07-01

    This paper provides an overview of the current technical issues and challenges associated with the design of hypersonic vehicles. Two distinct classes of vehicles are reviewed; Hypersonic Transports and Space Launchers, their common features and differences are examined. After a brief historical overview, the paper takes a multi-disciplinary approach to these vehicles, discusses various design aspects, and technical challenges. Operational issues are explored, including mission profiles, current and predicted markets, in addition to environmental effects and human factors. Technological issues are also reviewed, focusing on the three major challenge areas associated with these vehicles: aerothermodynamics, propulsion, and structures. In addition, matters of reliability and maintainability are also presented. The paper also reviews the certification and flight testing of these vehicles from a global perspective. Finally the current stakeholders in the field of hypersonic flight are presented, summarizing the active programs and promising concepts.

  8. NASIS data base management system: IBM 360 TSS implementation. Volume 4: Program design specifications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design specifications for the programs and modules within the NASA Aerospace Safety Information System (NASIS) are presented. The purpose of the design specifications is to standardize the preparation of the specifications and to guide the program design. Each major functional module within the system is a separate entity for documentation purposes. The design specifications contain a description of, and specifications for, all detail processing which occurs in the module. Sub-modules, reference tables, and data sets which are common to several modules are documented separately.

  9. NASIS data base management system - IBM 360/370 OS MVT implementation. 4: Program design specifications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design specifications for the programs and modules within the NASA Aerospace Safety Information System (NASIS) are presented. The purpose of the design specifications is to standardize the preparation of the specifications and to guide the program design. Each major functional module within the system is a separate entity for documentation purposes. The design specifications contain a description of, and specifications for, all detail processing which occurs in the module. Sub-modules, reference tables, and data sets which are common to several modules are documented separately.

  10. Design of an automated algorithm for labeling cardiac blood pool in gated SPECT images of radiolabeled red blood cells

    SciTech Connect

    Hebert, T.J. |; Moore, W.H.; Dhekne, R.D.; Ford, P.V.; Wendt, J.A.; Murphy, P.H.; Ting, Y.

    1996-08-01

    The design of an automated computer algorithm for labeling the cardiac blood pool within gated 3-D reconstructions of radiolabeled red blood cells is investigated. Due to patient functional abnormalities, limited resolution, and noise, certain spatial and temporal features of the cardiac blood pool that one would anticipate finding in every study are not present in certain frames or with certain patients. The labeling of the cardiac blood pool therefore requires an algorithm that relies only upon features present in all patients. The authors investigate the design of a fully-automated region growing algorithm for this purpose.

  11. A candidate-set-free algorithm for generating D-optimal split-plot designs

    PubMed Central

    Jones, Bradley; Goos, Peter

    2007-01-01

    We introduce a new method for generating optimal split-plot designs. These designs are optimal in the sense that they are efficient for estimating the fixed effects of the statistical model that is appropriate given the split-plot design structure. One advantage of the method is that it does not require the prior specification of a candidate set. This makes the production of split-plot designs computationally feasible in situations where the candidate set is too large to be tractable. The method allows for flexible choice of the sample size and supports inclusion of both continuous and categorical factors. The model can be any linear regression model and may include arbitrary polynomial terms in the continuous factors and interaction terms of any order. We demonstrate the usefulness of this flexibility with a 100-run polypropylene experiment involving 11 factors where we found a design that is substantially more efficient than designs that are produced by using other approaches. PMID:21197132
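The candidate-set-free idea can be illustrated with a toy coordinate-exchange loop: instead of scoring a precomputed candidate list, each coordinate of a random starting design is perturbed in place and kept whenever it improves the D-criterion det(X'X). This sketch uses a plain main-effects model and ignores the split-plot error structure the paper accounts for; all names and settings are illustrative.

```python
import random

def det(m):
    # determinant via Gaussian elimination with partial pivoting
    a = [row[:] for row in m]
    n = len(a)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def d_criterion(design):
    # model matrix X: intercept column plus the factor settings
    X = [[1.0] + row for row in design]
    k = len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
           for i in range(k)]
    return det(XtX)

def coordinate_exchange(n_runs, n_factors, levels=(-1.0, 0.0, 1.0),
                        sweeps=20, seed=1):
    rng = random.Random(seed)
    design = [[rng.choice(levels) for _ in range(n_factors)]
              for _ in range(n_runs)]
    best = d_criterion(design)
    for _ in range(sweeps):
        for r in range(n_runs):
            for c in range(n_factors):
                keep = design[r][c]
                for lv in levels:       # try every level for this coordinate
                    design[r][c] = lv
                    score = d_criterion(design)
                    if score > best:
                        best, keep = score, lv
                design[r][c] = keep
    return design, best

design, score = coordinate_exchange(n_runs=8, n_factors=3)
```

With continuous factors one would perturb coordinates over a grid or by line search instead of a fixed level set, but the exchange logic is the same.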

  12. Fuzzy rule base design using tabu search algorithm for nonlinear system modeling.

    PubMed

    Bagis, Aytekin

    2008-01-01

    This paper presents an approach to fuzzy rule base design using the tabu search algorithm (TSA) for nonlinear system modeling. The TSA is used to evolve the structure and parameters of the fuzzy rule base. The use of the TSA, in conjunction with a systematic neighbourhood structure for the determination of fuzzy rule base parameters, leads to a significant improvement in the performance of the model. To demonstrate the effectiveness of the presented method, several numerical examples given in the literature are examined. The results obtained by means of the identified fuzzy rule bases are compared with those belonging to other modeling approaches in the literature. The simulation results indicate that the TSA-based method provides a very effective procedure for fuzzy rule base design in the modeling of nonlinear or complex systems. PMID:17945233
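A generic tabu search loop, the core of the TSA, can be sketched as follows; here it minimizes a toy function over continuous parameter vectors rather than the paper's fuzzy rule base encoding, and all names and settings are illustrative.

```python
import random

def tabu_search(objective, start, neighbors, n_iter=200, tabu_len=15, seed=0):
    """Generic tabu search: move to the best non-tabu neighbor each step,
    remembering recently visited solutions to help escape local minima."""
    rng = random.Random(seed)
    current = start
    best = start
    tabu = [start]                      # short-term memory of visited points
    for _ in range(n_iter):
        cand = [n for n in neighbors(current, rng) if n not in tabu]
        if not cand:
            continue
        current = min(cand, key=objective)   # best neighbor, even if worse
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if objective(current) < objective(best):
            best = current
    return best

# toy stand-in for a rule-base parameter vector: minimize a sum of squares
def sphere(x):
    return sum(v * v for v in x)

def neighbors(x, rng, step=0.5, k=10):
    # a simple neighbourhood: k random perturbations of each coordinate
    return [tuple(v + rng.uniform(-step, step) for v in x) for _ in range(k)]

best = tabu_search(sphere, (3.0, -2.0), neighbors)
```

A real rule-base application would encode membership-function parameters and rule consequents in the vector and use a model-error objective; the tabu mechanics are unchanged.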

  13. Selecting training and test images for optimized anomaly detection algorithms in hyperspectral imagery through robust parameter design

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2011-06-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques have been applied to some of these algorithms in an attempt to choose robust settings capable of operating consistently across a large variety of image scenes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research developed a frameworkfor optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher's score, ratio of target pixels and number of clusters. This paper describes a method for selecting hyperspectral image training and test subsets yielding consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. Several different mathematical models representing the value of a training and test set based on such measures as the D-optimal score and various distance norms are tested in a simulation experiment.

  14. Designing genetic algorithm for efficient calculation of value encoding in time-lapse gravity inversion

    NASA Astrophysics Data System (ADS)

    Wahyudi, Eko Januari

    2013-09-01

    As applications of soft computing techniques advance in the oil and gas industry, the Genetic Algorithm (GA) has also contributed to geophysical inverse problems, improving both results and computational efficiency. This paper presents progress on inverse modeling of time-lapse gravity data using value encoding with an alphabet formulation. The alphabet formulation is designed to characterize positive density changes (+Δρ) and negative density changes (-Δρ) with respect to a reference value (0 g/cc). The inversion uses discrete model parameters and is computed with a GA as the optimization algorithm. The main challenge in working with a GA is the long computation time, so the GA design steps in this paper are described through performance tests of the GA operators. The performance of several combinations of GA operators (selection, crossover, mutation, and replacement) is tested with a synthetic single-layer reservoir model. Analysis of a sufficient number of samples shows the combination SUS-MPCO-QSA/G-ND to be the most promising. A quantitative solution with a higher confidence level for characterizing sharp boundaries of density-change zones was obtained by averaging over a sufficient number of model samples.

  15. Multi-criteria optimal pole assignment robust controller design for uncertainty systems using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Sarjaš, Andrej; Chowdhury, Amor; Svečko, Rajko

    2016-09-01

    This paper presents the synthesis of an optimal robust controller design using the polynomial pole placement technique and a multi-criteria optimisation procedure via an evolutionary computation algorithm - differential evolution. The main idea of the design is to provide a reliable fixed-order robust controller structure and an efficient closed-loop performance with a preselected nominal characteristic polynomial. The multi-criteria objective functions have quasi-convex properties that significantly improve convergence and the regularity of the optimal/sub-optimal solution. The fundamental aim of the proposed design is to optimise those quasi-convex functions with fixed closed-loop characteristic polynomials, the properties of which are unrelated and hard to present within formal algebraic frameworks. The objective functions are derived from different closed-loop criteria, such as robustness with the H∞ metric, time performance indexes, controller structures, stability properties, etc. Finally, the design results from the example verify the efficiency of the controller design and also indicate broader possibilities for different optimisation criteria and control structures.
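The optimiser itself, classic DE/rand/1/bin differential evolution, is compact enough to sketch. The quadratic objective below is a stand-in for the paper's closed-loop criteria, and all names and settings are illustrative.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           n_gen=100, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference of two random members,
    apply binomial crossover, keep the trial only if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)          # guaranteed-crossover index
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the box constraints
                else:
                    v = pop[i][j]
                trial.append(v)
            fs = f(trial)
            if fs <= scores[i]:              # greedy selection
                pop[i], scores[i] = trial, fs
    i_best = min(range(pop_size), key=lambda i: scores[i])
    return pop[i_best], scores[i_best]

# toy quasi-convex objective standing in for a controller cost
best_x, best_f = differential_evolution(
    lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
    [(-5, 5), (-5, 5)])
```

In the paper's setting the decision vector would hold the fixed-order controller coefficients and `f` would aggregate the quasi-convex closed-loop criteria.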

  16. Performance of the Lidar Design and Data Algorithms for the GLAS Global Cloud and Aerosol Measurements

    NASA Technical Reports Server (NTRS)

    Spinhirne, James D.; Palm, Stephen P.; Hlavka, Dennis L.; Hart, William D.

    2007-01-01

    The Geoscience Laser Altimeter System (GLAS) launched in early 2003 is the first polar orbiting satellite lidar. The instrument design includes high performance observations of the distribution and optical scattering cross sections of atmospheric clouds and aerosol. The backscatter lidar operates at two wavelengths, 532 and 1064 nm. For the atmospheric cloud and aerosol measurements, the 532 nm channel was designed for ultra high efficiency with solid state photon counting detectors and etalon filtering. Data processing algorithms were developed to calibrate and normalize the signals and produce global scale data products of the height distribution of cloud and aerosol layers and their optical depths and particulate scattering cross sections up to the limit of optical attenuation. The paper will concentrate on the effectiveness and limitations of the lidar channel design and data product algorithms. Both atmospheric receiver channels meet and exceed their design goals. Geiger Mode Avalanche Photodiode modules are used for the 532 nm signal. The operational experience is that some signal artifacts and non-linearity require correction in data processing. As with all photon counting detectors, a pulse-pile-up calibration is an important aspect of the measurement. Additional signal corrections were found to be necessary relating to correction of a saturation signal-run-on effect and also for daytime data, a small range dependent variation in the responsivity. It was possible to correct for these signal errors in data processing and achieve the requirement to accurately profile aerosol and cloud cross sections down to 10^-7 m^-1 sr^-1. The analysis procedure employs a precise calibration against molecular scattering in the mid-stratosphere. The 1064 nm channel detection employs a high-speed analog APD for surface and atmospheric measurements where the detection sensitivity is limited by detector noise and is over an order of magnitude less than at 532 nm.
A unique feature of

  17. Advanced Free Flight Planner and Dispatcher's Workstation: Preliminary Design Specification

    NASA Technical Reports Server (NTRS)

    Wilson, J.; Wright, C.; Couluris, G. J.

    1997-01-01

    The National Aeronautics and Space Administration (NASA) has implemented the Advanced Air Transportation Technology (AATT) program to investigate future improvements to the national and international air traffic management systems. This research, as part of the AATT program, developed preliminary design requirements for an advanced Airline Operations Control (AOC) dispatcher's workstation, with emphasis on flight planning. This design will support the implementation of an experimental workstation in NASA laboratories that would emulate AOC dispatch operations. The work developed an airline flight plan data base and specified requirements for: a computer tool for generation and evaluation of free flight, user preferred trajectories (UPT); the kernel of an advanced flight planning system to be incorporated into the UPT-generation tool; and an AOC workstation to house the UPT-generation tool and to provide a real-time testing environment. A prototype for the advanced flight plan optimization kernel was developed and demonstrated. The flight planner uses dynamic programming to search a four-dimensional wind and temperature grid to identify the optimal route, altitude and speed for successive segments of a flight. An iterative process is employed in which a series of trajectories are successively refined until the UPT is identified. The flight planner is designed to function in the current operational environment as well as in free flight. The free flight environment would enable greater flexibility in UPT selection based on alleviation of current procedural constraints. The prototype also takes advantage of advanced computer processing capabilities to implement more powerful optimization routines than would be possible with older computer systems.
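The dynamic-programming search can be sketched in miniature: a forward pass over flight segments keeps the best cost of reaching each altitude, and backpointers then recover the optimal altitude sequence. The tiny grid and cost numbers below are illustrative, not NASA's planner, which also searches route and speed over 4-D wind and temperature data.

```python
def plan_route(costs):
    """Forward dynamic programming over flight segments.
    costs[s][i][j] = cost of flying segment s from altitude i to altitude j.
    Returns (total_cost, altitude_sequence)."""
    n_alts = len(costs[0])
    best = [0.0] * n_alts            # best cost to reach each altitude so far
    back = []                        # backpointers, one list per segment
    for seg in costs:
        new = [float("inf")] * n_alts
        ptr = [0] * n_alts
        for j in range(n_alts):
            for i in range(n_alts):
                c = best[i] + seg[i][j]
                if c < new[j]:
                    new[j], ptr[j] = c, i
        best = new
        back.append(ptr)
    # recover the optimal altitude sequence from the backpointers
    j = min(range(n_alts), key=lambda k: best[k])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return min(best), path

# 2 segments, 3 altitude levels; costs chosen so the optimum climbs then holds
costs = [
    [[4, 1, 9], [9, 9, 9], [9, 9, 9]],   # segment 1 (rows: start altitude)
    [[9, 9, 9], [9, 2, 9], [9, 9, 9]],   # segment 2
]
total, path = plan_route(costs)
```

The zero initial `best` vector leaves the starting altitude free; pinning it would just mean initializing all other entries to infinity.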

  18. Loop tuning with specification on gain and phase margins via modified second-order sliding mode control algorithm

    NASA Astrophysics Data System (ADS)

    Boiko, I. M.

    2012-01-01

    The modified second-order sliding mode algorithm is used for controller tuning. Namely, the modified suboptimal algorithm-based test (modified SOT) and non-parametric tuning rules for proportional-integral-derivative (PID) controllers are presented in this article. In the developed method of test and tuning, the idea of coordinated selection of the test parameters and the controller tuning parameters is introduced. The proposed approach allows for the formulation of simple non-parametric tuning rules for PID controllers that provide desired amplitude or phase margins exactly. In the modified SOT, the frequency of the self-excited oscillations can be generated equal to either the phase crossover frequency or the magnitude crossover frequency of the open-loop system frequency response (including a future PID controller) - depending on the tuning method choice. The first option will provide tuning with specification on gain margin, and the second option will ensure tuning with specification on phase margin. Tuning rules for a PID controller and simulation examples are provided.
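The idea of tuning to an exact margin can be illustrated with a plain frequency sweep rather than Boiko's modified SOT test: locate the phase crossover of a known loop transfer function, then scale the proportional gain to hit a specified gain margin. The third-order plant below is a stock textbook example, and all names are illustrative.

```python
import math

def plant(s):
    # example third-order plant: G(s) = 1 / (s + 1)^3
    return 1.0 / (s + 1.0) ** 3

def phase_crossover(G, w_lo=1e-3, w_hi=1e3, n=20000):
    """Locate the frequency where G(jw) crosses the negative real axis
    (imaginary part changes sign while the real part is negative)."""
    prev_w, prev_im = None, None
    for k in range(n + 1):
        w = w_lo * (w_hi / w_lo) ** (k / n)   # log-spaced frequency sweep
        g = G(1j * w)
        if prev_im is not None and prev_im < 0.0 <= g.imag and g.real < 0.0:
            return 0.5 * (prev_w + w)
        prev_w, prev_im = w, g.imag
    raise ValueError("no phase crossover found")

w180 = phase_crossover(plant)
# proportional gain that yields exactly the specified gain margin
gain_margin = 2.0      # i.e. 6 dB
Kp = 1.0 / (gain_margin * abs(plant(1j * w180)))
```

For this plant the crossover sits at w = sqrt(3) rad/s where |G| = 1/8, so a 6 dB margin corresponds to Kp = 4; a phase-margin specification would instead search the magnitude crossover of Kp*G(jw).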

  19. A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Chae, Han Gil

    Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. 
The Pareto dominance that is used in SPEA, however, is not enough to account for the

  20. GPQuest: A Spectral Library Matching Algorithm for Site-Specific Assignment of Tandem Mass Spectra to Intact N-glycopeptides.

    PubMed

    Toghi Eshghi, Shadi; Shah, Punit; Yang, Weiming; Li, Xingde; Zhang, Hui

    2015-01-01

    Glycoprotein changes occur not only in protein abundance but also in the occupancy of each glycosylation site by different glycoforms during biological or pathological processes. Recent advances in mass spectrometry instrumentation and techniques have facilitated analysis of intact glycopeptides in complex biological samples by allowing the users to generate spectra of intact glycopeptides with glycans attached to each specific glycosylation site. However, assigning these spectra, leading to identification of the glycopeptides, is challenging. Here, we report an algorithm, named GPQuest, for site-specific identification of intact glycopeptides using higher-energy collisional dissociation (HCD) fragmentation of complex samples. In this algorithm, a spectral library of glycosite-containing peptides in the sample was built by analyzing the isolated glycosite-containing peptides using HCD LC-MS/MS. Spectra of intact glycopeptides were selected by using glycan oxonium ions as signature ions for glycopeptide spectra. These oxonium-ion-containing spectra were then compared with the spectral library generated from glycosite-containing peptides, resulting in assignment of each intact glycopeptide MS/MS spectrum to a specific glycosite-containing peptide. The glycan occupying each glycosite was determined by matching the mass difference between the precursor ion of intact glycopeptide and the glycosite-containing peptide to a glycan database. Using GPQuest, we analyzed LC-MS/MS spectra of protein extracts from prostate tumor LNCaP cells. Without enrichment of glycopeptides from global tryptic peptides and at a false discovery rate of 1%, 1008 glycan-containing MS/MS spectra were assigned to 769 unique intact N-linked glycopeptides, representing 344 N-linked glycosites with 57 different N-glycans. Spectral library matching using GPQuest assigns the HCD LC-MS/MS generated spectra of intact glycopeptides in an automated and high-throughput manner. Additionally, spectral library
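A toy sketch of the two screening steps the abstract describes, flagging glycopeptide spectra by oxonium signature ions and then matching the precursor-minus-peptide mass difference against a glycan database, might look as follows. The masses, tolerances, and function names are illustrative, not GPQuest's actual implementation.

```python
# signature oxonium ions (m/z) commonly used to flag glycopeptide spectra
OXONIUM_MZ = [204.087, 366.140, 138.055]   # HexNAc, HexHexNAc, HexNAc fragment

def is_glycopeptide_spectrum(peaks, tol=0.02, min_hits=2):
    """Flag a spectrum whose fragment peak list contains enough oxonium ions."""
    hits = sum(any(abs(p - mz) <= tol for p in peaks) for mz in OXONIUM_MZ)
    return hits >= min_hits

def assign_glycan(precursor_mass, peptide_mass, glycan_db, tol=0.05):
    """Match the precursor-minus-peptide mass difference to a glycan database."""
    delta = precursor_mass - peptide_mass
    for name, mass in glycan_db.items():
        if abs(delta - mass) <= tol:
            return name
    return None

# illustrative glycan masses built from residue masses (HexNAc 203.079, Hex 162.053)
glycans = {"HexNAc2Hex3": 892.317, "HexNAc2Hex5": 1216.423}
peaks = [204.088, 366.139, 500.2]
spectrum_is_glyco = is_glycopeptide_spectrum(peaks)
glycan = assign_glycan(precursor_mass=2000.0, peptide_mass=1107.683,
                       glycan_db=glycans)
```

The real workflow additionally matches the oxonium-containing spectrum against the peptide spectral library before the mass-difference step; here that assignment is assumed already done.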

  1. Vector fuzzy control iterative algorithm for the design of sub-wavelength diffractive optical elements for beam shaping

    NASA Astrophysics Data System (ADS)

    Lin, Yong; Hu, Jiasheng; Wu, Kenan

    2009-08-01

    The vector fuzzy control iterative algorithm (VFCIA) is proposed for the design of phase-only sub-wavelength diffractive optical elements (SWDOEs) for beam shaping. The vector diffraction model put forward by Mansuripur is applied to relate the field distributions between the SWDOE plane and the output plane. Fuzzy control theory is used to decide the constraint method for each iterative step of the algorithm. We have designed a SWDOE that transforms a circular flat-top beam into a square irradiance pattern. Computer design results show that the SWDOE designed by the VFCIA can produce better results than the vector iterative algorithm (VIA). In addition, the finite-difference time-domain (FDTD) method, a rigorous electromagnetic analysis technique, is used to analyze the designed SWDOE, further confirming the validity of the proposed method.

  2. An ultrasound-guided fluorescence tomography system: design and specification

    NASA Astrophysics Data System (ADS)

    D'Souza, Alisha V.; Flynn, Brendan P.; Kanick, Stephen C.; Torosean, Sason; Davis, Scott C.; Maytin, Edward V.; Hasan, Tayyaba; Pogue, Brian W.

    2013-03-01

    An ultrasound-guided fluorescence molecular tomography system is under development for in vivo quantification of Protoporphyrin IX (PpIX) during Aminolevulinic Acid - Photodynamic Therapy (ALA-PDT) of Basal Cell Carcinoma. The system is designed to combine fiber-based spectral sampling of PpIX fluorescence emission with co-registered ultrasound images to quantify local fluorophore concentration. A single white light source is used to provide an estimate of the bulk optical properties of tissue. Optical data is obtained by sequential illumination of a 633 nm laser source at 4 linear locations with parallel detection at 5 locations interspersed between the sources. Tissue regions from segmented ultrasound images, optical boundary data, white light-informed optical properties and diffusion theory are used to estimate the fluorophore concentration in these regions. Our system and methods allow interrogation of both superficial and deep tissue locations up to PpIX concentrations of 0.025 µg/ml.

  3. Heat pipe design handbook, part 2. [digital computer code specifications

    NASA Technical Reports Server (NTRS)

    Skrabek, E. A.

    1972-01-01

    The utilization of a digital computer code for heat pipe analysis and design (HPAD) is described which calculates the steady state hydrodynamic heat transport capability of a heat pipe with a particular wick configuration, the working fluid being a function of wick cross-sectional area. Heat load, orientation, operating temperature, and heat pipe geometry are specified. Both one 'g' and zero 'g' environments are considered, and, at the user's option, the code will also perform a weight analysis and will calculate heat pipe temperature drops. The central porous slab, circumferential porous wick, arterial wick, annular wick, and axial rectangular grooves are the wick configurations which HPAD has the capability of analyzing. For Vol. 1, see N74-22569.

  4. Optimization of Spherical Roller Bearing Design Using Artificial Bee Colony Algorithm and Grid Search Method

    NASA Astrophysics Data System (ADS)

    Tiwari, Rajiv; Waghole, Vikas

    2015-07-01

    Bearing standards impose restrictions on the internal geometry of spherical roller bearings. Geometrical and strength constraint conditions have been formulated for the optimization of bearing design. Long fatigue life is one of the most important criteria in the optimum design of a bearing. The life is directly proportional to the dynamic capacity; hence, the objective function has been chosen as the maximization of dynamic capacity. The effects of speed and static loads acting on the bearing are also taken into account. Design variables for the bearing include five geometrical parameters: the roller diameter, the roller length, the bearing pitch diameter, the number of rollers, and the contact angle. A few design constraint parameters are also included in the optimization, the bounds of which are obtained by initial runs of the optimization. The optimization program is run for different values of these design constraint parameters, and a range of the parameters is obtained for which the objective function has a higher value. The artificial bee colony algorithm (ABCA) has been used to solve the constrained optimization problem, and the optimum design is compared with the one obtained from the grid search method (GSM), both operating independently. The ABCA and the GSM have finally been combined to reach the global optimum point. A constraint violation study has also been carried out to give priority to the constraints having the greatest possibility of violation. Optimized bearing designs show better performance parameters than those specified in bearing catalogs. A sensitivity analysis of the bearing parameters has also been carried out to examine the effect of manufacturing tolerance on the objective function.
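A minimal sketch of the artificial bee colony loop (employed, onlooker, and scout phases) on a toy objective, standing in for the constrained bearing model; parameter names and settings are illustrative.

```python
import random

def abc_minimize(f, bounds, n_bees=20, limit=20, n_iter=100, seed=0):
    """Canonical artificial bee colony: employed bees perturb their food
    sources, onlookers revisit sources in proportion to fitness, and scouts
    replace sources that have gone stale."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_source():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    sources = [rand_source() for _ in range(n_bees)]
    scores = [f(x) for x in sources]
    trials = [0] * n_bees               # stagnation counters per source

    def try_neighbor(i):
        k = rng.randrange(n_bees - 1)
        k = k if k < i else k + 1       # a partner source, k != i
        j = rng.randrange(dim)
        x = sources[i][:]
        x[j] += rng.uniform(-1, 1) * (sources[i][j] - sources[k][j])
        lo, hi = bounds[j]
        x[j] = min(max(x[j], lo), hi)
        fx = f(x)
        if fx < scores[i]:
            sources[i], scores[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_bees):                        # employed phase
            try_neighbor(i)
        fits = [1.0 / (1.0 + s) for s in scores]       # assumes f >= 0 here
        total = sum(fits)
        for _ in range(n_bees):                        # onlooker phase
            r, acc, i = rng.uniform(0, total), 0.0, 0
            for i, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    break
            try_neighbor(i)
        for i in range(n_bees):                        # scout phase
            if trials[i] > limit:
                sources[i] = rand_source()
                scores[i], trials[i] = f(sources[i]), 0

    i_best = min(range(n_bees), key=lambda i: scores[i])
    return sources[i_best], scores[i_best]

best_x, best_f = abc_minimize(lambda x: x[0] ** 2 + x[1] ** 2,
                              [(-5, 5), (-5, 5)])
```

The bearing problem would add the geometrical and strength constraints, typically via a penalty term in `f` or Deb's feasibility rules in the greedy comparison.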

  5. The TOMS V9 Algorithm for OMPS Nadir Mapper Total Ozone: An Enhanced Design That Ensures Data Continuity

    NASA Astrophysics Data System (ADS)

    Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.

    2015-12-01

    The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and re-processing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 mainly concern additional information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced by improvements both in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and also limitations in the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is to also apply the algorithm to these discrete band spectrometers which date back to 1978. To achieve continuity for all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite the intended design to use limited wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content; for example, SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and will mention potential improvements which aim to take advantage of a synergy with the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.

  6. Structure Design of the 3-D Braided Composite Based on a Hybrid Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ke

    Three-dimensional braided composites offer good design flexibility. Given the wide engineering application of hollow-rectangular-section three-dimensional braided composites, this paper introduces an optimization design for composites braided by the 4-step method. First, the stiffness and damping characteristics of the composite are analyzed. Then, mathematical models for the structural design of the three-dimensional braided composite are established. The objective functions are based on the specific damping capacity and stiffness of the composite, and the design variables are the braiding parameters and the sectional geometry of the composite. The optimization problem, subject to the stated constraints, is solved using ant colony optimization (ACO). Numerical examples show that improved damping and stiffness characteristics can be obtained. The method proposed here is useful for the structural design of this kind of member and its engineering application.

  7. The Table of Specifications: A Tool for Instructional Design and Development.

    ERIC Educational Resources Information Center

    Dills, Charles R.

    1998-01-01

    Tables of specifications provide graphic representations of objectives and segments of instruction or test questions. Examines tables of specifications (use, significance of empty cells, traceability) and their application in instructional design, highlighting constructivism and microworlds, structural communication, computer mediated…

  8. GADIS: Algorithm for designing sequences to achieve target secondary structure profiles of intrinsically disordered proteins.

    PubMed

    Harmon, Tyler S; Crabtree, Michael D; Shammas, Sarah L; Posey, Ammon E; Clarke, Jane; Pappu, Rohit V

    2016-09-01

    Many intrinsically disordered proteins (IDPs) participate in coupled folding and binding reactions and form alpha helical structures in their bound complexes. Alanine, glycine, or proline scanning mutagenesis approaches are often used to dissect the contributions of intrinsic helicities to coupled folding and binding. These experiments can yield confounding results because the mutagenesis strategy changes the amino acid compositions of IDPs. Therefore, an important next step in mutagenesis-based approaches to mechanistic studies of coupled folding and binding is the design of sequences that satisfy three major constraints. These are (i) achieving a target intrinsic alpha helicity profile; (ii) fixing the positions of residues corresponding to the binding interface; and (iii) maintaining the native amino acid composition. Here, we report the development of a Genetic Algorithm for Design of Intrinsic secondary Structure (GADIS) for designing sequences that satisfy the specified constraints. We describe the algorithm and present results to demonstrate the applicability of GADIS by designing sequence variants of the intrinsically disordered PUMA system that undergoes coupled folding and binding to Mcl-1. Our sequence designs span a range of intrinsic helicity profiles. The predicted variations in sequence-encoded mean helicities are tested against experimental measurements. PMID:27503953
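Two of the three design constraints, fixed interface positions (ii) and fixed amino acid composition (iii), can be encoded directly in a genetic algorithm's move set, since swap mutations over non-fixed positions preserve composition by construction. The sketch below is a toy illustration with a stand-in scoring function; GADIS itself scores intrinsic helicity profiles (constraint (i)) with a physics-based predictor, and all names here are illustrative.

```python
import random

def design_sequences(seq, fixed, target_score, n_pop=30, n_gen=200, seed=0):
    """Toy GA over protein sequences: swap mutations preserve amino acid
    composition, fixed interface positions never move, and fitness is a
    stand-in scorer driven toward a target value."""
    rng = random.Random(seed)
    free = [i for i in range(len(seq)) if i not in fixed]

    def score(s):
        # stand-in for an intrinsic-helicity predictor: fraction of 'A'
        # residues in the first half of the sequence (purely illustrative)
        half = s[: len(s) // 2]
        return half.count("A") / len(half)

    def mutate(s):
        s = list(s)
        i, j = rng.sample(free, 2)     # swapping keeps composition fixed
        s[i], s[j] = s[j], s[i]
        return "".join(s)

    pop = [seq] + [mutate(seq) for _ in range(n_pop - 1)]
    for _ in range(n_gen):
        pop.sort(key=lambda s: abs(score(s) - target_score))
        pop = pop[: n_pop // 2]                     # truncation selection
        pop += [mutate(rng.choice(pop)) for _ in range(n_pop - len(pop))]
    best = min(pop, key=lambda s: abs(score(s) - target_score))
    return best, score(best)

start = "MAAEEGKLVAAGDEAKKLAA"      # hypothetical 20-residue sequence
best, s = design_sequences(start, fixed={0, 5, 10}, target_score=0.8)
```

Because every move is a swap of free positions, the returned sequence is guaranteed to be a permutation of the input with the interface residues untouched, whatever the scorer does.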

  9. Genetic Algorithm for the Design of Electro-Mechanical Sigma Delta Modulator MEMS Sensors

    PubMed Central

    Wilcock, Reuben; Kraft, Michael

    2011-01-01

    This paper describes a novel design methodology using non-linear models for complex closed loop electro-mechanical sigma-delta modulators (EMΣΔM) that is based on genetic algorithms and statistical variation analysis. The proposed methodology is capable of quickly and efficiently designing high performance, high order, closed loop, near-optimal systems that are robust to sensor fabrication tolerances and electronic component variation. The use of full non-linear system models allows significant higher order non-ideal effects to be taken into account, improving accuracy and confidence in the results. To demonstrate the effectiveness of the approach, two design examples are presented including a 5th order low-pass EMΣΔM for a MEMS accelerometer, and a 6th order band-pass EMΣΔM for the sense mode of a MEMS gyroscope. Each example was designed using the system in less than one day, with very little manual intervention. The strength of the approach is verified by SNR performances of 109.2 dB and 92.4 dB for the low-pass and band-pass system respectively, coupled with excellent immunities to fabrication tolerances and parameter mismatch. PMID:22163691

  10. Optimal design of multichannel fiber Bragg grating filters using Pareto multi-objective optimization algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Liu, Tundong; Jiang, Hao

    2016-01-01

    A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account, which are minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as Pareto front. Subsequently, the best compromise solution is determined by the TOPSIS method from the Pareto front according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation and the performance of dispersion spectra of the designed filter can be optimized simultaneously.
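The TOPSIS step, picking the best compromise solution from a Pareto front by closeness to the ideal point, is compact enough to sketch directly. The toy front and equal weights below are illustrative; the paper's fronts come from NSGA-II over the two FBG design objectives.

```python
def topsis(front, weights=None):
    """Rank candidates by relative closeness to the ideal point.
    `front` holds cost-type objectives (lower is better) per candidate."""
    n, m = len(front), len(front[0])
    weights = weights or [1.0 / m] * m
    # vector-normalize each objective column, then apply the weights
    norms = [sum(front[i][j] ** 2 for i in range(n)) ** 0.5 for j in range(m)]
    V = [[weights[j] * front[i][j] / norms[j] for j in range(m)]
         for i in range(n)]
    ideal = [min(V[i][j] for i in range(n)) for j in range(m)]  # best per cost
    worst = [max(V[i][j] for i in range(n)) for j in range(m)]
    scores = []
    for row in V:
        d_best = sum((row[j] - ideal[j]) ** 2 for j in range(m)) ** 0.5
        d_worst = sum((row[j] - worst[j]) ** 2 for j in range(m)) ** 0.5
        scores.append(d_worst / (d_best + d_worst))
    return max(range(n), key=lambda i: scores[i]), scores

# toy Pareto front: (max index modulation, mean dispersion error)
front = [(1.0, 9.0), (3.0, 3.0), (9.0, 1.0)]
best_idx, scores = topsis(front)
```

On this symmetric front the two extreme designs score 0.5 each while the balanced middle design scores 0.75, so TOPSIS selects the compromise, which is exactly its role after NSGA-II.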

  11. Multidisciplinary design optimization of vehicle instrument panel based on multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Wu, Guangqiang

    2013-03-01

    Multidisciplinary design optimization (MDO) has gradually been adopted to balance lightweight, noise, vibration and harshness (NVH), and safety performance of the instrument panel (IP) structure in automotive development. Nevertheless, the plastic constitutive relation of polypropylene (PP) under different strain rates has not been taken into consideration in current reliability-based and collaborative IP MDO design. In this paper, based on tensile tests under different strain rates, the constitutive relation of the polypropylene material is studied. Impact simulation tests for the head and knee bolster are carried out to meet the regulations of FMVSS 201 and FMVSS 208, respectively. NVH analysis is performed to obtain mainly the natural frequencies and corresponding mode shapes, while crashworthiness analysis is employed to examine the crash behavior of the IP structure. With consideration of lightweight, NVH, and head and knee bolster impact performance, design of experiments (DOE), response surface modeling (RSM), and collaborative optimization (CO) are applied to realize the deterministic and reliability-based optimizations, respectively. Furthermore, based on a multi-objective genetic algorithm (MOGA), the optimal Pareto sets are computed to solve the multi-objective optimization (MOO) problem. The proposed research ensures the smoothness of the Pareto set, enhances the ability of engineers to make a comprehensive decision about multiple objectives and choose the optimal design, and improves the quality and efficiency of MDO.

  12. Liquid Engine Design: Effect of Chamber Dimensions on Specific Impulse

    NASA Technical Reports Server (NTRS)

    Hoggard, Lindsay; Leahy, Joe

    2009-01-01

    Which assumption of combustion chemistry - frozen or equilibrium - should be used in the prediction of liquid rocket engine performance calculations? Can a correlation be developed for this? A literature search using the LaSSe tool, an online repository of old rocket data and reports, was completed. Test results of NTO/Aerozine-50 and LOX/LH2 subscale and full-scale injector and combustion chamber tests were found and studied for this task. The NASA code Chemical Equilibrium with Applications (CEA) was used to predict engine performance using both chemistry assumptions, defined here as follows. Frozen: composition remains frozen during expansion through the nozzle. Equilibrium: instantaneous chemical equilibrium during nozzle expansion. Chamber parameters were varied to understand which dimensions drive chamber C* and Isp. Contraction Ratio is the ratio of the chamber area to the nozzle throat area. L is the length of the chamber. Characteristic chamber length, L*, is the length that the chamber would be if it were a straight tube and had no converging nozzle. Goal: Develop a qualitative and quantitative correlation for performance parameters - Specific Impulse (Isp) and Characteristic Velocity (C*) - as a function of one or more chamber dimensions - Contraction Ratio (CR), Chamber Length (L) and/or Characteristic Chamber Length (L*). Determine if chamber dimensions can be correlated to frozen or equilibrium chemistry.
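The three chamber dimensions are linked geometrically. Using the conventional definitions CR = A_chamber/A_throat and L* = V_chamber/A_throat, and approximating the chamber as a straight cylinder (so the convergent-section volume is neglected), L* ≈ CR × L. A small sketch with illustrative values only:

```python
import math

def chamber_geometry(d_chamber, d_throat, length):
    """Contraction ratio CR = A_chamber / A_throat and characteristic
    length L* = V_chamber / A_throat for a straight cylindrical chamber
    (convergent-section volume neglected)."""
    a_c = math.pi * d_chamber ** 2 / 4.0   # chamber cross-section area
    a_t = math.pi * d_throat ** 2 / 4.0    # throat area
    cr = a_c / a_t
    l_star = a_c * length / a_t            # equals CR * L for a cylinder
    return cr, l_star

cr, l_star = chamber_geometry(d_chamber=0.2, d_throat=0.1, length=0.3)
# cr comes out near 4.0 and l_star near 1.2 m for these illustrative dimensions
```

This coupling is why the study must vary CR and L independently before attributing performance trends to any single dimension.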

  13. Psychosocial Risks Generated By Assets Specific Design Software

    NASA Astrophysics Data System (ADS)

    Remus, Furtună; Angela, Domnariu; Petru, Lazăr

    2015-07-01

    The human activity concerning an occupation results from the interaction between psycho-biological, socio-cultural and organizational-occupational factors. Technological development, automation and computerization, which are to be found in all branches of activity, the speed at which things develop, as well as their growing complexity, require fewer and fewer physical aptitudes and more cognitive qualifications. The person included in the work process is bound in most cases to adapt to the organizational-occupational situations that are specific to the demands of the job. The role of the programmer is essential in the process of executing commissioned software, and truly brilliant ideas can only come from well-rested minds concentrated on their tasks. The actual requirements of these jobs, besides a high number of benefits and opportunities, also create a series of psycho-social risks, which can increase the level of stress during work activity, especially for those who work under pressure.

  14. Specification and preliminary design of an array processor

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Graham, M. L.

    1975-01-01

    The design of a computer suited to the class of problems typified by the general circulation of the atmosphere was investigated. A fundamental goal was that the resulting machine should have roughly 100 times the computing capability of an IBM 360/95 computer. A second requirement was that the machine should be programmable in a higher level language similar to FORTRAN. Moreover, the new machine would have to be compatible with the IBM 360/95 since the IBM machine would continue to be used for pre- and post-processing. A third constraint was that the cost of the new machine was to be significantly less than that of other extant machines of similar computing capability, such as the ILLIAC IV and CDC STAR. A final constraint was that it should be feasible to fabricate a complete system and put it in operation by early 1978. Although these objectives were generally met, considerable work remains to be done on the routing system.

  15. Theory and algorithms for a quasi-optical launcher design method for high-frequency gyrotrons

    NASA Astrophysics Data System (ADS)

    Ungku Farid, Ungku Fazri

    Gyrotrons are vacuum tubes that can generate high amounts of coherent high-frequency microwave radiation used for plasma heating, breakdown and current drive, and other applications. The gyrotron output power is not directly usable, and must be converted to either a free-space circular TEM00 Gaussian beam or a HE11 corrugated waveguide mode by employing mode converters. Quasi-optical mode converters (QOMCs) achieve this by utilizing a launcher (a type of waveguide antenna) and a mirror system. Adding perturbations to smooth-wall launchers can produce a better Gaussian shaped radiation pattern with smaller side lobes and less diffraction, and this improvement leads to higher power efficiency in the QOMC. The oversize factor (OF) is defined as the ratio of the operating to cutoff frequency of the launcher, and the higher this value is, the more difficult it is to obtain good launcher designs. This thesis presents a new method for the design of any perturbed-wall TE0n launcher that is not too highly oversized, and it is an improvement over previous launcher design methods that do not work well for highly oversized launchers. This new launcher design method is a fusion of three different methods: the Iterative Stratton-Chu algorithm (used for fast and accurate waveguide field propagations), the Katsenelenbaum-Semenov phase-correcting optimization algorithm, and Geometrical Optics. Three different TE02 launchers were designed using this new method: 1) a highly oversized (2.49 OF) 60 GHz launcher as proof-of-method, 2) a highly oversized (2.66 OF) 28 GHz launcher for possible use in the quasihelically symmetric stellarator (HSX) transmission line at the University of Wisconsin -- Madison, and 3) a compact internal 94 GHz 1.54 OF launcher for use in a compact gyrotron. Good to excellent results were achieved, and all launcher designs were independently verified with Surf3d, a method-of-moments based software. Additionally, the corresponding mirror system for

  16. Fuzzy logic control algorithms for MagneShock semiactive vehicle shock absorbers: design and experimental evaluations

    NASA Astrophysics Data System (ADS)

    Craft, Michael J.; Buckner, Gregory D.; Anderson, Richard D.

    2003-07-01

    Automotive ride quality and handling performance remain challenging design tradeoffs for modern, passive automobile suspension systems. Despite extensive published research outlining the benefits of active vehicle suspensions in addressing this tradeoff, the cost and complexity of these systems frequently prohibit commercial adoption. Semi-active suspensions can provide performance benefits over passive suspensions without the cost and complexity associated with fully active systems. This paper outlines the development and experimental evaluation of a fuzzy logic control algorithm for a commercial semi-active suspension component, Carrera's MagneShock™ shock absorber. The MagneShock utilizes an electromagnet to change the viscosity of magnetorheological (MR) fluid, which changes the damping characteristics of the shock. Damping for each shock is controlled by manipulating the coil current using real-time algorithms. The performance capabilities of fuzzy logic control (FLC) algorithms are demonstrated through experimental evaluations on a passenger vehicle. Results show reductions of 25% or more in sprung mass absorbed power (U.S. Army 6 Watt Absorbed Power Criterion) as compared to typical passive shock absorbers over urban terrains in both simulation and experimentation. Average sprung-mass RMS accelerations were also reduced by as much as 9%, but usually with an increase in total suspension travel over the passive systems. Additionally, a negligible decrease in RMS tire normal force was documented through computer simulations. Although the FLC absorbed power was comparable to that of the fixed-current MagneShock, the FLC reduced average RMS sprung-mass accelerations relative to the fixed-current MagneShock by 2-9%. Possible means for improving this system include reducing the suspension spring stiffness and increasing the dynamic damping range of the MagneShock.

  17. Overall plant design specification Modular High Temperature Gas-cooled Reactor. Revision 9

    SciTech Connect

    1990-05-01

    Revision 9 of the "Overall Plant Design Specification Modular High Temperature Gas-Cooled Reactor," DOE-HTGR-86004 (OPDS), has been completed and is hereby distributed for use by the HTGR Program team members. This document, Revision 9 of the "Overall Plant Design Specification" (OPDS), reflects those changes in the MHTGR design requirements and configuration resulting from approved Design Change Proposals DCP BNI-003 and DCP BNI-004, involving the Nuclear Island Cooling and Spent Fuel Cooling Systems, respectively.

  18. Brief Report: exploratory analysis of the ADOS revised algorithm: specificity and predictive value with Hispanic children referred for autism spectrum disorders.

    PubMed

    Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia

    2008-07-01

    This study compared Autism diagnostic observation schedule (ADOS) algorithm scores of a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module 1. New algorithm scores on modules 2 and 3 remained consistent with the original algorithm scores. The Mann-Whitney U was applied to compare revised algorithm and clinical levels of social impairment to determine if significant differences were evident. Results of Mann-Whitney U analyses were inconsistent and demonstrated less specificity for children with milder levels of social impairment. The revised algorithm demonstrated accuracy for the more severe autistic group. PMID:18026872

  19. Byte structure variable length coding (BS-VLC): a new specific algorithm applied in the compression of trajectories generated by molecular dynamics

    PubMed

    Melo; Puga; Gentil; Brito; Alves; Ramos

    2000-05-01

    Molecular dynamics is a well-known technique widely used in the study of biomolecular systems. The trajectory files produced by molecular dynamics simulations are extensive, and the classical lossless algorithms give poor efficiencies in their compression. In this work, a new specific algorithm, named byte structure variable length coding (BS-VLC), is introduced. Trajectory files, obtained by molecular dynamics applied to trypsin and a trypsin:pancreatic trypsin inhibitor complex, were compressed using four classical lossless algorithms (Huffman, adaptive Huffman, LZW, and LZ77) as well as the BS-VLC algorithm. The results obtained show that BS-VLC nearly triples the compression efficiency of the best classical lossless algorithm, preserving a near lossless behavior. Compression efficiencies close to 50% can be obtained with a high degree of precision, and the maximum efficiency possible within this algorithm (75%) can be achieved with good precision. PMID:10850759
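The published BS-VLC scheme is not reproduced here, but the underlying idea of byte-aligned variable-length coding of trajectory data can be sketched: inter-frame coordinate deltas that quantize into one signed byte cost 1 byte, larger ones escape to a full 4-byte float, and efficiency is the percentage size reduction versus a float32 baseline. All names, thresholds and the scale factor below are illustrative assumptions:

```python
import struct

def encode_deltas(values, scale=1000):
    """Byte-aligned variable-length coding sketch: an inter-frame delta
    that quantizes into a single signed byte costs 1 byte; anything larger
    costs an escape byte (0x80) plus the full 4-byte float."""
    out, prev = bytearray(), 0.0
    for v in values:
        q = round((v - prev) * scale)
        if -127 <= q <= 127:
            out.append(q & 0xFF)              # 1-byte quantized delta
        else:
            out.append(0x80)                  # escape marker
            out += struct.pack('<f', v)       # exact 4-byte float follows
        prev = v
    return bytes(out)

coords = [1.000, 1.001, 1.003, 5.000, 5.002]
enc = encode_deltas(coords)
raw_size = len(coords) * 4                    # float32 baseline
efficiency = 100.0 * (1 - len(enc) / raw_size)
```

Quantizing the small deltas makes the coding near lossless rather than lossless, which mirrors the trade-off reported in the abstract.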

  20. Design and evaluation of basic standard encryption algorithm modules using nanosized complementary metal oxide semiconductor molecular circuits

    NASA Astrophysics Data System (ADS)

    Masoumi, Massoud; Raissi, Farshid; Ahmadian, Mahmoud; Keshavarzi, Parviz

    2006-01-01

    We are proposing that the recently proposed semiconductor-nanowire-molecular architecture (CMOL) is an optimum platform to realize encryption algorithms. The basic modules for the advanced encryption standard algorithm (Rijndael) have been designed using CMOL architecture. The performance of this design has been evaluated with respect to chip area and speed. It is observed that CMOL provides considerable improvement over implementation with regular CMOS architecture even with a 20% defect rate. Pseudo-optimum gate placement and routing are provided for Rijndael building blocks and the possibility of designing high speed, attack tolerant and long key encryptions are discussed.

  1. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic-algorithm-based and gradient-based topology optimization are presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  2. Software design specification. Part 2: Orbital Flight Test (OFT) detailed design specification. Volume 3: Applications. Book 2: System management

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The functions performed by the systems management (SM) application software are described along with the design employed to accomplish these functions. The operational sequences (OPS) control segments and the cyclic processes they control are defined. The SM specialist function control (SPEC) segments and the display controlled 'on-demand' processes that are invoked by either an OPS or SPEC control segment as a direct result of an item entry to a display are included. Each processing element in the SM application is described including an input/output table and a structured control flow diagram. The flow through the module and other information pertinent to that process and its interfaces to other processes are included.

  3. Earth Observatory Satellite system definition study. Report no. 5: System design and specifications. Part 1: Observatory system element specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The performance, design, and quality assurance requirements for the Earth Observatory Satellite (EOS) Observatory and Ground System program elements required to perform the Land Resources Management (LRM) A-type mission are presented. The requirements for the Observatory element with the exception of the instruments specifications are contained in the first part.

  4. Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2013-03-01

    Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures, by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state of the art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine [7] features were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison

  5. Cold header machine process monitoring using a genetic algorithm designed neural network approach

    NASA Astrophysics Data System (ADS)

    dos Reis, Henrique L. M.; Voegele, Aaron C.; Cook, David B.

    1999-12-01

    In cold heading manufacturing processes, complete or partial fracture of the punch pin leads to production of out-of-tolerance parts. A process monitoring system has been developed to assure that out-of-tolerance parts do not contaminate the batch of acceptable parts. A four-channel data acquisition system was assembled to collect and store the acoustic signal generated during the manufacturing process. A genetic algorithm was designed to select the smallest subset of waveform features necessary to develop a robust artificial neural network that could differentiate among the various cold header machine conditions, including complete or partial failure of the punch pin. The developed monitoring system is able to terminate production within seconds of punch pin failure using only four waveform features.
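A GA for minimal feature-subset selection of the kind described can be sketched as follows. This is illustrative only: the paper's waveform features and neural-network training are replaced by a caller-supplied fitness function that rewards discrimination accuracy and penalizes every feature used, so the smallest adequate subset wins:

```python
import random

def ga_feature_subset(fitness, n_features, pop_size=20, gens=40, seed=0):
    """Evolve boolean feature masks toward the smallest subset that the
    caller's fitness function judges adequate."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:pop_size // 2]                      # keep the elite half
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # parents from the elite
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]                  # one-point crossover
            child[rng.randrange(n_features)] ^= True   # flip one gene
            pop.append(child)
    return max(pop, key=fitness)

# Toy stand-in for "network accuracy": only features 0 and 3 matter.
toy_fitness = lambda m: 10 * (m[0] and m[3]) - sum(m)
best_mask = ga_feature_subset(toy_fitness, n_features=8)
```

In the real system the fitness evaluation would train and score the neural network on the selected acoustic features, which is far more expensive but structurally identical.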

  6. Optimum design of vortex generator elements using Kriging surrogate modelling and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Neelakantan, Rithwik; Balu, Raman; Saji, Abhinav

    Vortex Generators (VGs) are small angled plates located in a spanwise fashion aft of the leading edge of an aircraft wing. They control airflow over the upper surface of the wing by creating vortices which energise the boundary layer. The parameters considered for the optimisation study of the VGs are the height, orientation angle and location along the chord in a low subsonic flow over a NACA0012 airfoil. The objective function to be maximised is the L/D ratio of the airfoil. The design data are generated using the commercially available ANSYS FLUENT software and are modelled using a Kriging based interpolator. This surrogate model is used along with a Genetic Algorithm software to arrive at the optimum shape of the VGs. The results of this study will be confirmed with actual wind tunnel tests on scaled models.

  7. Design of Pipeline Multiplier Based on Modified Booth's Algorithm and Wallace Tree

    NASA Astrophysics Data System (ADS)

    Yao, Aihong; Li, Ling; Sun, Mengzhe

    A design of a 32×32-bit pipelined multiplier is presented in this paper. The proposed multiplier is based on the modified Booth algorithm and a Wallace tree structure. In order to improve the throughput rate of the multiplier, pipeline architecture is introduced into the Wallace tree. A Carry Select Adder is deployed to reduce the propagation delay of the carry signal in the final-level 64-bit adder. The multiplier is fully implemented in Verilog HDL and synthesized successfully with Quartus II. The experimental results show that the resource consumption and power consumption are reduced to 2560 LEs and 120 mW, and the operating frequency is improved from 136.21 MHz to 165.07 MHz.
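The radix-4 modified Booth recoding underlying such multipliers can be modelled in software. The sketch below (illustrative names; the paper's design is Verilog hardware, not Python) recodes the multiplier into bits/2 digits in {-2, -1, 0, 1, 2}, halving the number of partial products the Wallace tree must sum:

```python
def booth_radix4_digits(x, bits=32):
    """Recode a bits-wide two's-complement multiplier into bits/2
    radix-4 Booth digits, each in {-2, -1, 0, 1, 2}."""
    u = (x & ((1 << bits) - 1)) << 1          # bit pattern with implicit b(-1) = 0
    digits = []
    for i in range(bits // 2):
        g = (u >> (2 * i)) & 0b111            # overlapping triple b(2i+1) b(2i) b(2i-1)
        b2, b1, b0 = (g >> 2) & 1, (g >> 1) & 1, g & 1
        digits.append(-2 * b2 + b1 + b0)      # digit = -2*b(2i+1) + b(2i) + b(2i-1)
    return digits

def booth_multiply(a, b, bits=32):
    """Sum the bits/2 partial products a * digit * 4**i, which is the
    job the pipelined Wallace tree performs in hardware."""
    return sum(d * a * 4 ** i for i, d in enumerate(booth_radix4_digits(b, bits)))
```

For a 32-bit multiplier this yields 16 partial products instead of 32, which is exactly why Booth recoding pairs well with a Wallace-tree reduction stage.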

  8. Designing Integrated Fuzzy Guidance Law for Aerodynamic Homing Missiles Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Omar, Hanafy M.

    The fuzzy logic controller (FLC) is well-known for robustness to parameter variations and the ability to reject noise. However, its design requires the definition of many parameters. This work proposes a systematic and simple procedure to develop an integrated fuzzy-based guidance law which consists of three FLCs, each activated in a different region of the interception. Another fuzzy-based switching system is introduced to allow smooth transition between these controllers. The parameters of all the fuzzy controllers, which include the distribution of the membership functions and the rules, are obtained simply by observing the function of each controller. Furthermore, these parameters are tuned by genetic algorithms, solving an optimization problem to minimize the interception time, missile acceleration commands, and miss distance. The simulation results show that the proposed procedure can generate a guidance law with satisfactory performance.

  9. Designing Daily Patrol Routes for Policing Based on ANT Colony Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, H.; Cheng, T.; Wise, S.

    2015-07-01

    In this paper, we address the problem of planning police patrol routes to regularly cover street segments of high crime density (hotspots) with limited police forces. A good patrolling strategy is required to minimise the average time lag between two consecutive visits to hotspots, as well as to coordinate multiple patrollers and impart unpredictability to patrol routes. Previous studies have designed different strategies for routing police patrols, but these strategies have difficulty generalising to real patrolling and meeting various requirements. In this research we develop a new police patrolling strategy based on a Bayesian method and the ant colony algorithm. In this strategy, a virtual marker (pheromone) is laid to mark the visiting history of each crime hotspot, and patrollers continuously decide which hotspot to patrol next based on the pheromone level and other variables. Simulation results using real data testify to the effective, scalable, unpredictable and extensible nature of this strategy.
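A pheromone-and-distance decision rule of this kind can be sketched as follows. The scoring is a hypothetical stand-in (the paper's exact variables are not specified here): a hotspot's attraction grows with the time since its last visit and falls with distance, and a stochastic roulette-wheel pick keeps routes unpredictable:

```python
import math
import random

def patrol_scores(pos, hotspots, last_visit, t, beta=1.0):
    """Attraction of each hotspot: grows with idle time since its last
    visit (the inverse of the laid pheromone) and falls with distance."""
    return {h: (t - last_visit[h] + 1.0) / (math.dist(pos, xy) ** beta or 1e-9)
            for h, xy in hotspots.items()}

def choose_next(pos, hotspots, last_visit, t, rng=random):
    """Roulette-wheel pick proportional to the scores; the randomness is
    what keeps the resulting routes unpredictable."""
    scores = patrol_scores(pos, hotspots, last_visit, t)
    r = rng.random() * sum(scores.values())
    for h, s in scores.items():
        r -= s
        if r <= 0:
            return h
    return h

hotspots = {'A': (0, 0), 'B': (10, 0)}
last_visit = {'A': 9, 'B': 0}        # B idle longer, but A is much closer
nxt = choose_next((1, 0), hotspots, last_visit, t=10)
```

Resetting `last_visit[h]` on arrival plays the role of evaporating the hotspot's accumulated attraction, so neglected hotspots steadily pull patrollers back.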

  10. On extracting brightness temperature maps from scanning radiometer data. [techniques for algorithm design

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Garza-Robles, R.

    1980-01-01

    The extraction of brightness temperature maps from scanning radiometer data is described as a typical linear inverse problem. Spatial quantization followed by parameter estimation is described and suggested as an advantageous approach to a solution. Since this approach takes explicit account of the multivariate nature of the problem, it permits an accurate determination of the most detailed resolution extractable from the data as well as explicitly defining the possible compromises between accuracy and resolution. To illustrate the usefulness of the method for algorithm design and accuracy prediction, it was applied to the problem of providing brightness temperature maps during the NOSS flight segment. The most detailed possible resolution was determined, and a curve displaying the possible compromises between accuracy and resolution was provided.

  11. A millimeter wave image fusion algorithm design and optimization based on CDF97 wavelet transform

    NASA Astrophysics Data System (ADS)

    Yu, Jian-cheng; Chen, Bo-yang; Xia, A.-lin; Liu, Xin-guang

    2011-08-01

    Millimeter wave imaging technology provides a new, fast and safe detection method for security screening. However, millimeter wave images have their own shortcomings, such as noise and low sensitivity. In security systems, only the information corresponding to specific objects is retained and other information is missing, so targets are difficult to locate in the millimeter wave image alone. Image fusion can effectively solve this problem. Visible and millimeter-wave images are commonly fused: the visible image contributes the visual context, so the fused image makes on-site detection of concealed weapons more convenient and provides accurate positioning. Because the information comes from different detectors, the two images differ in both signal-to-noise ratio and pixel resolution, so traditional pixel-level fusion methods often cannot produce a satisfactory fusion. Many researchers have applied wavelet transform approaches to remote sensing image fusion and greatly improved its performance, but the complexity and computational cost of these wavelet algorithms keep many of them at the research stage. In order to improve fusion performance and achieve real-time image fusion, an integer wavelet transform CDF97-based fusion algorithm with regional energy enhancement is proposed in this paper. First, the choice of wavelet operator is studied; several characteristics are introduced to evaluate the performance of wavelet operators used in image fusion. Results show that CDF97 wavelet fusion performance is better than that of traditional wavelets such as the db wavelets, and the longer the vanishing moment, the better. The CDF97 wavelet has a good energy concentration characteristic: the low-frequency region of the transformed image contains almost the whole image energy.
    The target in a millimeter wave image often has low-pass characteristics and a higher energy compared to the ambient
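A regional-energy fusion rule of the kind described can be sketched as follows. This is a simplified stand-in, not the paper's CDF97 implementation: it operates on already-transformed coefficient arrays and keeps, at each position, the coefficient from whichever source has the larger energy over a small neighbourhood:

```python
import numpy as np

def fuse_by_regional_energy(c_vis, c_mmw, win=3):
    """Coefficient-level fusion: at each position keep the coefficient
    whose source has the larger energy over a win x win neighbourhood."""
    pad = win // 2
    def energy(c):
        p = np.pad(c ** 2, pad, mode='edge')
        # Sum of squared coefficients over the sliding window.
        return sum(p[i:i + c.shape[0], j:j + c.shape[1]]
                   for i in range(win) for j in range(win))
    return np.where(energy(c_vis) >= energy(c_mmw), c_vis, c_mmw)

c_vis = np.zeros((4, 4))
c_mmw = np.zeros((4, 4))
c_mmw[1, 1] = 5.0        # a strong millimeter-wave detail coefficient
fused = fuse_by_regional_energy(c_vis, c_mmw)
```

Because the millimeter-wave target carries higher local energy than its surroundings, such a rule preserves the concealed-object response while the visible image supplies the rest of the scene.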

  12. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    SciTech Connect

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate for the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass fueled cookstove used in developing nations for household cooking. In this cookstove wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove's cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion of this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
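The graph idea can be sketched on a toy problem. The sketch below is illustrative only (a one-max task, not the paper's cookstove): mating is restricted to ring neighbours, so good genes spread slowly around the ring and diversity persists longer than with panmictic mating:

```python
import random

def gbea_onemax(n_pop=24, n_bits=16, steps=4000, seed=2):
    """Graph-based EA sketch on a ring topology: each mating event pairs
    an individual with one of its two ring neighbours only."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_pop)]
    fitness = lambda g: sum(g)                  # one-max: count the 1s
    for _ in range(steps):
        i = rng.randrange(n_pop)
        j = (i + rng.choice((-1, 1))) % n_pop   # mate only with a ring neighbour
        cut = rng.randrange(1, n_bits)          # one-point crossover
        child = pop[i][:cut] + pop[j][cut:]
        if rng.random() < 0.1:                  # occasional single-bit mutation
            child[rng.randrange(n_bits)] ^= 1
        k = i if fitness(pop[i]) < fitness(pop[j]) else j
        if fitness(child) >= fitness(pop[k]):   # child replaces the weaker parent
            pop[k] = child
    return max(fitness(g) for g in pop)

best = gbea_onemax()
```

Swapping the ring neighbourhood for "any individual may mate with any other" recovers an ordinary panmictic EA, which is precisely the comparison the choice of graph controls.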

  13. Algorithms and theory for the design and programming of industrial control systems materialized with PLC's

    NASA Astrophysics Data System (ADS)

    Montoya Villena, Rafael

    According to its title, the general objective of this thesis is to develop a clear, simple and systematic methodology for programming PLC-type devices. With this aim in mind, we use the following elements: Codification of all variable types. This section is very important since it allows us to work with little information; the necessary rules are given to codify every type of phrase produced in industrial processes. An algorithm that describes process evolution, which has been called the process D.F. This is one of the most important contributions, since together with information codification it allows us to represent the process evolution graphically and with any design theory used. Theory selection. Evidently, some design method is needed to obtain the logic equations. For this particular case we use binodal theory, an ideal theory for wired technologies, since it can obtain highly reduced schemas for relatively simple automatisms, which means a minimum number of components. User program outline algorithm (D.F.P.). This is another necessary contribution and perhaps the most important one, since the logic equations resulting from binodal theory are compatible with process evolution if wired technology is used, whether electric, electronic, pneumatic, etc. On the other hand, the performance characteristics of PLC devices mean that the order of the program instructions determines whether the automatism is valid, as we have proven in different articles and lectures at both national and international congresses. Therefore, we codify all information concerning the process to be automated, graphically represent its temporal evolution and, applying binodal theory and the D.F.P. (previously adapted), succeed in making the logic equations compatible with the process to be automated and the device in which they will be implemented (a PLC in our case).

  14. Evolutionary Design of one-dimensional Rule Changing cellular automata using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Yun, Wu; Kanoh, Hitoshi

    In this paper we propose a new method to obtain transition rules of one-dimensional two-state cellular automata (CAs) using genetic algorithms (GAs). CAs have the advantages of producing complex systems from the interaction of simple elements, and have attracted increased research interest. However, the difficulty of designing CAs' transition rules to perform a particular task has severely limited their applications. The evolutionary design of CA rules has been studied by the EVCA group in detail. A GA was used to evolve CAs for two tasks: density classification and synchronization problems. That GA was shown to have discovered rules that gave rise to sophisticated emergent computational strategies. Sipper has studied a cellular programming algorithm for 2-state non-uniform CAs, in which each cell may contain a different rule. Meanwhile, Land and Belew proved that the perfect two-state rule for performing the density classification task does not exist. However, Fukś showed that a pair of human-written rules performs the task perfectly when the neighborhood radius is one. In this paper, we consider a pair of rules and the number of rule iterations as a chromosome, whereas the EVCA group considers a rule as a chromosome. The present method is meant to reduce the complexity of a given problem by dividing the problem into smaller ones and assigning a distinct rule to each one. Experimental results for the two tasks prove that our method is more efficient than a conventional method. Some of the obtained rules agree with the human-written rules shown by Fukś. We also grouped 1000 rules with high fitness into 4 classes according to Langton's λ parameter. The rules obtained by the proposed method belong to Class I, II, III or IV, whereas most of the rules obtained by the conventional method belong to Class IV only. This result shows that the combination of simple rules can perform complex tasks.
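The human-written pair of radius-1 rules credited to Fukś is commonly identified as traffic rule 184 followed by majority rule 232, applied for prescribed numbers of steps on an odd-sized ring. A small sketch (function names are illustrative):

```python
def step(cells, rule):
    """One synchronous update of an elementary (radius-1) CA on a ring."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def classify_density(cells):
    """Two-rule density classification: traffic rule 184, then majority
    rule 232; the ring settles to all 1s iff 1s started in the majority
    (exact for odd ring sizes)."""
    n = len(cells)
    for _ in range((n - 2) // 2):
        cells = step(cells, 184)
    for _ in range((n - 1) // 2):
        cells = step(cells, 232)
    return cells[0]
```

Rule 184 spreads the cells out like traffic so that rule 232's local majority vote becomes globally correct, which is the kind of division into simpler sub-tasks the paper's rule-pair chromosome is designed to discover.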

  15. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins

    PubMed Central

    Afanasyev, Vsevolod; Buldyrev, Sergey V.; Dunn, Michael J.; Robst, Jeremy; Preston, Mark; Bremner, Steve F.; Briggs, Dirk R.; Brown, Ruth; Adlard, Stacey; Peat, Helen J.

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge’s accurate performance and demonstrates how its design is a significant improvement on existing systems. PMID:25894763

  17. 40 CFR 55.15 - Specific designation of corresponding onshore areas.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Specific designation of corresponding onshore areas. 55.15 Section 55.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) OUTER CONTINENTAL SHELF AIR REGULATIONS § 55.15 Specific designation...

  18. 40 CFR 55.15 - Specific designation of corresponding onshore areas.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Specific designation of corresponding onshore areas. 55.15 Section 55.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) OUTER CONTINENTAL SHELF AIR REGULATIONS § 55.15 Specific designation of corresponding onshore areas. (a) California....

  19. GSP: A web-based platform for designing genome-specific primers in polyploids

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The sequences among subgenomes in a polyploid species have high similarity. This makes it difficult to design genome-specific primers for sequence analysis. We present a web-based platform named GSP for designing genome-specific primers to distinguish subgenome sequences in the polyploid genome backgr...

  20. 78 FR 33863 - Relationship Between General Design Criteria and Technical Specification Operability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... this RIS in the Federal Register (77 FR 45282) on July 31, 2012. The agency received comments from two... COMMISSION Relationship Between General Design Criteria and Technical Specification Operability AGENCY... Relationship Between General Design Criteria and Technical Specification Operability.'' This RIS clarifies...

  1. General asymmetric neutral networks and structure design by genetic algorithms: A learning rule for temporal patterns

    SciTech Connect

    Bornholdt, S.; Graudenz, D.

    1993-07-01

    A learning algorithm based on genetic algorithms for asymmetric neural networks with an arbitrary structure is presented. It is suited for the learning of temporal patterns and leads to stable neural networks with feedback.

  2. Preliminary design of the Carrisa Plains solar central receiver power plant. Volume II. Plant specifications

    SciTech Connect

    Price, R. E.

    1983-12-31

    The specifications and design criteria for all plant systems and subsystems used in developing the preliminary design of the Carrisa Plains 30-MWe Solar Plant are contained in this volume. The specifications have been organized according to plant systems and levels. The levels are arranged in tiers. Starting at the top tier and proceeding down, the specification levels are the plant, system, subsystem, components, and fabrication. A tab number, listed in the index, has been assigned to each document to facilitate document location.

  3. Using adaptive genetic algorithms in the design of morphological filters in textural image processing

    NASA Astrophysics Data System (ADS)

    Li, Wei; Haese-Coat, Veronique; Ronsin, Joseph

    1996-03-01

    An adaptive GA scheme is adopted for the optimal morphological filter design problem. Adaptive crossover and mutation rates, which help the GA avoid premature convergence while still assuring convergence of the search, are successfully used in the optimal morphological filter design procedure. In the string coding step, each string (chromosome) is composed of a structuring-element coding chain concatenated with a filter-sequence coding chain. In the decoding step, each string is divided into three chains, which are then decoded respectively into one structuring element no larger than 5 by 5 and two concatenated morphological filter operators. The fitness function in the GA is based on the mean-square-error (MSE) criterion. In the string selection step, a stochastic tournament procedure is used in place of the simple roulette-wheel scheme in order to accelerate convergence. The final convergence of the algorithm is reached by a two-step converging strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences, the obtained MSE values are smaller than those of the corresponding non-adaptive morphological filters, and the optimized shapes and orientations of the structuring elements approximate the shapes and orientations of the image textons.
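
    The adaptive-rate idea described above can be sketched as follows. This is not the authors' exact formula; it follows the well-known Srinivas and Patnaik scheme in which fitter-than-average individuals receive smaller crossover/mutation probabilities (preserving good building blocks) while below-average individuals are perturbed at the full rate, and the constants k1, k2 are illustrative.

```python
def adaptive_rates(f, f_avg, f_max, k1=1.0, k2=0.5):
    """Return (p_crossover, p_mutation) for an individual of fitness f
    (higher is better), scaled by its position relative to the population."""
    if f_max == f_avg:            # degenerate case: population has converged
        return k1, k2
    if f >= f_avg:                # scale rates down toward 0 at f_max
        scale = (f_max - f) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k1, k2                 # below average: full disruption

fits = [0.2, 0.5, 0.8, 1.0]
f_avg, f_max = sum(fits) / len(fits), max(fits)
for f in fits:
    pc, pm = adaptive_rates(f, f_avg, f_max)
    print(f, round(pc, 3), round(pm, 3))
```

    The best individual gets zero crossover/mutation probability, so the incumbent solution is never destroyed, which is one common way such schemes reconcile exploration with guaranteed retention of the best string.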

  4. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  5. SU-E-T-316: The Design of a Risk Index Method for 3D Patient Specific QA

    SciTech Connect

    Cho, W; Wu, H; Xing, L; Suh, T

    2014-06-01

    Purpose: To suggest new guidance for the evaluation of 3D patient-specific QA, a structure-specific risk-index (RI) method was designed and implemented. Methods: A new algorithm was designed to assign a score of Pass, Fail or Pass with Risk to every 3D voxel in each structure by extending the conventional Gamma Index (GI) algorithm; the score indicates the degree of risk of under-dose to the treatment target or over-dose to the organs at risk (OARs). Structure-specific distance to agreement (DTA), dose difference and minimum checkable dose were applied to the GI algorithm, and additional parameters such as a dose gradient factor and structure dose limits were incorporated into the RI method. A maximum passing rate (PR) and a minimum PR were defined and calculated for each structure with the RI method. 3D doses were acquired from a spine SBRT plan by simulating shifts of the beam iso-center, and tested to show the feasibility of the suggested method. Results: When the iso-center was shifted by 1 mm, 2 mm, and 3 mm, the PRs of the conventional GI method between shifted and non-shifted 3D doses were 99.9%, 97.4%, and 89.7% for the PTV; 99.8%, 84.8%, and 63.2% for the spinal cord; and 100%, 99.5%, and 91.7% for the right lung. The minimum PRs from the RI method were 98.9%, 96.9%, and 89.5% for the PTV; 96.1%, 79.3%, and 57.5% for the spinal cord; and 92.5%, 92.0%, and 84.4% for the right lung, respectively. The maximum PRs from the RI method were equal to or less than the PRs from the conventional GI evaluation. Conclusion: The designed 3D RI method showed a stricter acceptance level than the conventional GI method, especially for OARs. The RI method is expected to give the degree of risk in the delivered doses, as well as the degree of agreement between calculated 3D doses and measured (or simulated) 3D doses.
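
    For readers unfamiliar with the conventional evaluation the RI method builds on, here is a minimal 1-D gamma-index sketch. The grid spacing, 3%/3 mm-style criteria, and dose profiles are illustrative stand-ins, not the paper's data or its 3D implementation.

```python
import math

def gamma_index(ref, evl, spacing, dta=3.0, dd=0.03):
    """Per-point gamma for two dose profiles sampled on the same grid.
    gamma <= 1 means the point passes the combined dose/distance test."""
    gammas = []
    norm = max(ref)                      # global dose normalization
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_evl in enumerate(evl):
            dr = (j - i) * spacing / dta          # distance term
            dD = (d_evl - d_ref) / (dd * norm)    # dose-difference term
            best = min(best, math.hypot(dr, dD))
        gammas.append(best)
    return gammas

ref = [0.0, 50.0, 100.0, 50.0, 0.0]
evl = [0.0, 52.0, 100.0, 48.0, 0.0]
g = gamma_index(ref, evl, spacing=1.0)
passing_rate = sum(1 for x in g if x <= 1.0) / len(g)
print([round(x, 2) for x in g], passing_rate)
```

    The RI method described above layers structure-specific criteria and risk scoring on top of exactly this kind of per-voxel pass/fail computation.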

  6. Design of an IRFPA nonuniformity correction algorithm to be implemented as a real-time hardware prototype

    NASA Astrophysics Data System (ADS)

    Fenner, Jonathan W.; Simon, Solomon H.; Eden, Dayton D.

    1994-07-01

    As new IR focal plane array (IRFPA) technologies become available, improved methods for coping with array errors must be developed. Traditional methods of nonuniformity correction using a simple calibration mode are not adequate to compensate for the inherent nonuniformity and 1/f noise in some arrays. In an effort to compensate for nonuniformity in a HgCdTe IRFPA, and to reduce the effects of 1/f noise over a time interval, a new dynamic neural network (NN) based algorithm was implemented. The algorithm compensates for nonuniformities and corrects for 1/f noise. A gradient descent algorithm is used with nearest-neighbor feedback for training, creating a dynamic model of the IRFPA's gains and offsets, then updating and correcting them continuously. Improvements to the NN include implementation on an IBM 486 computer system and a close examination of simulated scenes to test the algorithm's boundaries. Preliminary designs for a real-time hardware prototype have been developed as well. Simulations were run to test the algorithm's ability to correct under a variety of conditions. A wide range of background noise, 1/f noise, object intensities, and background intensities was used. Results indicate that this algorithm can correct efficiently down to the background noise. Our conclusion is that NN-based adaptive algorithms will enhance the effectiveness of IRFPAs.
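
    The gradient-descent-with-nearest-neighbor-feedback idea can be sketched as a Scribner-style LMS update; this is an assumption about the general technique, not the paper's implementation, and the learning rate and per-pixel gain/offset model are illustrative.

```python
def nuc_update(raw, gain, offset, lr=0.002):
    """One training pass over a 2-D frame: each pixel's corrected value is
    pulled toward the average of its corrected 4-neighborhood (the 'desired'
    signal), and gain/offset follow the LMS gradient of the squared error."""
    rows, cols = len(raw), len(raw[0])
    corrected = [[gain[i][j] * raw[i][j] + offset[i][j]
                  for j in range(cols)] for i in range(rows)]
    for i in range(rows):
        for j in range(cols):
            nbrs = [corrected[a][b]
                    for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < rows and 0 <= b < cols]
            desired = sum(nbrs) / len(nbrs)
            err = corrected[i][j] - desired
            gain[i][j] -= lr * err * raw[i][j]   # dE/dgain = err * input
            offset[i][j] -= lr * err             # dE/doffset = err
    return corrected
```

    Run repeatedly over frames of a scene, the per-pixel gains and offsets drift so that fixed-pattern deviations from the local neighborhood (the nonuniformity) are driven toward zero, which mirrors the continuous update-and-correct loop described in the abstract.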

  7. Selection of pairings reaching evenly across the data (SPREAD): A simple algorithm to design maximally informative fully crossed mating experiments.

    PubMed

    Zimmerman, K; Levitis, D; Addicott, E; Pringle, A

    2016-02-01

    We present a novel algorithm for the design of crossing experiments. The algorithm identifies a set of individuals (a 'crossing-set') from a larger pool of potential crossing-sets by maximizing the diversity of traits of interest, for example, maximizing the range of genetic and geographic distances between individuals included in the crossing-set. To calculate diversity, we use the mean nearest neighbor distance of crosses plotted in trait space. We implement our algorithm on a real dataset of Neurospora crassa strains, using the genetic and geographic distances between potential crosses as a two-dimensional trait space. In simulated mating experiments, crossing-sets selected by our algorithm provide better estimates of underlying parameter values than randomly chosen crossing-sets. PMID:26419337
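
    The selection criterion described above can be sketched directly: score each candidate crossing-set by the mean nearest-neighbor distance of its crosses in trait space and keep the highest-scoring set. The exhaustive subset search and the trait coordinates below are illustrative assumptions, not the authors' code or data.

```python
from itertools import combinations
import math

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbor in the set."""
    total = 0.0
    for p in points:
        total += min(math.dist(p, q) for q in points if q is not p)
    return total / len(points)

def best_crossing_set(candidates, k):
    """Exhaustively pick the k-subset with the most evenly spread crosses."""
    return max(combinations(candidates, k), key=mean_nn_distance)

# Each tuple: (genetic distance, geographic distance) of a potential cross.
crosses = [(0.0, 0.0), (0.1, 0.1), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(best_crossing_set(crosses, 4))
```

    Note how the criterion rejects the near-duplicate cross at (0.1, 0.1) in favor of the four well-separated corners, which is exactly the "reaching evenly across the data" behavior the title describes.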

  8. BWM*: A Novel, Provable, Ensemble-based Dynamic Programming Algorithm for Sparse Approximations of Computational Protein Design.

    PubMed

    Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R

    2016-06-01

    Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem. PMID:26744898

  9. VLSI design of inverse-free Berlekamp-Massey algorithm for Reed-Solomon code

    NASA Astrophysics Data System (ADS)

    Truong, Trieu-Kien; Chang, Y. W.; Jeng, Jyh H.

    2001-11-01

    The inverse-free Berlekamp-Massey (BM) algorithm is the simplest technique for Reed-Solomon (RS) code to correct errors. In the decoding process, the BM algorithm is used to find the error locator polynomial with syndromes as the input. Later, the inverse-free BM algorithm is generalized to find the error locator polynomial with given erasure locator polynomial. By this means, the modified algorithm can be used for RS code to correct both errors and erasures. The improvement is achieved by replacing the input of the Berlekamp-Massey algorithm with the Forney syndromes instead of the syndromes. With this improved technique, the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. In this paper, the register transfer language of this modified BM algorithm is derived and the VLSI architecture is presented.
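
    Before the register-transfer-level derivation, the arithmetic core can be sketched in software. The toy below runs the inversion-free Berlekamp-Massey iteration over a small prime field GF(p); real RS decoders work over GF(2^m) and, as the abstract notes, feed in Forney syndromes to handle erasures, details omitted here. The key point is that the update C <- b*C - d*x^m*B needs no field inversion.

```python
P = 7  # toy field modulus (illustrative; RS codes use GF(2^m))

def inversionless_bm(s):
    """Return (C, L): C is a scalar multiple of the error-locator /
    shortest-LFSR polynomial for sequence s over GF(P), L its degree."""
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        # discrepancy d = sum_{i=0..L} C[i] * s[n-i]  (mod P)
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % P
        if d == 0:
            m += 1
        else:
            T = list(C)
            # C(x) <- b*C(x) - d*x^m*B(x): no inverse of b is ever needed
            shifted = [0] * m + [d * c % P for c in B]
            C = [(b * (C[i] if i < len(C) else 0)
                  - (shifted[i] if i < len(shifted) else 0)) % P
                 for i in range(max(len(C), len(shifted)))]
            if 2 * L <= n:
                L, B, b, m = n + 1 - L, T, d, 1
            else:
                m += 1
    return C, L

# Sequence satisfying s[n] = 3*s[n-1] + 2*s[n-2] (mod 7):
s = [1, 1]
for _ in range(6):
    s.append((3 * s[-1] + 2 * s[-2]) % P)
C, L = inversionless_bm(s)
print(C, L)
```

    Because the division is avoided, C comes out as a scalar multiple of the canonical connection polynomial; for locating error positions via a Chien search this scaling is irrelevant, which is what makes the inversion-free form attractive for VLSI.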

  10. Genetic Algorithm for Innovative Device Designs in High-Efficiency III–V Nitride Light-Emitting Diodes

    SciTech Connect

    Zhu, Di; Schubert, Martin F.; Cho, Jaehee; Schubert, E. Fred; Crawford, Mary H.; Koleske, Daniel D.; Shim, Hyunwook; Sone, Cheolsoo

    2012-01-01

    Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III–V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.

  11. Experimental design for estimating unknown hydraulic conductivity in an aquifer using a genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2015-12-01

    We develop an experimental design algorithm to select locations for a network of observation wells that provide the maximum robust information about unknown hydraulic conductivity in a confined, anisotropic aquifer. Since the information that a design provides is dependent on an aquifer's hydraulic conductivity, a robust design is one that provides the maximum information in the worst-case scenario. The design can be formulated as a max-min optimization problem. The problem is generally non-convex, non-differentiable, and contains integer variables. We use a Genetic Algorithm (GA) to perform the combinatorial search. We employ proper orthogonal decomposition (POD) to reduce the dimension of the groundwater model, thereby reducing the computational burden posed by employing a GA. The GA algorithm exhaustively searches for the robust design across a set of hydraulic conductivities and finds an approximate design (called the High Frequency Observation Well Design) through a Monte Carlo-type search. The results from a small-scale 1-D test case validate the proposed methodology. We then apply the methodology to a realistically-scaled 2-D test case.
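
    The max-min formulation above reduces to a simple nested optimization once each (design, conductivity) pair can be scored. The sketch below is illustrative only: the information values are made-up stand-ins for what the POD-reduced groundwater model would provide, and the exhaustive loop stands in for the GA's combinatorial search.

```python
def robust_design(designs, conductivities, info):
    """argmax over designs of (min over conductivity scenarios of info)."""
    best, best_worst = None, float("-inf")
    for d in designs:
        worst = min(info(d, k) for k in conductivities)
        if worst > best_worst:
            best, best_worst = d, worst
    return best, best_worst

# Toy example: design A has a higher peak, but B has the better worst case.
table = {("A", "k1"): 9.0, ("A", "k2"): 1.0,
         ("B", "k1"): 5.0, ("B", "k2"): 4.0}
d, v = robust_design(["A", "B"], ["k1", "k2"], lambda d, k: table[(d, k)])
print(d, v)
```

    The toy makes the robustness point concrete: the design with the highest information under one scenario is not the one chosen, because its worst-case information is poor.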

  12. Coupling Optimization Design of Aspirated Compressor Airfoil and Aspirated Scheme Based on Artificial Bee Colony Algorithm and CST Method

    NASA Astrophysics Data System (ADS)

    Li, Jun; Liu, Bo; Zhao, Yan; Yang, Xiaodong; Lu, Xiaofeng; Wang, Lei

    2015-04-01

    This paper presents a new design method that optimizes the aspirated compressor airfoil and the aspiration scheme simultaneously. The optimization method is based on the artificial bee colony algorithm and the CST method, while the flow field is computed by a 2D computational program. The optimization of the rotor-tip and stator-tip airfoils from an aspirated fan stage is demonstrated to verify the effectiveness of the new coupling method. The results show that the total pressure losses of the optimized stator-tip and rotor-tip airfoils are reduced, in relative terms, by 54% and 20%, respectively. The artificial bee colony algorithm and the CST method thus show satisfactory applicability to aspirated airfoil optimization design. Finally, the features of the aspirated airfoil design process are summarized.
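
    For context, the CST (class-shape transformation) parameterization that the optimizer searches over can be sketched as follows; the Bernstein weights below are arbitrary illustrative values, not the paper's optimized airfoil.

```python
from math import comb

def cst_thickness(x, weights, n1=0.5, n2=1.0, te=0.0):
    """y(x) = C(x) * S(x) + x*te, with class function C = x^n1 * (1-x)^n2
    (round nose, sharp trailing edge) and S a Bernstein-polynomial sum."""
    n = len(weights) - 1
    c = (x ** n1) * ((1.0 - x) ** n2)
    s = sum(w * comb(n, i) * x**i * (1.0 - x) ** (n - i)
            for i, w in enumerate(weights))
    return c * s + x * te

w = [0.17, 0.15, 0.16, 0.14]          # hypothetical upper-surface weights
ys = [cst_thickness(i / 10, w) for i in range(11)]
print([round(y, 4) for y in ys])
```

    The appeal for optimization is that a smooth, nose-rounded, trailing-edge-closed airfoil is guaranteed by the class function, so the bee colony algorithm only has to search a handful of Bernstein weights.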

  13. Design of an iterative auto-tuning algorithm for a fuzzy PID controller

    NASA Astrophysics Data System (ADS)

    Saeed, Bakhtiar I.; Mehrdadi, B.

    2012-05-01

    Since the first application of fuzzy logic in the field of control engineering, it has been extensively employed in controlling a wide range of applications. Human knowledge of controlling complex and non-linear processes can be incorporated into a controller in the form of linguistic terms. However, given the lack of analytical design studies, it is difficult to auto-tune controller parameters. A fuzzy logic controller has several parameters that can be adjusted, such as the membership functions, rule-base and scaling gains. Furthermore, it is not always easy to find the relation between the type of membership functions or rule-base and the controller performance. This study proposes a new systematic auto-tuning algorithm to fine-tune fuzzy logic controller gains. A fuzzy PID controller is proposed and applied to several second-order systems. The relationship between the closed-loop response and the controller parameters is analysed to devise an auto-tuning method. The results show that the proposed method is highly effective and produces zero overshoot with enhanced transient response. In addition, the robustness of the controller is investigated in the case of parameter changes and the results show a satisfactory performance.

  14. Design of electrocardiography measurement system with an algorithm to remove noise

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Oh, Sechang; Kumar, Prashanth; Varadan, Vijay K.

    2011-04-01

    Electrocardiography (ECG) is an important diagnostic tool that can provide vital information about diseases that may not be detectable with other biological signals such as SpO2 (oxygen saturation), pulse rate, respiration, and blood pressure. For this reason, ECG measurement is essential for accurate diagnosis. Recent developments in information technology have facilitated remote monitoring systems that can check a patient's current status. Moreover, remote monitoring systems can obviate the need for patients to visit hospitals periodically. A representative wireless communication system is the ZigBee sensor network, which provides low power consumption and multi-device connectivity. When measuring the ECG signal, another important factor to consider is the unexpected signals mixed into it, which can severely distort the original waveform. Three types of noise are involved: muscle noise, movement noise, and respiration noise. This paper describes the design method for an ECG measurement system with a ZigBee sensor network and proposes an algorithm to remove noise from the measured ECG signal.
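
    A simple two-stage denoising pass of the general kind the abstract alludes to is sketched below; the paper's actual algorithm is not given here, and the window lengths and synthetic signal are illustrative assumptions. A short moving average suppresses high-frequency muscle noise, and subtracting a long moving average removes baseline wander from respiration or movement.

```python
import math

def moving_average(x, w):
    """Centered moving average with edge clamping; w should be odd."""
    h = w // 2
    return [sum(x[max(0, i - h):i + h + 1]) / len(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]

def denoise_ecg(signal, hf_win=3, baseline_win=51):
    smoothed = moving_average(signal, hf_win)          # suppress HF noise
    baseline = moving_average(smoothed, baseline_win)  # estimate slow drift
    return [s - b for s, b in zip(smoothed, baseline)]

# Synthetic test signal: slow drift + one QRS-like spike + alternating noise.
raw = [0.5 * math.sin(2 * math.pi * i / 200)        # baseline wander
       + (1.0 if i == 100 else 0.0)                  # spike
       + 0.05 * (-1) ** i                            # high-frequency noise
       for i in range(200)]
clean = denoise_ecg(raw)
```

    On the synthetic signal, the drift component is largely removed while the spike (the clinically meaningful feature) survives, illustrating the trade-off any such filter design must manage.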

  15. Towards the optimal design of an uncemented acetabular component using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Ghosh, Rajesh; Pratihar, Dilip Kumar; Gupta, Sanjay

    2015-12-01

    Aseptic loosening of the acetabular component (hemispherical socket of the pelvic bone) has been mainly attributed to bone resorption and excessive generation of wear particle debris. The aim of this study was to determine optimal design parameters for the acetabular component that would minimize bone resorption and volumetric wear. Three-dimensional finite element models of intact and implanted pelvises were developed using data from computed tomography scans. A multi-objective optimization problem was formulated and solved using a genetic algorithm. A combination of suitable implant material and corresponding set of optimal thicknesses of the component was obtained from the Pareto-optimal front of solutions. The ultra-high-molecular-weight polyethylene (UHMWPE) component generated considerably greater volumetric wear but lower bone density loss compared to carbon-fibre reinforced polyetheretherketone (CFR-PEEK) and ceramic. CFR-PEEK was located in the range between ceramic and UHMWPE. Although ceramic appeared to be a viable alternative to cobalt-chromium-molybdenum alloy, CFR-PEEK seems to be the most promising alternative material.
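
    The Pareto-optimal front mentioned above rests on a simple dominance test: a design dominates another if it is no worse in both objectives (here, bone-density loss and volumetric wear, both minimized) and strictly better in at least one. The sketch below illustrates the extraction step with made-up objective values, not the study's results.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (bone resorption score, volumetric wear score) for hypothetical designs.
designs = [(1.0, 9.0), (2.0, 5.0), (4.0, 4.0), (3.0, 6.0), (8.0, 1.0)]
print(pareto_front(designs))
```

    A multi-objective GA maintains exactly such a front across generations, which is how a trade-off curve between wear and bone resorption, rather than a single compromise design, is obtained.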

  16. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process, searching for a better solution that minimizes the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
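
    The two-level structure (SA as the outer acceptance loop, a GA-style operator as the inner proposal generator) can be sketched as follows. This is a generic skeleton under stated assumptions, not the authors' procedure: the cost function is a stand-in for the user-plus-operator cost, and the "crossover" simply recombines the current subset with a random parent subset.

```python
import math
import random

def sa_with_ga(candidates, cost, iters=2000, t0=1.0, cooling=0.999, seed=1):
    rng = random.Random(seed)
    current = frozenset(rng.sample(candidates, 3))   # initial 3-route subset
    best, t = current, t0
    for _ in range(iters):
        # GA-style proposal: crossover current with a random parent subset
        parent = frozenset(rng.sample(candidates, 3))
        pool = list(current | parent)
        child = frozenset(rng.sample(pool, 3))
        # SA acceptance: always take improvements, sometimes take worse moves
        if cost(child) < cost(current) or rng.random() < math.exp(
                (cost(current) - cost(child)) / t):
            current = child
        if cost(current) < cost(best):
            best = current
        t *= cooling
    return best

# Toy cost: prefer the route subset {0, 1, 2}.
target = {0, 1, 2}
cost = lambda s: len(target.symmetric_difference(s))
print(sorted(sa_with_ga(list(range(8)), cost)))
```

    The cooling schedule makes uphill moves progressively rarer, so early iterations explore recombinations broadly while late iterations settle into a low-cost route subset.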

  17. Designing mixed metal halide ammines for ammonia storage using density functional theory and genetic algorithms.

    PubMed

    Jensen, Peter Bjerre; Lysgaard, Steen; Quaade, Ulrich J; Vegge, Tejs

    2014-09-28

    Metal halide ammines have great potential as a future high-density energy carrier in vehicles. The materials known so far, e.g. Mg(NH3)6Cl2 and Sr(NH3)8Cl2, are not suitable for automotive fuel cell applications, because the release of ammonia is a multi-step reaction requiring too much heat to be supplied, which lowers the total efficiency. Here, we apply density functional theory (DFT) calculations to predict new mixed metal halide ammines with improved storage capacities and the ability to release the stored ammonia in one step, at temperatures suitable for system integration with polymer electrolyte membrane fuel cells (PEMFCs). We use genetic algorithms (GAs) to search for materials containing up to three different metals (alkaline-earth, 3d and 4d) and two different halides (Cl, Br and I) - almost 27,000 combinations - and have identified novel mixtures with significantly improved storage capacities. The size of the search space and the chosen fitness function make it possible to verify that the found candidates are the best possible candidates in the search space, proving that the GA implementation is ideal for this kind of computational materials design, requiring calculations on less than two percent of the candidates to identify the global optimum. PMID:25115581

  18. Combining Interactive Infrastructure Modeling and Evolutionary Algorithm Optimization for Sustainable Water Resources Design

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2013-12-01

    Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUI) are often used to evaluate infrastructure portfolios; these GUI based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Multiobjective evolutionary algorithm (MOEA) based optimization can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using a MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs and reveal how formulations should be modified to better capture stakeholders' preferences.

  19. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  20. Practical Findings from Applying the PSD Model for Evaluating Software Design Specifications

    NASA Astrophysics Data System (ADS)

    Räisänen, Teppo; Lehto, Tuomas; Oinas-Kukkonen, Harri

    This paper presents practical findings from applying the PSD model to evaluating the support for persuasive features in software design specifications for a mobile Internet device. On the one hand, our experiences suggest that the PSD model fits relatively well for evaluating design specifications. On the other hand, the model would benefit from more specific heuristics for evaluating each technique to avoid unnecessary subjectivity. Better distinction between the design principles in the social support category would also make the model easier to use. Practitioners who have no theoretical background can apply the PSD model to increase the persuasiveness of the systems they design. The greatest benefit of the PSD model for researchers designing new systems may be achieved when it is applied together with a sound theory, such as the Elaboration Likelihood Model. Using the ELM together with the PSD model, one may increase the chances for attitude change.

  1. Efficient design method for cell allocation in hybrid CMOS/nanodevices using a cultural algorithm with chaotic behavior

    NASA Astrophysics Data System (ADS)

    Pan, Zhong-Liang; Chen, Ling; Zhang, Guang-Zhao

    2016-04-01

    The hybrid CMOS molecular (CMOL) circuit, which combines complementary metal-oxide-semiconductor (CMOS) components with nanoscale wires and switches, can exhibit significantly improved performance. In CMOL circuits, the nanodevices, which are called cells, should be placed appropriately and are connected by nanowires. The cells should be connected such that they follow the shortest path. This paper presents an efficient method of cell allocation in CMOL circuits with the hybrid CMOS/nanodevice structure; the method is based on a cultural algorithm with chaotic behavior. The optimal model of cell allocation is derived, and the coding of an individual representing a cell allocation is described. Then the cultural algorithm with chaotic behavior is designed to solve the optimal model. The cultural algorithm consists of a population space, a belief space, and a protocol that describes how knowledge is exchanged between the population and belief spaces. In this paper, the evolutionary processes of the population space employ a genetic algorithm in which three populations undergo parallel evolution. The evolutionary processes of the belief space use a chaotic ant colony algorithm. Extensive experiments on cell allocation in benchmark circuits showed that a low area usage can be obtained using the proposed method, and the computation time can be reduced greatly compared to that of a conventional genetic algorithm.
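The paper's cultural algorithm pairs a population space with a belief space that stores accumulated knowledge. A heavily simplified sketch of that protocol (one real-valued variable, situational knowledge only, no parallel populations and no chaotic ant colony component, toy objective) might look like:

```python
import random

def evolve(fitness, bounds, pop_size=30, generations=100, seed=1):
    """Minimal cultural-algorithm sketch: a GA population space guided by a
    belief space that records the best solution found so far (situational
    knowledge) and biases variation toward it."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    belief = min(pop, key=fitness)          # belief space: situational knowledge
    for _ in range(generations):
        offspring = []
        for x in pop:
            # influence step: pull mutants toward the belief-space exemplar
            step = rng.gauss(0, 0.1) * (hi - lo)
            child = x + 0.5 * (belief - x) + step
            offspring.append(min(hi, max(lo, child)))
        # selection: keep the best pop_size of parents + offspring
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
        # acceptance step: update the belief space from the population
        belief = min(pop + [belief], key=fitness)
    return belief

best = evolve(lambda x: (x - 2.0) ** 2, bounds=(-10.0, 10.0))
```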

  2. QXP: powerful, rapid computer algorithms for structure-based drug design.

    PubMed

    McMartin, C; Bohacek, R S

    1997-07-01

    New methods for docking, template fitting and building pseudo-receptors are described. Full conformational searches are carried out for flexible cyclic and acyclic molecules. QXP (quick explore) search algorithms are derived from the method of Monte Carlo perturbation with energy minimization in Cartesian space. An additional fast search step is introduced between the initial perturbation and energy minimization. The fast search produces approximate low-energy structures, which are likely to minimize to a low energy. For template fitting, QXP uses a superposition force field which automatically assigns short-range attractive forces to similar atoms in different molecules. The docking algorithms were evaluated using X-ray data for 12 protein-ligand complexes. The ligands had up to 24 rotatable bonds and ranged from highly polar to mostly nonpolar. Docking searches of the randomly disordered ligands gave rms differences between the lowest energy docked structure and the energy-minimized X-ray structure of less than 0.76 A for 10 of the ligands. For all the ligands, the rms difference between the energy-minimized X-ray structure and the closest docked structure was less than 0.4 A when parts of the molecule in the solvent were excluded from the rms calculation. Template fitting was tested using four ACE inhibitors. Three ACE templates have been previously published; a single run using QXP generated a series of templates which contained examples of each of the three. A pseudo-receptor, complementary to an ACE template, was built out of small molecules such as pyrrole, cyclopentanone and propane. When individually energy minimized in the pseudo-receptor, each of the four ACE inhibitors moved with an rms of less than 0.25 A. After random perturbation, the inhibitors were docked into the pseudo-receptor. Each lowest energy docked structure matched the energy-minimized geometry with an rms of less than 0.08 A. Thus, the pseudo-receptor shows steric and …
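The QXP search scheme, Monte Carlo perturbation followed by energy minimization, can be sketched as a basin-hopping style loop. The 1-D "energy" below is a toy surface, not a molecular force field, and the step sizes are arbitrary:

```python
import math, random

def minimize_local(energy, x, step=1e-3, iters=500):
    """Crude 'energy minimization': gradient descent with finite differences."""
    for _ in range(iters):
        g = (energy(x + 1e-6) - energy(x - 1e-6)) / 2e-6
        x -= step * g
    return x

def mc_min(energy, x0, n_cycles=100, kT=1.0, seed=3):
    """Monte Carlo perturbation + minimization, the general scheme the QXP
    abstract describes: perturb, minimize, accept by the Metropolis rule."""
    rng = random.Random(seed)
    x = minimize_local(energy, x0)
    best = x
    for _ in range(n_cycles):
        trial = minimize_local(energy, x + rng.uniform(-2, 2))  # perturb + minimize
        if energy(trial) < energy(best):
            best = trial
        # Metropolis criterion decides the next starting point
        if energy(trial) < energy(x) or rng.random() < math.exp((energy(x) - energy(trial)) / kT):
            x = trial
    return best

# Toy rugged 1-D "energy" with its global minimum near x = 1.56
E = lambda x: 0.1 * (x - 1.2) ** 2 + math.sin(3 * x)
xmin = mc_min(E, x0=-4.0)
```

QXP additionally inserts a fast approximate search between the perturbation and the full minimization; that intermediate step is omitted here.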

  3. HAL/SM system functional design specification. [systems analysis and design analysis of central processing units

    NASA Technical Reports Server (NTRS)

    Ross, C.; Williams, G. P. W., Jr.

    1975-01-01

    The functional design of a preprocessor and its subsystems is described. A structure chart and a data flow diagram are included for each subsystem, followed immediately by a group of intermodule interface definitions (one definition per module). Each intermodule interface definition identifies the module, states the function the module is to perform, identifies and defines the module's parameter interfaces, and records any design notes associated with the module. Compilers and computer libraries are also described.

  4. Designing, Visualizing, and Discussing Algorithms within a CS 1 Studio Experience: An Empirical Study

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Brown, Jonathan L.

    2008-01-01

    Within the context of an introductory CS1 unit on algorithmic problem-solving, we are exploring the pedagogical value of a novel active learning activity--the "studio experience"--that actively engages learners with algorithm visualization technology. In a studio experience, student pairs are tasked with (a) developing a solution to an algorithm…

  5. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design: difficult, complex, and marked by redundant effort. Automatic generation of database software systems has been proposed as a solution. To generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation: the technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications, and the mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  6. MEPSA: A flexible peak search algorithm designed for uniformly spaced time series

    NASA Astrophysics Data System (ADS)

    Guidorzi, C.

    2015-04-01

    We present a novel algorithm aimed at identifying peaks within a uniformly sampled time series affected by uncorrelated Gaussian noise. The algorithm, called MEPSA (multiple excess peak search algorithm), essentially scans the time series at different timescales by comparing a given peak candidate with a variable number of adjacent bins. While it was originally conceived for the analysis of gamma-ray burst (GRB) light curves, its usage can be readily extended to other astrophysical transient phenomena whose activity is recorded through different surveys. We tested and validated it through simulated featureless profiles as well as simulated GRB time profiles. We showcase the algorithm's potential by comparing it with the popular algorithm of Li and Fenimore, which is frequently adopted in the literature. Thanks to its high flexibility, the mask of excess patterns used by MEPSA can be tailored and optimised to the kind of data to be analysed without modifying the code. The C code is made publicly available.
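A stripped-down version of multi-timescale peak hunting, rebinning the series and flagging bins that exceed both neighbours by k standard deviations, conveys the idea; MEPSA's actual mask of excess patterns is more elaborate and configurable:

```python
import statistics

def find_peaks(series, timescales=(1, 2, 4), k=4.0):
    """Multi-timescale peak search sketch (MEPSA-like, not the published mask):
    rebin the series at each timescale, then flag bins that exceed both
    neighbours and sit more than k sigma above the median, with sigma
    estimated from the rebinned series itself."""
    peaks = set()
    for ts in timescales:
        rebinned = [sum(series[i:i + ts]) / ts
                    for i in range(0, len(series) - ts + 1, ts)]
        sigma = statistics.pstdev(rebinned) or 1.0
        med = statistics.median(rebinned)
        for j in range(1, len(rebinned) - 1):
            if (rebinned[j] - med > k * sigma and
                    rebinned[j] > rebinned[j - 1] and rebinned[j] > rebinned[j + 1]):
                peaks.add(j * ts + ts // 2)   # map back to an original bin index
    return sorted(peaks)

# A flat, noiseless series with a single spike at bin 20
series = [0.0] * 64
series[20] = 10.0
peaks = find_peaks(series)
```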

  7. Multiple expression of molecular information: enforced generation of different supramolecular inorganic architectures by processing of the same ligand information through specific coordination algorithms

    PubMed

    Funeriu; Lehn; Fromm; Fenske

    2000-06-16

    The multisubunit ligand 2 combines two complexation substructures known to undergo, with specific metal ions, distinct self-assembly processes to form a double-helical and a grid-type structure, respectively. The binding information contained in this molecular strand may be expected to generate, in a strictly predetermined and univocal fashion, two different, well-defined output inorganic architectures depending on the set of metal ions, that is, on the coordination algorithm used. Indeed, as predicted, the self-assembly of 2 with eight CuII and four CuI yields the intertwined structure D1. It results from a crossover of the two assembly subprograms and has been fully characterized by crystal structure determination. On the other hand, when the instructions of strand 2 are read out with a set of eight CuI and four MII (M = Fe, Co, Ni, Cu) ions, the architectures C1-C4, resulting from a linear combination of the two subprograms, are obtained, as indicated by the available physico-chemical and spectral data. Redox interconversion of D1 and C4 has been achieved. These results indicate that the same molecular information may yield different output structures depending on how it is processed, that is, depending on the interactional (coordination) algorithm used to read it. They have wide implications for the design and implementation of programmed chemical systems, pointing towards multiprocessing capacity, in a one code/several outputs scheme, of potential significance for molecular computation processes and possibly even with respect to information processing in biology. PMID:10926214

  8. A domain-specific design architecture for composite material design and aircraft part redesign

    NASA Technical Reports Server (NTRS)

    Punch, W. F., III; Keller, K. J.; Bond, W.; Sticklen, J.

    1992-01-01

    Advanced composites have been targeted as a 'leapfrog' technology that would provide a unique global competitive position for U.S. industry. Composites are unique in requiring an integrated approach to designing, manufacturing, and marketing of products developed with these new materials of construction. Numerous studies extending across the entire economic spectrum of the United States, from aerospace to military to durable goods, have identified composites as a 'key' technology. In general there have been two approaches to composite construction: building models of a given composite material and then determining its characteristics via numerical simulation and empirical testing; and experience-directed construction of fabrication plans for building composites with given properties. The first route sets a goal to capture basic understanding of a device (the composite) by use of a rigorous mathematical model; the second attempts to capture expertise about the process of fabricating a composite, to date at a surface level typically expressed in a rule-based system. From an AI perspective, these two research lines are attacking distinctly different problems, and both tracks have current limitations. The mathematical modeling approach has yielded a wealth of data, but a large number of simplifying assumptions are needed to make numerical simulation tractable. Likewise, although surface-level expertise about how to build a particular composite may yield important results, recent trends in the KBS area are towards augmenting surface-level problem solving with deeper-level knowledge. Many of the relative advantages of composites, e.g., the strength-to-weight ratio, are most prominent when the entire component is designed as a unitary piece. The bottleneck in undertaking such unitary design lies in the difficulty of the re-design task: designing the fabrication protocols for a complex-shaped, thick-section composite is currently very difficult. It is in …

  9. Site-specific control of silica mineralization on DNA using a designed peptide.

    PubMed

    Ozaki, Makoto; Nagai, Kazuma; Nishiyama, Hiroto; Tsuruoka, Takaaki; Fujii, Satoshi; Endoh, Tamaki; Imai, Takahito; Tomizaki, Kin-Ya; Usui, Kenji

    2016-03-01

    We developed a site-specific method for precipitating inorganic compounds using organic compounds, DNA, and designed peptides with peptide nucleic acids (PNAs). Such a system for site-specific precipitation represents a powerful tool for use in nanobiochemistry and materials chemistry. PMID:26690695

  10. 12 CFR 1815.104 - Specific responsibilities of the designated Fund official.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Specific responsibilities of the designated Fund official. 1815.104 Section 1815.104 Banks and Banking COMMUNITY DEVELOPMENT FINANCIAL INSTITUTIONS FUND, DEPARTMENT OF THE TREASURY ENVIRONMENTAL QUALITY § 1815.104 Specific responsibilities of...

  11. 12 CFR 1815.104 - Specific responsibilities of the designated Fund official.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Specific responsibilities of the designated Fund official. 1815.104 Section 1815.104 Banks and Banking COMMUNITY DEVELOPMENT FINANCIAL INSTITUTIONS FUND, DEPARTMENT OF THE TREASURY ENVIRONMENTAL QUALITY § 1815.104 Specific responsibilities of...

  12. Single Event Testing on Complex Devices: Test Like You Fly Versus Test-Specific Design Structures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth

    2016-01-01

    We present a mechanism for evaluating complex digital systems targeted for harsh radiation environments such as space. Focus is limited to analyzing the single event upset (SEU) susceptibility of designs implemented inside Field Programmable Gate Array (FPGA) devices. Tradeoffs between application-specific and test-specific test structures are provided.

  13. Neural signal processing and closed-loop control algorithm design for an implanted neural recording and stimulation system.

    PubMed

    Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N

    2015-08-01

    A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate to behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods, then the LFP time-frequency analysis and the features derived from the two spike sorting methods. The first spike sorting method takes a novel approach to constructing a dictionary learning algorithm in a Compressed Sensing (CS) framework, with a joint prediction scheme to determine the class of neural spikes; the second is a modified OSort algorithm implemented in a distributed system optimized for power efficiency. Sorted spikes and time-frequency analysis of LFP signals can then be used to generate derived features (including cross-frequency coupling and spike-field coupling). We show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current state as well as its historical values. The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state of the art in closed-loop neural control while also meeting the challenges of system power constraints and concurrent development with ongoing scientific research designed …
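The closed-loop idea, derived feature in, stimulation command out, can be caricatured in a few lines. The feature values, smoothing constant, threshold, gain, and safety cap below are all invented for illustration and bear no relation to the paper's clinical parameters:

```python
def update_state(state, feature, alpha=0.2):
    """Exponentially weighted state estimate from a derived feature
    (e.g., spike-field coupling); alpha trades responsiveness for history."""
    return (1 - alpha) * state + alpha * feature

def stimulation_amplitude(state, threshold=0.3, gain=2.0, max_amp=3.0):
    """Toy proportional controller: stimulate only when the estimated state
    exceeds a set threshold, capped at a safety limit."""
    if state <= threshold:
        return 0.0
    return min(max_amp, gain * (state - threshold))

# Feed a made-up feature stream through the loop and log the commands
state = 0.0
log = []
for feature in [0.1, 0.2, 0.9, 0.95, 0.9, 0.3, 0.1]:
    state = update_state(state, feature)
    log.append(stimulation_amplitude(state))
```

The exponential smoothing stands in for the paper's use of historical state values; the real controller optimizes its parameters against a multidimensional neuropsychiatric state vector rather than a scalar.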

  14. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 5: Specification for EROS operations control center

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The functional, performance, and design requirements for the Operations Control Center (OCC) of the Earth Observatory Satellite (EOS) system are presented. The OCC controls the operations of the EOS satellite to acquire mission data consisting of: (1) thematic mapper data, (2) multispectral scanner data on EOS-A, or High Resolution Pointable Imager data on EOS-B, and (3) data collection system (DCS) data. The various inputs to the OCC are identified. The functional requirements of the OCC are defined. The specific systems and subsystems of the OCC are described and block diagrams are provided.

  15. Better Educational Website Interface Design: The Implications from Gender-Specific Preferences in Graduate Students

    ERIC Educational Resources Information Center

    Hsu, Yu-chang

    2006-01-01

    This study investigated graduate students' gender-specific preferences for certain website interface design features, intending to generate useful information for instructors in choosing and for website designers in creating educational websites. The features investigated in this study included colour value, major navigation buttons placement, and…

  16. 48 CFR 2052.227-70 - Drawings, designs, specifications, and other data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Drawings, designs... Clauses 2052.227-70 Drawings, designs, specifications, and other data. As prescribed at 2027.305-70, the contracting officer shall insert the following clause in all solicitations and contracts in which...

  17. 48 CFR 2052.227-70 - Drawings, designs, specifications, and other data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Drawings, designs... Clauses 2052.227-70 Drawings, designs, specifications, and other data. As prescribed at 2027.305-70, the contracting officer shall insert the following clause in all solicitations and contracts in which...

  18. 48 CFR 2052.227-70 - Drawings, designs, specifications, and other data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Drawings, designs... Clauses 2052.227-70 Drawings, designs, specifications, and other data. As prescribed at 2027.305-70, the contracting officer shall insert the following clause in all solicitations and contracts in which...

  19. 48 CFR 2052.227-70 - Drawings, designs, specifications, and other data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Drawings, designs... Clauses 2052.227-70 Drawings, designs, specifications, and other data. As prescribed at 2027.305-70, the contracting officer shall insert the following clause in all solicitations and contracts in which...

  20. Improving Students' Conceptual Understanding of a Specific Content Learning: A Designed Teaching Sequence

    ERIC Educational Resources Information Center

    Ahmad, N. J.; Lah, Y. Che

    2012-01-01

    The efficacy of a teaching sequence designed for a specific content of learning of electrochemistry is described in this paper. The design of the teaching draws upon theoretical insights into perspectives on learning and empirical studies to improve the teaching of this topic. A case study involving two classes, the experimental and baseline…

  1. Design of a Four-Element, Hollow-Cube Corner Retroreflector for Satellites by use of a Genetic Algorithm.

    PubMed

    Minato, A; Sugimoto, N

    1998-01-20

    A four-element retroreflector was designed for satellite laser ranging and Earth-satellite-Earth laser long-path absorption measurement of the atmosphere. The retroreflector consists of four symmetrically located corner retroreflectors. Each retroreflector element has curved mirrors and tuned dihedral angles to correct velocity aberrations. A genetic algorithm was employed to optimize dihedral angles of each element and the directions of the four elements. The optimized four-element retroreflector has high reflectance with a reasonably broad angular coverage. It is also shown that the genetic algorithm is effective for optimizing optics with many parameters. PMID:18268603
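A real-coded genetic algorithm of the kind used to tune the dihedral angles can be sketched generically. The quadratic cost below is a stand-in for the paper's optical merit function, and the target values, bounds, and GA settings are all illustrative:

```python
import random

def ga_minimize(cost, n_genes, bounds, pop=40, gens=120, seed=7):
    """Generic real-coded GA sketch: tournament selection, blend crossover,
    Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(n_genes)] for _ in range(pop)]
    for _ in range(gens):
        nxt = [min(P, key=cost)]                       # elitism: keep the best
        while len(nxt) < pop:
            # two size-3 tournaments pick the parents
            a, b = (min(rng.sample(P, 3), key=cost) for _ in range(2))
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            child = [min(hi, max(lo, g + rng.gauss(0, 0.05))) for g in child]
            nxt.append(child)
        P = nxt
    return min(P, key=cost)

# Hypothetical target offsets for four dihedral-angle parameters
target = [1.2, -0.7, 0.3, 2.0]
best = ga_minimize(lambda v: sum((x - t) ** 2 for x, t in zip(v, target)),
                   n_genes=4, bounds=(-5.0, 5.0))
```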

  2. Laser communication experiment. Volume 1: Design study report: Spacecraft transceiver. Part 3: LCE design specifications

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The requirements for the design, fabrication, performance, and testing of a 10.6 micron optical heterodyne receiver subsystem for use in a laser communication system are presented. The receiver subsystem, as a part of the laser communication experiment operates in the ATS 6 satellite and in a transportable ground station establishing two-way laser communications between the spacecraft and the transportable ground station. The conditions under which environmental tests are conducted are reported.

  3. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
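The firefly algorithm moves each candidate toward every brighter one, with an attractiveness that decays with distance, plus a cooled random step. A textbook sketch of Yang's update rule on a toy objective, standing in for the FEM-evaluated remediation cost:

```python
import math, random

def firefly_minimize(f, n_dim, bounds, n=20, iters=200,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=5):
    """Textbook firefly algorithm sketch: each firefly i moves toward every
    brighter firefly j with attractiveness beta0*exp(-gamma*r^2), plus a
    random step that is cooled over the iterations."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(n_dim)] for _ in range(n)]
    for t in range(iters):
        scale = alpha * (hi - lo) * (0.97 ** t)       # cooled random step
        for i in range(n):
            for j in range(n):
                if f(X[j]) < f(X[i]):                 # j is brighter: i moves
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [min(hi, max(lo, xi + beta * (xj - xi)
                                        + scale * (rng.random() - 0.5)))
                            for xi, xj in zip(X[i], X[j])]
    return min(X, key=f)

# Toy 2-D objective (sphere function) in place of the FEM simulation
best = firefly_minimize(lambda v: sum(x * x for x in v),
                        n_dim=2, bounds=(-4.0, 4.0))
```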

  4. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories, and Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  5. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    PubMed

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method, using the best objective function values. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR systems are used in medical applications to assist physicians in clinical decision support and in research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure, and similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall, and the systems developed using HSA-based Otsu MLT and conventional Otsu MLT methods are compared. Precision and recall are found to be 96% and 58%, respectively, for the CBIR system using HSA-based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening. PMID:25996728
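The similarity-matching step, ranking database images by the Euclidean distance between feature vectors, is easy to sketch; the feature vectors and image names below are fabricated, not taken from the study:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def retrieve(query, database, top_k=3):
    """Rank database images by distance to the query features; return the
    top_k most similar image names."""
    ranked = sorted(database.items(), key=lambda kv: euclidean(query, kv[1]))
    return [name for name, _ in ranked[:top_k]]

# Hypothetical (statistical, textural, structural) feature vectors
db = {"normal_01": [0.91, 0.12, 0.30],
      "npdr_04":   [0.55, 0.48, 0.70],
      "pdr_02":    [0.20, 0.80, 0.95],
      "normal_07": [0.88, 0.15, 0.33]}
hits = retrieve([0.90, 0.13, 0.31], db)
```

Precision and recall would then be computed by comparing the retrieved ranks against the known class labels of the database images.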

  6. Design and optimization of pulsed Chemical Exchange Saturation Transfer MRI using a multiobjective genetic algorithm

    PubMed Central

    Yoshimaru, Eriko S.; Randtke, Edward A.; Pagel, Mark D.; Cárdenas-Rodríguez, Julio

    2016-01-01

    Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and results were compared to continuous wave saturation and Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed the best, followed by parameters and shapes optimized by the genetic algorithm and then followed by Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulse CEST parameters and that the results are translatable to clinical scanners. PMID:26778301

  7. Design and optimization of pulsed Chemical Exchange Saturation Transfer MRI using a multiobjective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yoshimaru, Eriko S.; Randtke, Edward A.; Pagel, Mark D.; Cárdenas-Rodríguez, Julio

    2016-02-01

    Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and results were compared to continuous wave saturation and Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed the best, followed by parameters and shapes optimized by the genetic algorithm and then followed by Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulse CEST parameters and that the results are translatable to clinical scanners.

  8. Designing specific protein–protein interactions using computation, experimental library screening, or integrated methods

    PubMed Central

    Chen, T Scott; Keating, Amy E

    2012-01-01

    Given the importance of protein–protein interactions for nearly all biological processes, the design of protein affinity reagents for use in research, diagnosis or therapy is an important endeavor. Engineered proteins would ideally have high specificities for their intended targets, but achieving interaction specificity by design can be challenging. There are two major approaches to protein design or redesign. Most commonly, proteins and peptides are engineered using experimental library screening and/or in vitro evolution. An alternative approach involves using protein structure and computational modeling to rationally choose sequences predicted to have desirable properties. Computational design has successfully produced novel proteins with enhanced stability, desired interactions and enzymatic function. Here we review the strengths and limitations of experimental library screening and computational structure-based design, giving examples where these methods have been applied to designing protein interaction specificity. We highlight recent studies that demonstrate strategies for combining computational modeling with library screening. The computational methods provide focused libraries predicted to be enriched in sequences with the properties of interest. Such integrated approaches represent a promising way to increase the efficiency of protein design and to engineer complex functionality such as interaction specificity. PMID:22593041

  9. Asymmetric cryptosystem and software design based on two-step phase-shifting interferometry and elliptic curve algorithm

    NASA Astrophysics Data System (ADS)

    Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2013-11-01

    We propose an asymmetric cryptosystem based on two-step phase-shifting interferometry (PSI) and elliptic curve (EC) public-key cryptographic algorithm, in which one image is encrypted to two interferograms by double random-phase encoding (DRPE) in Fresnel domain and two-step PSI, and the session keys such as geometrical parameters and pseudo-random seeds, are asymmetrically encoded and decoded with the aid of EC algorithm. The receiver, who possesses the corresponding private key generated by EC algorithm, can successfully decipher the transmitted data using the extracted session keys. The utilization of EC asymmetric cryptosystem solves the problem of key management and dispatch, which is inevitable in the conventional optical symmetric cryptosystems. Not only computer simulation, but also software design and development are carried out to verify the feasibility of the proposed cryptosystem.
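The role of the EC public-key step, letting the receiver recover session keys without a pre-shared secret, can be illustrated with a bare-bones elliptic-curve Diffie-Hellman exchange. The curve (secp256k1) and the tiny private keys are chosen for the sketch only; the paper's exact EC scheme and parameters are not given here:

```python
# secp256k1 field prime and base point (curve y^2 = x^3 + 7, i.e. a=0, b=7)
P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1) * pow(2 * y1, -1, P) % P   # tangent slope (a = 0)
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P      # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

alice_priv, bob_priv = 0x1F3A9C, 0x2B4D17    # toy secrets; use a CSPRNG in practice
alice_pub, bob_pub = ec_mul(alice_priv, G), ec_mul(bob_priv, G)
shared_a = ec_mul(alice_priv, bob_pub)       # both sides derive the same point,
shared_b = ec_mul(bob_priv, alice_pub)       # whose x-coordinate can key a cipher
```

In the cryptosystem described above, the shared point would protect the geometrical parameters and pseudo-random seeds (the session keys), while the interferograms themselves carry the DRPE-encrypted image.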

  10. Design of a Broadband Electrical Impedance Matching Network for Piezoelectric Ultrasound Transducers Based on a Genetic Algorithm

    PubMed Central

    An, Jianfei; Song, Kezhu; Zhang, Shuangxi; Yang, Junfeng; Cao, Ping

    2014-01-01

    An improved method based on a genetic algorithm (GA) is developed to design broadband electrical impedance matching networks for piezoelectric ultrasound transducers. A key feature of the new method is that it optimizes both the topology of the matching network and the values of its components. The main idea is to find the optimal matching network within a set of candidate topologies. Successful experiences from classical algorithms are absorbed to limit the size of the set of candidate topologies and greatly simplify the calculation. A binary-coded GA and a real-coded GA are used for topology optimization and component optimization, respectively. Calculation strategies such as elitism and the clearing niche method are adopted to ensure that the algorithm converges to the global optimum. Simulation and experimental results show that matching networks with better performance can be achieved by this improved method. PMID:24743156

  11. Optimized design on condensing tubes high-speed TIG welding technology magnetic control based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming

    2013-05-01

    An orthogonal experiment was conducted, and a multivariate nonlinear regression equation was fitted, to characterize the influence of an external transverse magnetic field and the Ar flow rate on weld quality when condenser pipes are joined by high-speed argon tungsten-arc welding (TIG for short). Based on genetic algorithm theory, the magnetic induction and the Ar flow rate were taken as the optimization variables and the tensile strength of the weld as the objective function, and an optimal design was carried out. The optimization variables were constrained according to production requirements, and the genetic algorithm in MATLAB was used for the computation. A comparison between the optimized results and the experimental parameters showed that, with suitably bounded optimization variables, the genetic algorithm can select optimal process parameters for high-speed welding, and these parameters agree with the experimental results.
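
    The regress-then-optimize workflow can be sketched as below. All numbers are synthetic assumptions (ranges, curvatures, noise), not the paper's measurements, and a dense grid search stands in for the genetic algorithm on this two-variable surface.

```python
import numpy as np

# Fit a multivariate quadratic regression of tensile strength vs. magnetic
# induction B and Ar flow rate Q on synthetic "experiment" data, then
# maximize the fitted surface within process constraints.
rng = np.random.default_rng(0)
B = rng.uniform(0, 20, 40)       # magnetic induction (assumed range)
Q = rng.uniform(5, 15, 40)       # Ar flow rate (assumed range)
true = 400 - 0.5 * (B - 12) ** 2 - 2.0 * (Q - 9) ** 2
y = true + rng.normal(0, 2, 40)  # "measured" tensile strength

# Design matrix for a full quadratic model in (B, Q).
X = np.column_stack([np.ones_like(B), B, Q, B * Q, B ** 2, Q ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Search the fitted surface inside the constrained region.
Bg, Qg = np.meshgrid(np.linspace(0, 20, 201), np.linspace(5, 15, 201))
Zg = (coef[0] + coef[1] * Bg + coef[2] * Qg + coef[3] * Bg * Qg
      + coef[4] * Bg ** 2 + coef[5] * Qg ** 2)
i = np.unravel_index(np.argmax(Zg), Zg.shape)
best_B, best_Q = Bg[i], Qg[i]    # optimized process parameters
```

    With more variables, the grid search becomes infeasible and a GA over the fitted surface, as in the paper, is the natural replacement.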

  12. Optimal Design of a 3-Leg 6-DOF Parallel Manipulator for a Specific Workspace

    NASA Astrophysics Data System (ADS)

    Fu, Jianxun; Gao, Feng

    2016-04-01

    Researchers have seldom studied the optimum design of a six-degree-of-freedom (DOF) parallel manipulator with three legs based upon a given workspace. An optimal design method for a novel three-leg six-DOF parallel manipulator (TLPM) is presented. The mechanical structure of this robot is introduced; with this structure, the kinematic constraint equations are decoupled. Analytical solutions of the forward kinematics are worked out, and one configuration of the robot, including the position and orientation of the end-effector, is graphically displayed. Then, on the basis of several extreme positions of the kinematic performances, the task workspace is given. An optimal design algorithm is introduced to find the smallest dimensional parameters of the proposed robot. Examples illustrate the design results, and a design stability index is introduced to ensure that the robot remains a safe distance from the boundary of its actual workspace. Finally, a prototype of the robot is developed based on this method. The method can readily find kinematic parameters that give a robot the smallest workspace enclosing a predefined task workspace. It improves design efficiency, ensures that the robot combines a small mechanical size with a large workspace volume, and meets lightweight design requirements.

  13. The progress of Chinese Carbon Dioxide Satellite (TanSat): observation design, Retrieval algorithm and validation network

    NASA Astrophysics Data System (ADS)

    Liu, Y.

    2014-12-01

    The Chinese carbon dioxide observation satellite (TanSat) project is a national high-technology research and development program funded by the Ministry of Science and Technology of the People's Republic of China and the Chinese Academy of Sciences. TanSat will monitor carbon dioxide from a Sun-synchronous orbit with a high-resolution grating spectrometer, the Carbon Dioxide Sensor. A wide-field-of-view, moderate-resolution imaging spectrometer, the Cloud and Aerosol Polarization Imager (CAPI), will measure aerosol and cloud properties synchronously. The TanSat project entered its Critical Design Phase after the Preliminary Design Review in June 2013; the Critical Design Review is planned for December 2014 and launch for July 2016. A multi-band retrieval algorithm has been developed to estimate XCO2, applying O2 A-band observations to reduce the influence of aerosol and cirrus cloud. The state vector has been extended from the previous two-band algorithm by adding aerosol model parameters, cirrus cloud model parameters, and a linear correction on the O2 A band. The Application of the TanSat XCO2 retrieval Algorithm on GOSAT Observations (ATANGO) has been developed from the multi-band TanSat algorithm, and GOSAT observations have been used in retrieval experiments with ATANGO. A preliminary intercomparison with the XCO2 product of the University of Leicester (UoL) full-physics algorithm indicated biases of 1.2 hPa (~0.1%) in surface pressure and 2.4 ppm (~0.6%) in XCO2 between ATANGO and UoL, with standard deviations of 2.8 hPa (~0.28%) and 1.23 ppm (~0.3%), respectively. A ground-based XCO2 observation network in China was developed, which includes three Fourier transform infrared spectrometers (IFS-125) at Xinglong, Beijing, and Shenzhen, and three Optical Spectrum Analyzers (OSA) at Shandong, Hainan Island, and Dunhuang, covering different latitudes and backgrounds. The measurement spectrum has been investigated with a

  14. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water, and water may still be preferred as a dose-specification medium. Dose to the tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, whether infinitesimal (used for the definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate the photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50 and 300 keV photons and for photons from 125I, 169Yb and 192Ir sources; ratios of mass collision stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. The choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing; this may be the most important water compartment in cells, implying the use of ratios of mass collision stopping powers for converting Dmed into Dw,med.
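
    The two limiting conversions the abstract refers to are the standard cavity-theory expressions (written here in conventional notation; the intermediate Burlin case interpolates between them):

```latex
% Small-cavity (Bragg-Gray) and large-cavity limits for converting dose to
% the tissue medium into dose to a water cavity embedded in that medium:
D_{w,\mathrm{med}} =
\begin{cases}
D_{\mathrm{med}} \left( \bar{S}_{\mathrm{col}} / \rho \right)^{w}_{\mathrm{med}}
  & \text{small cavity (secondary-electron fluence set by the medium)} \\[4pt]
D_{\mathrm{med}} \left( \bar{\mu}_{\mathrm{en}} / \rho \right)^{w}_{\mathrm{med}}
  & \text{large cavity (photon interactions in the cavity dominate)}
\end{cases}
```

    Here the ratios are spectrum-averaged water-to-medium mass collision stopping powers and mass energy absorption coefficients, respectively, which is why the abstract's conclusion about nanometre-scale targets selects the stopping-power ratio.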

  15. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

    The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation for the NCO. By estimating the direction of part of the phase rotations, the algorithm eliminates those rotations and their add-subtract units, thereby decreasing delay. Furthermore, the NCO is simulated and implemented with the Quartus II and ModelSim software. Simulation results indicate that, compared with the traditional CORDIC algorithm, improvements are achieved in ease of computation, resource utilization, and computing speed/delay while maintaining precision. The design is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750

  16. Design and implementation of hybrid CORDIC algorithm based on phase rotation estimation for NCO.

    PubMed

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

    The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation for the NCO. By estimating the direction of part of the phase rotations, the algorithm eliminates those rotations and their add-subtract units, thereby decreasing delay. Furthermore, the NCO is simulated and implemented with the Quartus II and ModelSim software. Simulation results indicate that, compared with the traditional CORDIC algorithm, improvements are achieved in ease of computation, resource utilization, and computing speed/delay while maintaining precision. The design is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
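
    The traditional CORDIC baseline that the hybrid algorithm improves on can be sketched in a few lines; each iteration uses only shifts, adds, and a small arctangent table. This is the conventional rotation-mode algorithm, not the paper's phase-rotation-estimation variant, and the floating-point form stands in for the fixed-point hardware datapath.

```python
import math

# Conventional CORDIC rotation for an NCO phase-to-amplitude stage:
# sin/cos from shift-add iterations and a precomputed angle table.
N_ITER = 24
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]
GAIN = 1.0
for a in ANGLES:
    GAIN *= math.cos(a)          # cumulative CORDIC scale factor

def cordic_sin_cos(theta):
    """Rotate (1, 0) by theta (|theta| <= pi/2) using CORDIC iterations."""
    x, y, z = GAIN, 0.0, theta   # pre-scale so the result has unit magnitude
    for i in range(N_ITER):
        d = 1.0 if z >= 0 else -1.0   # rotation direction from residual phase
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y                  # (cos(theta), sin(theta))
```

    The hybrid algorithm's gain comes from predicting the direction bits `d` for the later iterations instead of computing them serially, which removes add-subtract stages from the critical path.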

  17. Design and Implementation of Broadcast Algorithms for Extreme-Scale Systems

    SciTech Connect

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua

    2011-01-01

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.
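
    For context, the classical binomial-tree broadcast that such algorithms are measured against can be simulated as below; in each round, every rank that already holds the data forwards it to one new rank. This is a textbook baseline, not the paper's Cheetah algorithms, which are hierarchical and topology-aware.

```python
# Simulate a binomial-tree broadcast among p processes rooted at rank 0,
# recording the (round, sender, receiver) message events.
def binomial_broadcast(p):
    have = {0}                    # ranks that currently hold the data
    events, rnd, step = [], 0, 1
    while step < p:
        for src in sorted(have):  # every holder sends to rank src + step
            dst = src + step
            if dst < p:
                events.append((rnd, src, dst))
        have |= {src + step for src in list(have) if src + step < p}
        rnd += 1
        step *= 2
    return events

events = binomial_broadcast(8)
rounds = 1 + max(r for r, _, _ in events)   # ceil(log2(p)) rounds for p = 8
```

    The number of holders doubles each round, so the latency grows as ceil(log2(p)); scalable implementations like those in the paper additionally pipeline large messages and bound per-rank memory.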

  18. The VLSI design of a Reed-Solomon encoder using Berlekamp's bit-serial multiplier algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.

    1982-01-01

    Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A dual-basis (255, 223) RS code over a Galois field is used. Conventional RS encoders for long codes often require look-up tables to perform the multiplication of two field elements; Berlekamp's algorithm requires only shifting and exclusive-OR operations.
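
    The shift-and-XOR nature of Galois field multiplication, which is what makes a table-free bit-serial encoder possible, can be sketched as below. The reduction polynomial 0x11D is a common Reed-Solomon choice used here for illustration; the paper's (255, 223) encoder works in a dual-basis representation with its own field polynomial, which is not reproduced.

```python
# Multiplication in GF(2^8) by shift-and-XOR only (no look-up tables):
# each set bit of b adds a shifted copy of a, reduced modulo the field
# polynomial whenever the product overflows 8 bits.
def gf_mul(a, b, poly=0x11D):     # 0x11D = x^8 + x^4 + x^3 + x^2 + 1
    r = 0
    while b:
        if b & 1:
            r ^= a                # add (XOR) the current shifted copy
        b >>= 1
        a <<= 1
        if a & 0x100:             # reduce modulo the field polynomial
            a ^= poly
    return r
```

    In Berlekamp's architecture the same shift/XOR structure is laid out in hardware so that one product bit is produced per clock, which is what keeps the encoder small enough for a single chip.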

  19. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, it is advocated to adopt cloud computing as a promising solution, most popularly through the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with desired behaviors and that the computation time can be greatly reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. 
By coupling the parallel model population-based optimization method and the parallel
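
    A serial toy version of a hybrid GA-PSO can be sketched as below: standard PSO velocity/position updates, with GA-style crossover and mutation applied to the worst particles each iteration. The objective is a stand-in sphere function rather than a gene-network fitting error, and the parallel MapReduce machinery of the paper is not reproduced.

```python
import random

# Hybrid GA-PSO sketch: PSO exploitation plus a GA step that rebuilds the
# worst particles from crossover of elites, combating premature convergence.
random.seed(0)
DIM, SWARM = 4, 20
f = lambda x: sum(v * v for v in x)            # minimize (placeholder)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=f)[:]

for _ in range(200):
    for i in range(SWARM):                     # PSO update
        for d in range(DIM):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
    order = sorted(range(SWARM), key=lambda i: f(pos[i]))
    for i in order[-5:]:                       # GA step on the worst 5
        a, b = random.sample(order[:5], 2)     # crossover of two elites
        pos[i] = [random.choice(pair) for pair in zip(pos[a], pos[b])]
        m = random.randrange(DIM)
        pos[i][m] += random.gauss(0, 0.5)      # mutation
        vel[i] = [0.0] * DIM
    gbest = min(pbest, key=f)[:]
```

    In the MapReduce formulation, the per-particle evaluation and update would be the map phase and the gbest reduction the reduce phase.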

  20. Design of LED-based reflector-array module for specific illuminance distribution

    NASA Astrophysics Data System (ADS)

    Chen, Enguo; Yu, Feihong

    2013-02-01

    This paper presents an efficient and practical design method for an LED-based reflector-array lighting module. Improving on previous designs, the method offers greater design freedom to achieve a specific illuminance distribution for actual lighting applications, and it accounts for the LED light intensity distribution while shortening the design time. The design of the lighting system is described in detail. To demonstrate the effectiveness of the method, an ultra-compact reflector-array module that produces a rectangular illumination area with a large aspect ratio is designed to meet the demanding requirements of an industrial lighting application. Design results show that most of the LED emitting energy can be collected into the required lighting region while higher brightness and better uniformity are simultaneously achieved within the focus region. The method is expected to have great potential for other lighting applications.

  1. EEG/ERP adaptive noise canceller design with controlled search space (CSS) approach in cuckoo and other optimization algorithms.

    PubMed

    Ahirwal, M K; Kumar, Anil; Singh, G K

    2013-01-01

    This paper explores the combination of adaptive filtering with swarm intelligence/evolutionary techniques in the field of electroencephalogram/event-related potential (EEG/ERP) noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceller. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancellers based on traditional algorithms, such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms, are also implemented for comparison. ERP signals such as simulated visual evoked potentials, real visual evoked potentials, and real sensorimotor evoked potentials are used, owing to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. Although the traditional algorithms take negligible time, they offer poorer shape preservation of the ERP, with an average computational time of 1.41E-02 s and a shape measure of 2.60E+00. PMID:24407307
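
    The traditional LMS baseline the paper compares against can be sketched as below. The synthetic sinusoid and noise path are placeholders for an ERP recording and its interference, not the paper's data.

```python
import numpy as np

# Minimal LMS adaptive noise canceller: primary input = signal + filtered
# noise; reference input = the correlated noise source. The canceller
# output is the prediction error, which converges toward the clean signal.
rng = np.random.default_rng(0)
n = 4000
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 100)             # stand-in "ERP" component
noise = rng.normal(0, 1, n)                      # reference noise
primary = signal + np.convolve(noise, [0.8, -0.4], mode="same")

taps, mu = 8, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    x = noise[k - taps + 1:k + 1][::-1]          # reference tap vector
    e = primary[k] - w @ x                       # canceller output = error
    w += 2 * mu * e * x                          # LMS weight update
    out[k] = e

# Mean squared residual after convergence (second half of the record).
residual = np.mean((out[n // 2:] - signal[n // 2:]) ** 2)
```

    Swarm-based variants replace the gradient update with a population search over the weight vector, which is where the paper's controlled search space constrains the particles.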

  2. Optimal seismic design of reinforced concrete structures under time-history earthquake loads using an intelligent hybrid algorithm

    NASA Astrophysics Data System (ADS)

    Gharehbaghi, Sadjad; Khatibinia, Mohsen

    2015-03-01

    A reliable seismic-resistant design of structures is achieved in accordance with seismic design codes by designing structures under seven or more pairs of earthquake records. Based on the recommendations of seismic design codes, the average time-history response (ATHR) of the structure is required. This paper focuses on the optimal seismic design of reinforced concrete (RC) structures against ten earthquake records using a hybrid of the particle swarm optimization algorithm and an intelligent regression model (IRM). In order to reduce the computational time of the optimization procedure caused by the time-history analyses, the IRM is proposed to accurately predict the ATHR of structures. The proposed IRM combines the subtractive algorithm (SA), the K-means clustering approach, and a wavelet weighted least squares support vector machine (WWLS-SVM). To predict the ATHR of structures, the input-output samples of structures are first classified by the SA and the K-means clustering approach; then a WWLS-SVM is trained, with few samples and high accuracy, for each cluster. Nine- and 18-storey RC frames are designed optimally to illustrate the effectiveness and practicality of the proposed IRM. The numerical results demonstrate the efficiency and computational advantages of the IRM for the optimal design of structures subjected to time-history earthquake loads.

  3. Specification and Design of Electrical Flight System Architectures with SysML

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L., Jr.; Jimenez, Alejandro

    2012-01-01

    Modern space flight systems are required to perform more complex functions than previous generations to support space missions. This demand is driving the trend to deploy more electronics to realize system functionality. The traditional approach for the specification, design, and deployment of electrical system architectures in space flight systems includes the use of informal definitions and descriptions that are often embedded within loosely coupled but highly interdependent design documents. Traditional methods become inefficient to cope with increasing system complexity, evolving requirements, and the ability to meet project budget and time constraints. Thus, there is a need for more rigorous methods to capture the relevant information about the electrical system architecture as the design evolves. In this work, we propose a model-centric approach to support the specification and design of electrical flight system architectures using the System Modeling Language (SysML). In our approach, we develop a domain specific language for specifying electrical system architectures, and we propose a design flow for the specification and design of electrical interfaces. Our approach is applied to a practical flight system.

  4. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    PubMed Central

    Ramyachitra, D.; Sofia, M.; Manikandan, P.

    2015-01-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), along with the improved Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  5. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    PubMed

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), along with the improved Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  6. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the Cocktail Party Problem prevail across many fields, creating a need for Blind Source Separation (BSS). The need for BSS has become prevalent in several fields of work, including array processing, communications, medical signal processing, speech processing, wireless communication, audio, acoustics and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms, which prove useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals in software, and to implement them in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm, required to have the least complexity and use the fewest resources while effectively separating mixed sources, was the EASI algorithm. The EASI ICA was implemented on an FPGA to perform and analyze its performance in real time.
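
    The serial EASI update (Cardoso and Laheld's rule) can be sketched as below in floating point; the paper's fixed-point FPGA datapath is not reproduced, and the two synthetic sources and the cubic nonlinearity are assumptions for illustration.

```python
import numpy as np

# EASI serial update for 2-source blind separation:
#   W <- W - mu * (y y' - I + g(y) y' - y g(y)') W
# The symmetric term whitens the output; the skew-symmetric term rotates
# toward independence.
rng = np.random.default_rng(1)
n = 20000
s = np.vstack([np.sign(rng.normal(size=n)) * rng.uniform(0.5, 1.5, n),
               np.sin(np.arange(n) * 0.1)])      # two synthetic sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown mixing matrix
x = A @ s                                        # observed mixtures

W = np.eye(2)
mu = 0.002
for k in range(n):
    y = W @ x[:, k]
    g = y ** 3                                   # one common nonlinearity
    upd = np.outer(y, y) - np.eye(2) + np.outer(g, y) - np.outer(y, g)
    W -= mu * upd @ W
y_all = W @ x                                    # separated outputs
```

    The per-sample update uses only small matrix products of fixed size, which is what makes EASI attractive for a streaming FPGA implementation compared with batch algorithms like FastICA.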

  7. Springback compensation algorithm for tool design in creep age forming of large aluminum alloy plate

    NASA Astrophysics Data System (ADS)

    Xu, Xiaolong; Zhan, Lihua; Huang, Minghui

    2013-12-01

    The unified creep constitutive equations, built from the age-forming mechanism of aluminum alloy, were integrated with the commercial finite element analysis software MSC.MARC via the user-defined subroutine CREEP, and creep age forming process simulations for 7055 aluminum alloy plate parts were conducted. The springback of the workpiece after forming was then calculated with the ATOS Professional software. By combining the simulation results with the ATOS springback calculation for the formed plate, a new weighted springback compensation algorithm for tool surface modification was developed. The compensation effects of the new algorithm and of other overall compensation algorithms on the tool surface are compared. The results show that the maximal forming error of the workpiece was reduced to below 0.2 mm after five compensation iterations with the new weighted algorithm, whereas with fixed or variable compensation coefficients based on the overall compensation algorithm, error rebound occurred and the maximal forming error could not be reduced to 0.3 mm even after six compensation iterations.
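
    The iterate-and-compensate loop can be sketched as below. The linear "forming + springback" response and the 30% springback fraction are placeholder assumptions standing in for the creep-age FE simulation, and a single scalar weight stands in for the paper's node-wise weighting.

```python
import numpy as np

# Iterative springback compensation: after each simulated forming step, the
# tool profile is corrected by a weighted fraction of the remaining error.
target = np.array([0.0, 2.0, 5.0, 2.0, 0.0])   # desired part profile, mm

def formed_shape(tool):
    # Placeholder forming response: the part springs back to 70% of the
    # tool profile (a real run would call the FE simulation here).
    return 0.7 * tool

tool = target.copy()                           # start from the target shape
alpha = 0.8                                    # compensation weight
for _ in range(10):
    err = target - formed_shape(tool)          # remaining forming error
    tool = tool + alpha * err                  # modify the tool surface
max_err = np.max(np.abs(target - formed_shape(tool)))
```

    For this linear model the error shrinks geometrically by the factor (1 - 0.7 alpha) per iteration; the paper's weighting varies the correction over the surface to avoid the error rebound seen with a single overall coefficient.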

  8. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically scaled model it may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search for the global optimum; however, because a GA requires a large number of calls to the groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
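
    The selection criterion can be sketched with a greedy loop over a synthetic sensitivity matrix. In the paper the sensitivities would come from the (POD-reduced) groundwater model and a GA would replace the greedy search; everything below, including the two deliberately informative locations, is an illustrative assumption.

```python
import numpy as np

# Greedy sketch of the maximal-information criterion: pick observation-well
# locations maximizing the sum of squared sensitivities of head to the
# unknown conductivity parameters.
rng = np.random.default_rng(3)
n_locs, n_params, n_wells = 30, 5, 4
J = rng.normal(0, 1, (n_locs, n_params))   # d(head at loc)/d(parameter)
J[10] *= 5.0                               # two artificially informative
J[22] *= 4.0                               # candidate locations

def score(rows):
    # Sum of squared sensitivities over the selected locations.
    return np.sum(J[list(rows)] ** 2)

chosen = []
for _ in range(n_wells):
    best = max((i for i in range(n_locs) if i not in chosen),
               key=lambda i: score(chosen + [i]))
    chosen.append(best)
```

    Because this particular criterion is additive over locations, the greedy loop is already optimal here; richer criteria (e.g. D-optimality on the information matrix) couple the choices, which is why the paper resorts to a GA.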

  9. Structural, kinetic, and thermodynamic studies of specificity designed HIV-1 protease

    SciTech Connect

    Alvizo, Oscar; Mittal, Seema; Mayo, Stephen L.; Schiffer, Celia A.

    2012-10-23

    HIV-1 protease recognizes and cleaves more than 12 different substrates leading to viral maturation. While these substrates share no conserved motif, they are specifically selected for and cleaved by protease during viral life cycle. Drug resistant mutations evolve within the protease that compromise inhibitor binding but allow the continued recognition of all these substrates. While the substrate envelope defines a general shape for substrate recognition, successfully predicting the determinants of substrate binding specificity would provide additional insights into the mechanism of altered molecular recognition in resistant proteases. We designed a variant of HIV protease with altered specificity using positive computational design methods and validated the design using X-ray crystallography and enzyme biochemistry. The engineered variant, Pr3 (A28S/D30F/G48R), was designed to preferentially bind to one out of three of HIV protease's natural substrates; RT-RH over p2-NC and CA-p2. In kinetic assays, RT-RH binding specificity for Pr3 increased threefold compared to the wild-type (WT), which was further confirmed by isothermal titration calorimetry. Crystal structures of WT protease and the designed variant in complex with RT-RH, CA-p2, and p2-NC were determined. Structural analysis of the designed complexes revealed that one of the engineered substitutions (G48R) potentially stabilized heterogeneous flap conformations, thereby facilitating alternate modes of substrate binding. Our results demonstrate that while substrate specificity could be engineered in HIV protease, the structural pliability of protease restricted the propagation of interactions as predicted. These results offer new insights into the plasticity and structural determinants of substrate binding specificity of the HIV-1 protease.

  10. Computational design of a red fluorophore ligase for site-specific protein labeling in living cells

    DOE PAGESBeta

    Liu, Daniel S.; Nivon, Lucas G.; Richter, Florian; Goldman, Peter J.; Deerinck, Thomas J.; Yao, Jennifer Z.; Richardson, Douglas; Phipps, William S.; Ye, Anne Z.; Ellisman, Mark H.; et al

    2014-10-13

    In this study, chemical fluorophores offer tremendous size and photophysical advantages over fluorescent proteins but are much more challenging to target to specific cellular proteins. Here, we used Rosetta-based computation to design a fluorophore ligase that accepts the red dye resorufin, starting from Escherichia coli lipoic acid ligase. X-ray crystallography showed that the design closely matched the experimental structure. Resorufin ligase catalyzed the site-specific and covalent attachment of resorufin to various cellular proteins genetically fused to a 13-aa recognition peptide in multiple mammalian cell lines and in primary cultured neurons. We used resorufin ligase to perform superresolution imaging of the intermediate filament protein vimentin by stimulated emission depletion and electron microscopies. This work illustrates the power of Rosetta for major redesign of enzyme specificity and introduces a tool for minimally invasive, highly specific imaging of cellular proteins by both conventional and superresolution microscopies.

  11. Computational design of a red fluorophore ligase for site-specific protein labeling in living cells

    SciTech Connect

    Liu, Daniel S.; Nivon, Lucas G.; Richter, Florian; Goldman, Peter J.; Deerinck, Thomas J.; Yao, Jennifer Z.; Richardson, Douglas; Phipps, William S.; Ye, Anne Z.; Ellisman, Mark H.; Drennan, Catherine L.; Baker, David; Ting, Alice Y.

    2014-10-13

    In this study, chemical fluorophores offer tremendous size and photophysical advantages over fluorescent proteins but are much more challenging to target to specific cellular proteins. Here, we used Rosetta-based computation to design a fluorophore ligase that accepts the red dye resorufin, starting from Escherichia coli lipoic acid ligase. X-ray crystallography showed that the design closely matched the experimental structure. Resorufin ligase catalyzed the site-specific and covalent attachment of resorufin to various cellular proteins genetically fused to a 13-aa recognition peptide in multiple mammalian cell lines and in primary cultured neurons. We used resorufin ligase to perform superresolution imaging of the intermediate filament protein vimentin by stimulated emission depletion and electron microscopies. This work illustrates the power of Rosetta for major redesign of enzyme specificity and introduces a tool for minimally invasive, highly specific imaging of cellular proteins by both conventional and superresolution microscopies.

  12. Design of a G·C-specific DNA minor groove-binding peptide

    SciTech Connect

    Geierstanger, B.H.; Wemmer, D.E.; Mrksich, M.; Dervan, P.B.

    1994-10-28

    A four-ring tripeptide containing alternating imidazole and pyrrole carboxamides specifically binds six-base-pair 5′-(A,T)GCGC(A,T)-3′ sites in the minor groove of DNA. The designed peptide has a specificity completely reversed from that of the tripyrrole distamycin, which binds A,T sequences. Structural studies with nuclear magnetic resonance revealed that two peptides bound side-by-side and in an antiparallel orientation in the minor groove. Each of the four imidazoles in the 2:1 ligand-DNA complex recognized a specific guanine amino group in the GCGC core through a hydrogen bond. Targeting a designated four-base-pair G·C tract by this synthetic ligand supports the generality of the 2:1 peptide-DNA motif for sequence-specific minor groove recognition of DNA. 24 refs., 4 figs., 1 tab.

  13. Computational design of a red fluorophore ligase for site-specific protein labeling in living cells

    PubMed Central

    Liu, Daniel S.; Nivón, Lucas G.; Richter, Florian; Goldman, Peter J.; Deerinck, Thomas J.; Yao, Jennifer Z.; Richardson, Douglas; Phipps, William S.; Ye, Anne Z.; Ellisman, Mark H.; Drennan, Catherine L.; Baker, David; Ting, Alice Y.

    2014-01-01

    Chemical fluorophores offer tremendous size and photophysical advantages over fluorescent proteins but are much more challenging to target to specific cellular proteins. Here, we used Rosetta-based computation to design a fluorophore ligase that accepts the red dye resorufin, starting from Escherichia coli lipoic acid ligase. X-ray crystallography showed that the design closely matched the experimental structure. Resorufin ligase catalyzed the site-specific and covalent attachment of resorufin to various cellular proteins genetically fused to a 13-aa recognition peptide in multiple mammalian cell lines and in primary cultured neurons. We used resorufin ligase to perform superresolution imaging of the intermediate filament protein vimentin by stimulated emission depletion and electron microscopies. This work illustrates the power of Rosetta for major redesign of enzyme specificity and introduces a tool for minimally invasive, highly specific imaging of cellular proteins by both conventional and superresolution microscopies. PMID:25313043

  14. Using Space Weather Variability in Evaluating the Radiation Environment Design Specifications for NASA's Constellation Program

    NASA Technical Reports Server (NTRS)

    Coffey, Victoria N.; Blackwell, William C.; Minow, Joseph I.; Bruce, Margaret B.; Howard, James W.

    2007-01-01

    NASA's Constellation program, initiated to fulfill the Vision for Space Exploration, will create a new generation of vehicles for servicing low Earth orbit, the Moon, and beyond. Space radiation specifications for space system hardware are necessarily conservative to assure system robustness across a wide range of space environments. Spectral models of solar particle events and trapped radiation belt environments are used to develop the design requirements for estimating total ionizing radiation dose, displacement damage, and single event effects for Constellation hardware. We first describe the rationale for the spectra chosen to establish the total dose and single event design environmental specifications for Constellation systems. We then compare the variability of the space environment against the spectral design models to evaluate their applicability as conservative design environments and their potential vulnerabilities to extreme space weather events.
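
    The kind of total-dose estimate that such spectral models feed can be sketched as a simple fold of a differential fluence spectrum with a dose-conversion curve. This is a generic illustration, not the Constellation methodology: all numbers below are invented placeholders, and real design analyses use far more detailed environment and shielding models.

```python
import numpy as np

# Hypothetical solar-proton environment behind some fixed shielding.
energies = np.array([10.0, 30.0, 60.0, 100.0, 200.0])        # proton energy, MeV
fluence  = np.array([1e10, 3e9, 8e8, 2e8, 3e7])              # differential fluence, protons/cm^2/MeV
dose_per_proton = np.array([2e-8, 9e-9, 5e-9, 3e-9, 2e-9])   # rad(Si) per proton/cm^2 (illustrative)

# Total ionizing dose = integral over energy of (differential fluence x conversion factor),
# here evaluated with a plain trapezoidal rule.
integrand = fluence * dose_per_proton
total_dose = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(energies))  # rad(Si)
print(f"estimated total ionizing dose: {total_dose:.0f} rad(Si)")
```

A harder spectrum or thinner shielding raises the low-energy end of the conversion curve and drives the estimate up, which is why the choice of design spectrum dominates the margin.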

  15. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.
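
    The core idea of Phase 1 — deriving a control-structure outline directly from source statements — can be illustrated with a deliberately tiny sketch. The real CSD notation and GRASP/Ada parser are far richer; `OPENERS`, `CLOSERS`, and `csd_outline` below are hypothetical names invented for this example, not part of the GRASP tooling.

```python
# Toy derivation of a text-only control-structure outline from a list of
# Ada-like statements: control constructs open a nested scope, drawn with
# vertical connector lines, until their matching "end" closes it.
OPENERS = {"if", "loop", "for", "while"}
CLOSERS = {"end if;", "end loop;"}

def csd_outline(statements):
    lines, depth = [], 0
    for stmt in statements:
        if stmt in CLOSERS:
            depth -= 1                      # the "end" aligns with its opener
        prefix = "| " * depth               # connector lines for enclosing scopes
        marker = "+-" if stmt.split()[0] in OPENERS else "- "
        lines.append(prefix + marker + stmt)
        if stmt.split()[0] in OPENERS:
            depth += 1                      # statements until the "end" are nested
    return "\n".join(lines)

print(csd_outline(["if X > 0 then", "Y := 1;", "end if;"]))
```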

  16. Method for Predicting the Energy Characteristics of Li-Ion Cells Designed for High Specific Energy

    NASA Technical Reports Server (NTRS)

    Bennett, William, R.

    2012-01-01

    Novel electrode materials with increased specific capacity and voltage performance are critical to the NASA goals for developing Li-ion batteries with increased specific energy and energy density. Although performance metrics of the individual electrodes are critically important, a fundamental understanding of the interactions of electrodes in a full cell is essential to achieving the desired performance, and for establishing meaningful goals for electrode performance in the first place. This paper presents design considerations for matching positive and negative electrodes in a viable design. Methods for predicting cell-level performance, based on laboratory data for individual electrodes, are presented and discussed.
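
    The electrode-matching idea described above can be illustrated with a first-order, back-of-the-envelope calculation. This is a generic sketch, not the paper's method: the function name, the N:P ratio, the inactive-mass fraction, and the electrode values are all illustrative assumptions.

```python
def cell_specific_energy(q_pos, q_neg, v_pos, v_neg,
                         inactive_mass_frac=0.35, np_ratio=1.1):
    """Rough cell-level specific energy (Wh/kg) from half-cell electrode data.

    q_* are specific capacities in mAh/g of active material; v_* are mean
    potentials in volts vs Li/Li+.
    """
    m_pos = 1.0                           # g of positive active material (basis)
    m_neg = np_ratio * q_pos / q_neg      # negative oversized by N:P to avoid Li plating
    capacity = q_pos * m_pos              # mAh, cell is positive-limited
    voltage = v_pos - v_neg               # mean cell voltage
    active_mass = m_pos + m_neg
    # Inactive components (foils, separator, electrolyte, packaging) as a mass fraction.
    total_mass = active_mass / (1.0 - inactive_mass_frac)
    return capacity * voltage / total_mass   # mAh*V/g = mWh/g = Wh/kg

# Illustrative inputs: an NMC-like positive (180 mAh/g, 3.8 V) vs graphite (340 mAh/g, 0.1 V).
print(f"{cell_specific_energy(180, 340, 3.8, 0.1):.0f} Wh/kg (rough estimate)")
```

The sketch makes the paper's point concrete: doubling a single electrode's specific capacity improves the cell far less than proportionally, because the other electrode and the inactive mass still dominate the denominator.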

  17. Two neural network algorithms for designing optimal terminal controllers with open final time

    NASA Technical Reports Server (NTRS)

    Plumer, Edward S.

    1992-01-01

    Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.
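
    The second approach — choosing the number of steps that minimizes the cost — can be sketched on a toy problem. This is a simplified stand-in for the paper's method: a fixed analytic feedback law replaces the trained network, an exhaustive search replaces the line-search, and the dynamics, gains, and penalty weight are invented for illustration.

```python
def rollout_cost(n_steps, dt=0.1, penalty=100.0):
    """Elapsed time plus a terminal-miss penalty for an n_steps rollout."""
    # 1-D double integrator driven toward the origin by a saturated policy.
    x, v = 1.0, 0.0
    for _ in range(n_steps):
        u = max(-1.0, min(1.0, -4.0 * x - 3.0 * v))   # stand-in for the network
        v += u * dt
        x += v * dt
    return n_steps * dt + penalty * (x**2 + v**2)

# Open final time: search over candidate horizon lengths for the best trade-off
# between running longer and missing the terminal state.
best_n = min(range(1, 80), key=rollout_cost)
print("best horizon:", best_n, "steps; cost:", round(rollout_cost(best_n), 3))
```

In the full method this outer search is interleaved with BPTT weight updates, so the controller and the horizon are optimized together rather than sequentially.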

  18. Designing patient-specific 3D printed craniofacial implants using a novel topology optimization method.

    PubMed

    Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H

    2016-07-01

    Large craniofacial defects require efficient bone replacements that not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve this task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases that ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect the viability of the design. Once a design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition that mimics masticatory activity. The full-field strain results from 3D image correlation and the finite element analysis indicate that the designs can survive a maximum masticatory load of 120 lb. The designs also have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework to design bone replacement shapes would give surgeons new alternatives for otherwise complicated mid-face reconstruction. PMID:26660897
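
    Compliance minimization under a volume constraint, the objective named above, can be sketched on a deliberately trivial 1-D problem. This is not the paper's multiresolution 3-D method: `topopt_1d`, the springs-in-series model, and all parameters are illustrative assumptions. For springs in series under a single load the optimum is simply a uniform density equal to the volume fraction, which makes the sketch easy to check.

```python
import numpy as np

def topopt_1d(n=20, volfrac=0.4, p=3.0, iters=50):
    """Minimize compliance of n SIMP-penalized springs in series under a
    unit load, subject to the volume constraint sum(x) <= volfrac * n."""
    x = np.full(n, volfrac)                  # element "densities" in (0, 1]
    for _ in range(iters):
        k = x**p                             # SIMP-penalized element stiffness
        dc = -p * x**(p - 1) / k**2          # d(compliance)/dx, always negative
        # Optimality-criteria update: bisect on the Lagrange multiplier until
        # the updated design exactly uses the allowed volume.
        lo, hi = 1e-9, 1e9
        while hi - lo > 1e-10 * (lo + hi):
            lmid = 0.5 * (lo + hi)
            xnew = np.clip(x * np.sqrt(-dc / lmid), 1e-3, 1.0)
            if xnew.sum() > volfrac * n:
                lo = lmid
            else:
                hi = lmid
        x = xnew
    compliance = float((1.0 / x**p).sum())   # unit load, springs in series
    return x, compliance

x, compliance = topopt_1d()
print("densities:", np.round(x[:5], 3), "compliance:", round(compliance, 2))
```

The paper's method applies the same compliance objective and density update on a full 3-D finite element model of the mid-face, with a multiresolution scheme decoupling the design and analysis meshes.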

  19. Mod-5A wind turbine generator program design report. Volume 4: Drawings and specifications, book 3

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The design, development, and analysis of the 7.3 MW MOD-5A wind turbine generator is documented. This volume contains the drawings and specifications developed for the final design. The volume is divided into five books, of which this is the third, containing drawings 47A380074 through 47A380126. A full parts breakdown listing is provided, as well as a where-used list.

  20. Design of Optimal Treatments for Neuromusculoskeletal Disorders using Patient-Specific Multibody Dynamic Models

    PubMed Central

    Fregly, Benjamin J.

    2011-01-01

    Disorders of the human neuromusculoskeletal system such as osteoarthritis, stroke, cerebral palsy, and paraplegia significantly affect mobility and result in a decreased quality of life. Surgical and rehabilitation treatment planning for these disorders is based primarily on static anatomic measurements and dynamic functional measurements filtered through clinical experience. While this subjective treatment planning approach works well in many cases, it does not predict accurate functional outcome in many others. This paper presents a vision for how patient-specific multibody dynamic models can serve as the foundation for an objective treatment planning approach that identifies optimal treatments and treatment parameters on an individual patient basis. First, a computational paradigm is presented for constructing patient-specific multibody dynamic models. This paradigm involves a combination of patient-specific skeletal models, muscle-tendon models, neural control models, and articular contact models, with the complexity of the complete model being dictated by the requirements of the clinical problem being addressed. Next, three clinical applications are presented to illustrate how such models could be used in the treatment design process. One application involves the design of patient-specific gait modification strategies for knee osteoarthritis rehabilitation, a second involves the selection of optimal patient-specific surgical parameters for a particular knee osteoarthritis surgery, and the third involves the design of patient-specific muscle stimulation patterns for stroke rehabilitation. The paper concludes by discussing important challenges that need to be overcome to turn this vision into reality. PMID:21785529