Science.gov

Sample records for algorithm specifically designed

  1. Design specification for the whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.

    1974-01-01

    The necessary requirements and guidelines for the construction of a computer program of the whole-body algorithm are presented. The minimum subsystem models required to effectively simulate the total body response to stresses of interest are (1) cardiovascular (exercise/LBNP/tilt); (2) respiratory (Grodin's model); (3) thermoregulatory (Stolwijk's model); and (4) long-term circulatory fluid and electrolyte (Guyton's model). The whole-body algorithm must be capable of simulating response to stresses from CO2 inhalation, hypoxia, thermal environment, exercise (sitting and supine), LBNP, and tilt (changing body angles in gravity).

  2. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  3. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

    A method based on Web-based tools for designing optimally functioning molecular beacons is presented. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since a molecular beacon's performance depends on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public-domain algorithm described here may be usefully employed to aid in molecular beacon design.
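
As an illustration of the ranking idea described above, here is a minimal Python sketch that scores candidate beacon sequences by two simple heuristics, GC fraction and 5'/3' stem complementarity. The candidate sequences, the scoring weights, and the helper names are invented for the example; the published algorithm instead uses Excel formulas together with mfold structural predictions.

```python
# Hypothetical sketch: rank molecular-beacon candidate sequences by simple
# heuristics. Weights and candidates are invented for illustration.

def gc_fraction(seq):
    """Fraction of G and C bases in the sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def stem_matches(seq, stem_len=5):
    """Count complementary base pairs between the 5' end and reversed 3' end."""
    five = seq[:stem_len].upper()
    three = seq[-stem_len:].upper()[::-1]
    return sum(1 for a, b in zip(five, three) if COMPLEMENT.get(a) == b)

def score(seq, target_gc=0.5, stem_len=5):
    # Lower is better: distance from the target GC fraction, minus a small
    # reward for each complementary stem pair (which stabilizes the hairpin).
    return abs(gc_fraction(seq) - target_gc) - 0.1 * stem_matches(seq, stem_len)

candidates = ["GCGAGTTAGCACCAACTCGC", "ATATATTAGCACCAATATAT"]
ranked = sorted(candidates, key=score)
```

In a real workflow the score would incorporate folding energies (e.g., from mfold) rather than these toy terms, but the rank-and-pick structure is the same.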

  4. comets (Constrained Optimization of Multistate Energies by Tree Search): A Provable and Efficient Protein Design Algorithm to Optimize Binding Affinity and Specificity with Respect to Sequence.

    PubMed

    Hallen, Mark A; Donald, Bruce R

    2016-05-01

    Practical protein design problems require designing sequences with a combination of affinity, stability, and specificity requirements. Multistate protein design algorithms model multiple structural or binding "states" of a protein to address these requirements. comets provides a new level of versatile, efficient, and provable multistate design. It provably returns the minimum with respect to sequence of any desired linear combination of the energies of multiple protein states, subject to constraints on other linear combinations. Thus, it can target nearly any combination of affinity (to one or multiple ligands), specificity, and stability (for multiple states if needed). Empirical calculations on 52 protein design problems showed comets is far more efficient than the previous state of the art for provable multistate design (exhaustive search over sequences). comets can handle a very wide range of protein flexibility and can enumerate a gap-free list of the best constraint-satisfying sequences in order of objective function value. PMID:26761641

  5. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  6. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  7. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  8. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Linden, Derek; Hornby, Greg; Lohn, Jason; Globus, Al; Krishunkumor, K.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time and labor intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to

  9. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects from this perspective. This paper is a first step towards achieving this objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  10. Fashion sketch design by interactive genetic algorithms

    NASA Astrophysics Data System (ADS)

    Mok, P. Y.; Wang, X. X.; Xu, J.; Kwok, Y. L.

    2012-11-01

    Computer-aided design is vitally important for modern industry, particularly the creative industries. The fashion industry faces intense pressure to shorten the product development process. In this paper, a methodology is proposed for sketch design based on interactive genetic algorithms. The sketch design system consists of a sketch design model, a database and a multi-stage sketch design engine. First, a sketch design model is developed, based on the knowledge of fashion design, to describe fashion product characteristics using parameters. Second, a database is built based on the proposed sketch design model to define general style elements. Third, a multi-stage sketch design engine is used to construct the design. Moreover, an interactive genetic algorithm (IGA) is used to accelerate the sketch design process. The experimental results demonstrate that the proposed method is effective in helping laypersons achieve satisfactory fashion design sketches.
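
The IGA loop itself can be sketched in a few lines of Python. The encoding (a vector of style-element indices) and the stand-in rating function below are invented for the example; in the actual system the fitness of each sketch comes from interactive user feedback, not from a formula.

```python
import random

# Illustrative interactive-GA skeleton: candidates are vectors of
# style-element indices, and the rating callback stands in for user input.

def evolve(pop, rate, n_styles, mut_p=0.1):
    """One IGA generation: rate candidates, select the top half, then
    produce children by one-point crossover and per-gene mutation."""
    scored = sorted(pop, key=rate, reverse=True)
    parents = scored[: len(pop) // 2]
    children = []
    while len(children) < len(pop):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                      # one-point crossover
        child = [g if random.random() > mut_p else random.randrange(n_styles)
                 for g in child]                       # per-gene mutation
        children.append(child)
    return children

random.seed(0)
n_styles = 5
pop = [[random.randrange(n_styles) for _ in range(6)] for _ in range(8)]
# Stand-in for user ratings: prefer sketches that use style element 0.
rate = lambda ind: ind.count(0)
for _ in range(20):
    pop = evolve(pop, rate, n_styles)
best = max(pop, key=rate)
```

Replacing `rate` with a function that shows the decoded sketch and collects a user score turns this batch GA into an interactive one.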

  11. URPD: a specific product primer design tool

    PubMed Central

    2012-01-01

    Background Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present currently available software that is aimed not at analyzing large sequences but at providing a rather straightforward way of visualizing the primer design process for infrequent users. Findings URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, and memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is achieved by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparing practical gel electrophoresis to virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. Conclusions URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high-throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experimental PCR. Several novel functions, such as nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies, have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/. PMID:22713312

  12. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  13. On the design, analysis, and implementation of efficient parallel algorithms

    SciTech Connect

    Sohn, S.M.

    1989-01-01

    There is considerable interest in developing algorithms for a variety of parallel computer architectures. This is not a trivial problem, although for certain models great progress has been made. Recently, general-purpose parallel machines have become available commercially. These machines possess widely varying interconnection topologies and data/instruction access schemes. It is important, therefore, to develop methodologies and design paradigms for not only synthesizing parallel algorithms from initial problem specifications, but also for mapping algorithms between different architectures. This work has considered both of these problems. A systolic array consists of a large collection of simple processors that are interconnected in a uniform pattern. The author has studied in detail the problem of mapping systolic algorithms onto more general-purpose parallel architectures such as the hypercube. The hypercube architecture is notable due to its symmetry and high connectivity, characteristics which are conducive to the efficient embedding of parallel algorithms. Although the parallel-to-parallel mapping techniques have yielded efficient target algorithms, it is not surprising that an algorithm designed directly for a particular parallel model would achieve superior performance. In this context, the author has developed hypercube algorithms for some important problems in speech and signal processing, text processing, language processing and artificial intelligence. These algorithms were implemented on a 64-node NCUBE/7 hypercube machine in order to evaluate their performance.
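
One classical ingredient of such parallel-to-parallel mappings, shown here as a hypothetical illustration rather than a detail from the thesis, is the reflected binary Gray code: it embeds a linear array (or ring) of 2^d processors into a d-dimensional hypercube so that logically adjacent processors are physical neighbors, i.e., their node labels differ in exactly one bit.

```python
# Embed a ring of 2^d processors into a d-dimensional hypercube via the
# reflected binary Gray code, and verify the neighbor property.

def gray(i):
    """The i-th reflected binary Gray code word."""
    return i ^ (i >> 1)

def is_hypercube_edge(a, b):
    """True when node labels a and b differ in exactly one bit."""
    x = a ^ b
    return x != 0 and (x & (x - 1)) == 0

d = 4
ring = [gray(i) for i in range(2 ** d)]
# Every consecutive pair (including the wrap-around) is a hypercube edge.
ok = all(is_hypercube_edge(ring[i], ring[(i + 1) % len(ring)])
         for i in range(len(ring)))
```

This is why a linear systolic algorithm maps onto a hypercube with no dilation: each nearest-neighbor communication in the array travels exactly one hypercube link.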

  14. Reflight certification software design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The PDSS/IMC Software Design Specification for the Payload Development Support System (PDSS)/Image Motion Compensator (IMC) is contained. The PDSS/IMC is to be used for checkout and verification of the IMC flight hardware and software by NASA/MSFC.

  15. Genetic algorithm used in interference filter's design

    NASA Astrophysics Data System (ADS)

    Li, Jinsong; Fang, Ying; Gao, Xiumin

    2009-11-01

    An approach for designing interference filters using a genetic algorithm (hereafter referred to as GA) is presented. We use the GA to design band-stop and narrow-band filters. The interference filters designed here achieve the optimal reflectivity or transmission rate. The evaluation function used in our genetic algorithm differs from those used previously. Using the characteristic matrix to calculate the photonic band gap of a one-dimensional photonic crystal is similar to calculating the electronic structure of a doped semiconductor. When the evaluation is sensitive to deviations in the photonic crystal structure, the genetic algorithm approach is effective. A summary and explanations of some open issues are given at the end of this paper.
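
The characteristic-matrix calculation the abstract refers to can be sketched as follows for normal incidence. The layer indices, thicknesses, and design wavelength are made-up example values, and the GA itself is omitted; this only shows the kind of evaluation a candidate filter stack would undergo.

```python
import cmath, math

# Normal-incidence reflectivity of a multilayer stack via 2x2 characteristic
# matrices. Layer/index values below are invented for the example.

def layer_matrix(n, d, wavelength):
    delta = 2 * math.pi * n * d / wavelength           # phase thickness
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def reflectivity(layers, n0, ns, wavelength):
    """layers: list of (refractive index, physical thickness) pairs,
    n0/ns: incident-medium and substrate indices."""
    m = [[1, 0], [0, 1]]
    for n, d in layers:
        m = matmul(m, layer_matrix(n, d, wavelength))
    y = (m[1][0] + m[1][1] * ns) / (m[0][0] + m[0][1] * ns)  # stack admittance
    r = (n0 - y) / (n0 + y)
    return abs(r) ** 2

lam = 550e-9
# Quarter-wave high/low stack (n_H = 2.35, n_L = 1.38), 8 pairs.
stack = [(2.35, lam / (4 * 2.35)), (1.38, lam / (4 * 1.38))] * 8
R = reflectivity(stack, n0=1.0, ns=1.52, wavelength=lam)
```

A GA would encode the layer list as the genome and use a target reflectance curve, evaluated with this routine over many wavelengths, as its fitness.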

  16. Fast Fourier Transform algorithm design and tradeoffs

    NASA Technical Reports Server (NTRS)

    Kamin, Ray A., III; Adams, George B., III

    1988-01-01

    The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.
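
For reference, the computation whose parallel SIMD variants the paper studies is the radix-2 Cooley-Tukey FFT; a minimal recursive Python version (illustrative only, not the CM-2 implementation) looks like this:

```python
import cmath

# Minimal recursive radix-2 Cooley-Tukey FFT for input lengths that are
# powers of two.

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])        # transform of even-indexed samples
    odd = fft(x[1::2])         # transform of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# DFT of a unit impulse is flat across all bins.
spectrum = fft([1, 0, 0, 0, 0, 0, 0, 0])
```

The butterfly structure in the inner loop is what parallel implementations distribute across processors; each stage touches every element exactly once.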

  17. Specific optimization of genetic algorithm on special algebras

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Novak, Vilem; Dyba, Martin; Schenk, Jiri

    2016-06-01

    Searching for complex finite algebras can be succesfully done by the means of genetic algorithm as we showed in former works. This genetic algorithm needs specific optimization of crossover and mutation. We present details about these optimizations which are already implemented in software application for this task - EQCreator.

  18. Instrument design and optimization using genetic algorithms

    SciTech Connect

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-15

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.

  19. Fuzzy logic and guidance algorithm design

    SciTech Connect

    Leng, G.

    1994-12-31

    This paper explores the use of fuzzy logic for the design of a terminal guidance algorithm for an air-to-surface missile against a stationary target. The design objectives are (1) a smooth transition at lock-on, (2) large impact angles, and (3) self-limiting acceleration commands. The method of reverse kinematics is used in the design of the membership functions and the rule base. Simulation results for a Mach 0.8 missile with a 6g acceleration limit are compared with a traditional proportional navigation scheme.
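
A toy fragment of such a fuzzy controller is sketched below: triangular membership functions over a heading-error input and a weighted-average (centroid-style) defuzzification. The membership breakpoints, rule outputs, and scales are invented; the paper derives its actual membership functions and rule base from reverse kinematics.

```python
# Toy fuzzy-rule evaluation: map a heading error (degrees) to a normalized
# acceleration command in [-1, 1]. All breakpoints and outputs are invented.

def tri(x, a, b, c):
    """Triangular membership with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def command(error):
    # Rules: error NEGATIVE -> -1, ZERO -> 0, POSITIVE -> +1 (made-up scale).
    rules = [(tri(error, -30, -15, 0), -1.0),
             (tri(error, -15, 0, 15), 0.0),
             (tri(error, 0, 15, 30), 1.0)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Between breakpoints the output blends smoothly across rules, which is what gives fuzzy guidance its characteristically smooth commands near lock-on.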

  20. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient based optimizers is their need for gradient information. Therefore, design problems which include discrete variables can not be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
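
The GA mechanics described above (a population, selection by fitness, crossover, mutation, and direct handling of discrete variables) can be sketched on a toy problem. The two-gene encoding, the objective, and all parameter values below are invented for the example and do not correspond to any specific MDO formulation:

```python
import random

# Toy GA over discrete design variables: an engine count (integer 1..8) and
# a material index (0..4), with an invented objective.

def fitness(ind):
    engines, material = ind
    # Toy objective: best design has 4 engines and material index 2.
    return -((engines - 4) ** 2 + (material - 2) ** 2)

def ga(pop_size=20, generations=30, mut_p=0.2):
    pop = [[random.randint(1, 8), random.randint(0, 4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank selection for brevity
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < mut_p:                          # mutate gene 0
                child[0] = random.randint(1, 8)
            if random.random() < mut_p:                          # mutate gene 1
                child[1] = random.randint(0, 4)
            children.append(child)
        pop = children
    return max(pop, key=fitness)

random.seed(1)
best = ga()
```

Note that nothing here requires gradients or continuity: mutation and crossover act directly on the integer genes, which is exactly why GAs handle the discrete variables (engine counts, material choices) that defeat gradient-based optimizers.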

  1. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods that compare sets of functionalities, or compare the model with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640

  3. Predicting Resistance Mutations Using Protein Design Algorithms

    SciTech Connect

    Frey, K.; Georgiev, I; Donald, B; Anderson, A

    2010-01-01

    Drug resistance resulting from mutations to the target is an unfortunately common phenomenon that limits the lifetime of many of the most successful drugs. In contrast to the investigation of mutations after clinical exposure, it would be powerful to be able to incorporate strategies early in the development process to predict and overcome the effects of possible resistance mutations. Here we present a unique prospective application of an ensemble-based protein design algorithm, K*, to predict potential resistance mutations in dihydrofolate reductase from Staphylococcus aureus, using positive design to maintain catalytic function and negative design to interfere with binding of a lead inhibitor. Enzyme inhibition assays show that three of the four highly-ranked predicted mutants are active yet display lower affinity (18-, 9-, and 13-fold) for the inhibitor. A crystal structure of the top-ranked mutant enzyme validates the predicted conformations of the mutated residues and the structural basis of the loss of potency. The use of protein design algorithms to predict resistance mutations could be incorporated in a lead design strategy against any target that is susceptible to mutational resistance.

  4. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  5. Fast search algorithms for computational protein design.

    PubMed

    Traoré, Seydou; Roberts, Kyle E; Allouche, David; Donald, Bruce R; André, Isabelle; Schiex, Thomas; Barbe, Sophie

    2016-05-01

    One of the main challenges in computational protein design (CPD) is the huge size of the protein sequence and conformational space that has to be computationally explored. Recently, we showed that state-of-the-art combinatorial optimization technologies based on Cost Function Network (CFN) processing allow speeding up provable rigid-backbone protein design methods by several orders of magnitude. Building on this, we improved and injected CFN technology into the well-established CPD package Osprey to allow all Osprey CPD algorithms to benefit from the associated speedups. Because Osprey fundamentally relies on the ability of A* to produce conformations in increasing order of energy, we defined new A* strategies combining CFN lower bounds with a new side-chain-positioning-based branching scheme. Beyond the speedups obtained from the new A*-CFN combination, this novel branching scheme enables a much faster enumeration of suboptimal sequences, far beyond what is reachable without it. Together with the immediate and important speedups provided by CFN technology, these developments directly benefit all the algorithms that previously relied on the DEE/A* combination inside Osprey and make it possible to solve larger CPD problems with provable algorithms. PMID:26833706
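
The property the abstract leans on, that A* yields complete assignments in increasing order of total score, can be illustrated with a toy enumerator. Here each position's rotamer energies are independent and the lower bound is simply a sum of per-position minima; real CPD energy functions have pairwise terms and much tighter CFN bounds, so this shows only the enumeration mechanics:

```python
import heapq

# Toy A* enumeration of rotamer assignments in increasing order of total
# energy, using a sum-of-minima lower bound on the unassigned positions.

def astar_enumerate(energies):
    """energies[i][r] = self-energy of rotamer r at position i.
    Yields (total energy, assignment) pairs in nondecreasing energy order."""
    n = len(energies)
    # tail_min[i] = lower bound on the energy of positions i..n-1.
    tail_min = [sum(min(e) for e in energies[i:]) for i in range(n + 1)]
    heap = [(tail_min[0], 0.0, ())]     # (f = g + h, g so far, partial tuple)
    while heap:
        bound, g, partial = heapq.heappop(heap)
        i = len(partial)
        if i == n:
            yield g, partial            # full assignment: bound is exact
            continue
        for r, e in enumerate(energies[i]):
            heapq.heappush(heap, (g + e + tail_min[i + 1], g + e, partial + (r,)))

energies = [[1.0, 2.0], [0.5, 0.25], [3.0, 1.0]]
gen = astar_enumerate(energies)
first_three = [next(gen) for _ in range(3)]
```

Because the heuristic is consistent (each step's cost is at least the per-position minimum it replaces), assignments come off the heap in energy order, which is what lets Osprey-style packages enumerate a gap-free list of low-energy solutions.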

  7. Algorithm design of liquid lens inspection system

    NASA Astrophysics Data System (ADS)

    Hsieh, Lu-Lin; Wang, Chun-Chieh

    2008-08-01

    In the mobile lens domain, glass lenses are often applied where high resolution is required; however, a glass zoom lens must be paired with movable machinery and a voice-coil motor, which usually imposes space limits on miniaturized designs. With the development of high-level molded-component technology, the liquid lens has become the focus of mobile phone and digital camera companies. Liquid lens assemblies with solid optical lenses and driving circuits have replaced the original components. As a result, the volume requirement has decreased to merely 50% of the original design. Moreover, with high focus-adjustment speed, low energy requirements, high durability, and a low-cost manufacturing process, the liquid lens shows advantages in a competitive market. In the past, the authors only needed to inspect scrape defects caused by external force on glass lenses. For the liquid lens, the authors need to inspect the state of four different structural layers due to the different design and structure. In this paper, the authors apply machine vision and digital image processing technology to perform inspections on a particular layer according to the needs of users. According to our experimental results, the proposed algorithm can automatically remove the out-of-focus background, extract the region of interest, and find and analyze defects efficiently in the particular layer. In the future, the authors will combine the algorithm with automatic-focus technology to implement interior inspection based on product inspection demands.
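
The inspection steps the abstract lists (remove the background, extract the region of interest, then look for defects inside it) can be sketched on a tiny synthetic grayscale image. The threshold value and the image are invented; the actual system works on real lens imagery with far more elaborate processing:

```python
# Toy background removal and region-of-interest extraction on a synthetic
# grayscale "image" (a list of rows of pixel intensities).

def threshold(img, t):
    """Binary foreground mask: 1 where the pixel is brighter than t."""
    return [[1 if px > t else 0 for px in row] for row in img]

def largest_region(mask):
    """Largest 4-connected foreground region, as a set of (y, x) pixels."""
    h, w = len(mask), len(mask[0])
    seen, best = set(), set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                stack, region = [(y, x)], set()
                while stack:
                    cy, cx = stack.pop()
                    if (cy, cx) in seen or not (0 <= cy < h and 0 <= cx < w) \
                            or not mask[cy][cx]:
                        continue
                    seen.add((cy, cx))
                    region.add((cy, cx))
                    stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
                if len(region) > len(best):
                    best = region
    return best

img = [[10, 10, 10, 10],
       [10, 200, 210, 10],
       [10, 205, 40, 10],
       [10, 10, 10, 90]]
roi = largest_region(threshold(img, 100))
# Dark pixels inside the ROI's neighborhood would be candidate defects.
```

Real systems would substitute adaptive thresholding and per-layer focus control for the fixed threshold used here, but the pipeline shape (segment, localize, then analyze) is the same.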

  8. Algorithmic design of self-assembling structures

    PubMed Central

    Cohn, Henry; Kumar, Abhinav

    2009-01-01

    We study inverse statistical mechanics: how can one design a potential function so as to produce a specified ground state? In this article, we show that unexpectedly simple potential functions suffice for certain symmetrical configurations, and we apply techniques from coding and information theory to provide mathematical proof that the ground state has been achieved. These potential functions are required to be decreasing and convex, which rules out the use of potential wells. Furthermore, we give an algorithm for constructing a potential function with a desired ground state. PMID:19541660

  9. An effective hybrid cuckoo search and genetic algorithm for constrained engineering design optimization

    NASA Astrophysics Data System (ADS)

    Kanagaraj, G.; Ponnambalam, S. G.; Jawahar, N.; Mukund Nilakantan, J.

    2014-10-01

    This article presents an effective hybrid cuckoo search and genetic algorithm (HCSGA) for solving engineering design optimization problems involving problem-specific constraints and mixed variables (integer, discrete and continuous). The proposed algorithm, HCSGA, is first applied to 13 standard benchmark constrained optimization functions and subsequently used to solve three well-known design problems reported in the literature. The numerical results obtained by HCSGA show competitive performance with respect to recent algorithms for constrained design optimization problems.
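
    A minimal unconstrained sketch of the two ingredients HCSGA hybridizes, cuckoo-style Levy-flight moves and GA-style crossover with mutation, on a toy sphere function. This is not the paper's algorithm: the constraint handling and mixed variables are omitted, and every parameter value is illustrative.

```python
import math
import random

random.seed(42)

def sphere(x):
    """Toy unconstrained objective to minimize; the paper's HCSGA targets
    constrained engineering problems, which this sketch does not model."""
    return sum(v * v for v in x)

DIM, NESTS, GENS = 2, 20, 200
LO, HI = -5.0, 5.0

def levy_step(scale=0.1, beta=1.5):
    """Heavy-tailed Levy-flight step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return scale * random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def clip(x):
    return [min(HI, max(LO, v)) for v in x]

nests = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(NESTS)]
best = min(nests, key=sphere)

for _ in range(GENS):
    # Cuckoo phase: Levy-flight perturbation of the best nest (elitist).
    trial = clip([v + levy_step() for v in best])
    if sphere(trial) < sphere(best):
        best = trial
    # GA phase: uniform crossover of two random nests plus Gaussian mutation;
    # the child evicts the worst nest if it improves on it.
    a, b = random.sample(range(NESTS), 2)
    child = clip([random.choice(pair) + random.gauss(0, 0.05)
                  for pair in zip(nests[a], nests[b])])
    worst = max(range(NESTS), key=lambda k: sphere(nests[k]))
    if sphere(child) < sphere(nests[worst]):
        nests[worst] = child
    if sphere(child) < sphere(best):
        best = child

print(round(sphere(best), 4))
```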

  10. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
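
    The core ACO loop can be sketched for a binary specification search: each candidate parameter carries a pheromone value interpreted as an inclusion probability, ants sample specifications, and pheromones are reinforced toward the best specification found so far. The toy fit function below is a hypothetical stand-in for an SEM fit criterion such as a negated chi-square or BIC.

```python
import random

random.seed(7)

# Hypothetical search space: include/exclude each of 8 candidate paths.
TRUE_SPEC = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for the best-fitting model
N = len(TRUE_SPEC)

def fit(spec):
    """Toy fit criterion, higher is better. A real application would
    estimate the SEM and return a model fit statistic instead."""
    return sum(s == t for s, t in zip(spec, TRUE_SPEC))

pheromone = [0.5] * N          # inclusion probability per parameter
ANTS, ITERS, RHO = 20, 30, 0.1

best, best_fit = None, -1.0
for _ in range(ITERS):
    specs = [[1 if random.random() < p else 0 for p in pheromone]
             for _ in range(ANTS)]
    it_best = max(specs, key=fit)
    if fit(it_best) > best_fit:
        best, best_fit = it_best, fit(it_best)
    # Evaporate, then reinforce toward the best-so-far specification.
    pheromone = [(1 - RHO) * p + RHO * b for p, b in zip(pheromone, best)]

print(best, best_fit)
```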

  11. Salt Bridges: Geometrically Specific, Designable Interactions

    PubMed Central

    Donald, Jason E.; Kulp, Daniel W.; DeGrado, William F.

    2010-01-01

    Salt bridges occur frequently in proteins, providing conformational specificity and contributing to molecular recognition and catalysis. We present a comprehensive analysis of these interactions in protein structures by surveying a large database of protein structures. Salt bridges between Asp or Glu and His, Arg, or Lys display extremely well-defined geometric preferences. Several previously observed preferences are confirmed and others that were previously unrecognized are discovered. Salt bridges are explored for their preferences for different separations in sequence and in space, geometric preferences within proteins and at protein-protein interfaces, cooperativity in networked salt bridges, inclusion within metal-binding sites, preference for acidic oxygens, apparent conformational side chain entropy reduction upon formation, and degree of burial. Salt bridges occur far more frequently between residues at close than distant sequence separations, but at close distances there remain strong preferences for salt bridges at specific separations. Specific types of complex salt bridges, involving three or more members, are also discovered. As we observe a strong relationship between the propensity to form a salt bridge and the placement of salt-bridging residues in protein sequences, we discuss the role that salt bridges might play in kinetically influencing protein folding and thermodynamically stabilizing the native conformation. We also develop a quantitative method to select appropriate crystal structure resolution and B-factor cutoffs. Detailed knowledge of these geometric and sequence dependences should aid de novo design and prediction algorithms. PMID:21287621

  12. Salt bridges: geometrically specific, designable interactions.

    PubMed

    Donald, Jason E; Kulp, Daniel W; DeGrado, William F

    2011-03-01

    Salt bridges occur frequently in proteins, providing conformational specificity and contributing to molecular recognition and catalysis. We present a comprehensive analysis of these interactions in protein structures by surveying a large database of protein structures. Salt bridges between Asp or Glu and His, Arg, or Lys display extremely well-defined geometric preferences. Several previously observed preferences are confirmed, and others that were previously unrecognized are discovered. Salt bridges are explored for their preferences for different separations in sequence and in space, geometric preferences within proteins and at protein-protein interfaces, co-operativity in networked salt bridges, inclusion within metal-binding sites, preference for acidic oxygens, apparent conformational side chain entropy reduction on formation, and degree of burial. Salt bridges occur far more frequently between residues at close than distant sequence separations, but, at close distances, there remain strong preferences for salt bridges at specific separations. Specific types of complex salt bridges, involving three or more members, are also discovered. As we observe a strong relationship between the propensity to form a salt bridge and the placement of salt-bridging residues in protein sequences, we discuss the role that salt bridges might play in kinetically influencing protein folding and thermodynamically stabilizing the native conformation. We also develop a quantitative method to select appropriate crystal structure resolution and B-factor cutoffs. Detailed knowledge of these geometric and sequence dependences should aid de novo design and prediction algorithms.

  13. UWB Tracking System Design with TDOA Algorithm

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan

    2006-01-01

    This presentation discusses an ultra-wideband (UWB) tracking system design effort using a tracking algorithm TDOA (Time Difference of Arrival). UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least square method is chosen to solve the TDOA non-linear equations. Matlab simulations in both two-dimensional space and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB-TDOA tracking capability with fine resolution. This research effort is motivated by a prototype development project, Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center for aid in surveillance around the International Space Station (ISS).
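
    The paper solves the TDOA equations with a two-stage weighted least-squares method; the sketch below instead applies a plain iterative Gauss-Newton least-squares solver to the same nonlinear range-difference equations, with a hypothetical anchor geometry and noiseless measurements.

```python
import numpy as np

# Four receivers at known positions and a true (unknown) emitter.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
truth = np.array([3.0, 4.0])

# Noiseless range differences relative to anchor 0 (what TDOAs times the
# propagation speed would give a real receiver).
d = np.linalg.norm(anchors - truth, axis=1)
meas = d[1:] - d[0]

def residuals(x):
    r = np.linalg.norm(anchors - x, axis=1)
    return (r[1:] - r[0]) - meas

def jacobian(x):
    # d||x - a||/dx is the unit vector from the anchor toward x.
    diff = x - anchors
    unit = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    return unit[1:] - unit[0]

x = np.array([5.0, 5.0])                  # initial guess
for _ in range(20):                       # Gauss-Newton iterations
    J, f = jacobian(x), residuals(x)
    step, *_ = np.linalg.lstsq(J, -f, rcond=None)
    x = x + step

print(x)   # should recover the true emitter position
```

    With noisy TDOAs, the least-squares step would be weighted by the measurement covariance, which is the role the paper's weighting plays.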

  14. Smile design--specific considerations.

    PubMed

    Morley, J

    1997-09-01

    Smile design is a new discipline that has come to the forefront with the recent popularity of cosmetic dentistry techniques. This article explores some of the principles used in smile design that can enhance the esthetics of any anterior restorative procedure.

  15. Algorithmic Processes for Increasing Design Efficiency.

    ERIC Educational Resources Information Center

    Terrell, William R.

    1983-01-01

    Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)

  16. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  17. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  18. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  19. Homolog-specific PCR primer design for profiling splice variants

    PubMed Central

    Srivastava, Gyan Prakash; Hanumappa, Mamatha; Kushwaha, Garima; Nguyen, Henry T.; Xu, Dong

    2011-01-01

    To study functional diversity of proteins encoded from a single gene, it is important to distinguish the expression levels among the alternatively spliced variants. A variant-specific primer pair is required to amplify each alternatively spliced variant individually. For this purpose, we developed a new feature, homolog-specific primer design (HSPD), in our high-throughput primer and probe design software tool, PRIMEGENS-v2. The algorithm uses a de novo approach to design primers without any prior information of splice variants or close homologs for an input query sequence. It not only designs primer pairs but also finds potential isoforms and homologs of the input sequence. Efficiency of this algorithm was tested for several gene families in soybean. A total of 187 primer pairs were tested under five different abiotic stress conditions with three replications at three time points. Results indicate a high success rate of primer design. Some primer pairs designed were able to amplify all splice variants of a gene. Furthermore, by utilizing combinations within the same multiplex pool, we were able to uniquely amplify a specific variant or duplicate gene. Our method can also be used to design PCR primers to specifically amplify homologs in the same gene family. PRIMEGENS-v2 is available at: http://primegens.org. PMID:21415011

  20. GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) Provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) Provide early warning of tornadic activity, and (3) Accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. The GOES-R Risk Reduction Team and the Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution.

  1. Design and implementation of parallel multigrid algorithms

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tuminaro, Ray S.

    1988-01-01

    Techniques for mapping multigrid algorithms to solve elliptic PDEs on hypercube parallel computers are described and demonstrated. The need for proper data mapping to minimize communication distances is stressed, and an execution-time model is developed to show how algorithm efficiency is affected by changes in the machine and algorithm parameters. Particular attention is then given to the case of coarse computational grids, which can lead to idle processors, load imbalances, and inefficient performance. It is shown that convergence can be improved by using idle processors to solve a new problem concurrently on the fine grid defined by a splitting.
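
    As a serial refresher on the algorithm being parallelized, here is a minimal 1D Poisson V-cycle built from the standard textbook components (damped Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation). It is not the paper's hypercube implementation; the coarse-grid load-balancing issues the abstract discusses only appear once such a cycle is distributed.

```python
import numpy as np

def residual(u, f, h):
    """r = f - A u for the 1D Poisson operator (-u'' with zero boundaries)."""
    au = (2 * u - np.concatenate(([0.0], u[:-1]))
                - np.concatenate((u[1:], [0.0]))) / (h * h)
    return f - au

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Damped Jacobi smoothing; the diagonal of A is 2 / h^2."""
    for _ in range(sweeps):
        u = u + omega * (h * h / 2.0) * residual(u, f, h)
    return u

def restrict(r):
    """Full weighting onto the coarse grid (fine sizes are 2^k - 1)."""
    return (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2]) / 4.0

def prolong(ec, n):
    """Linear interpolation back to the fine grid of size n = 2*len(ec)+1."""
    e = np.zeros(n)
    e[1::2] = ec
    ext = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = (ext[:-1] + ext[1:]) / 2.0
    return e

def vcycle(u, f, h):
    if len(u) == 1:                      # coarsest grid: solve exactly
        return np.array([f[0] * h * h / 2.0])
    u = jacobi(u, f, h, 3)               # pre-smooth
    ec = vcycle(np.zeros((len(u) - 1) // 2), restrict(residual(u, f, h)), 2 * h)
    u = u + prolong(ec, len(u))          # coarse-grid correction
    return jacobi(u, f, h, 3)            # post-smooth

n = 63
h = 1.0 / (n + 1)
f = np.ones(n)
u = np.zeros(n)
r0 = np.linalg.norm(residual(u, f, h))
for _ in range(12):
    u = vcycle(u, f, h)
rn = np.linalg.norm(residual(u, f, h))
print(rn / r0)   # residual shrinks by orders of magnitude per cycle
```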

  2. Design of the OMPS limb sensor correction algorithm

    NASA Astrophysics Data System (ADS)

    Jaross, Glen; McPeters, Richard; Seftor, Colin; Kowitt, Mark

    The Sensor Data Records (SDR) for the Ozone Mapping and Profiler Suite (OMPS) on NPOESS (National Polar-orbiting Operational Environmental Satellite System) contain geolocated and calibrated radiances, and are similar to the Level 1 data of NASA Earth Observing System and other programs. The SDR algorithms (one for each of the 3 OMPS focal planes) are the processes by which the Raw Data Records (RDR) from the OMPS sensors are converted into the records that contain all data necessary for ozone retrievals. Consequently, the algorithms must correct and calibrate Earth signals, geolocate the data, and identify and ingest collocated ancillary data. As with other limb sensors, ozone profile retrievals are relatively insensitive to calibration errors due to the use of altitude normalization and wavelength pairing. But the profile retrievals as they pertain to OMPS are not immune to sensor changes. In particular, the OMPS Limb sensor images an altitude range of > 100 km and a spectral range of 290-1000 nm on its detector. Uncorrected sensor degradation and spectral registration drifts can lead to changes in the measured radiance profile, which in turn affects the ozone trend measurement. Since OMPS is intended for long-term monitoring, sensor calibration is a specific concern. The calibration is maintained via the ground data processing. This means that all sensor calibration data, including direct solar measurements, are brought down in the raw data and processed separately by the SDR algorithms. One of the sensor corrections performed by the algorithm is the correction for stray light. The imaging spectrometer and the unique focal plane design of OMPS make these corrections particularly challenging and important. Following an overview of the algorithm flow, we will briefly describe the sensor stray light characterization and the correction approach used in the code.

  3. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  4. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  5. Engineered waste-package-system design specification

    SciTech Connect

    Not Available

    1983-05-01

    This report documents the waste package performance requirements and geologic and waste form data bases used in developing the conceptual designs for waste packages for salt, tuff, and basalt geologies. The data base reflects the latest geotechnical information on the geologic media of interest. The parameters or characteristics specified primarily cover spent fuel, defense high-level waste, and commercial high-level waste forms. The specification documents the direction taken during the conceptual design activity. A separate design specification will be developed prior to the start of the preliminary design activity.

  6. Effective Memetic Algorithms for VLSI design = Genetic Algorithms + local search + multi-level clustering.

    PubMed

    Areibi, Shawki; Yang, Zhen

    2004-01-01

    Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem. PMID:15355604
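
    The MA = GA + local search recipe can be sketched on a toy bit-string problem; the target string, the parameters, and the Lamarckian update (children are replaced by their locally refined versions) are illustrative choices, not the paper's VLSI formulation.

```python
import random

random.seed(1)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # hypothetical optimum bit-string
N = len(TARGET)

def fitness(bits):
    """Separable toy objective: number of bits matching the target."""
    return sum(b == t for b, t in zip(bits, TARGET))

def local_search(bits):
    """Greedy single-bit-flip hill climbing: the 'memetic' refinement step
    that distinguishes an MA from a plain GA."""
    bits = bits[:]
    improved = True
    while improved:
        improved = False
        for i in range(N):
            flipped = bits[:]
            flipped[i] ^= 1
            if fitness(flipped) > fitness(bits):
                bits, improved = flipped, True
    return bits

POP, GENS = 10, 5
pop = [local_search([random.randint(0, 1) for _ in range(N)])
       for _ in range(POP)]

for _ in range(GENS):
    nxt = [max(pop, key=fitness)]            # elitism
    while len(nxt) < POP:
        p1, p2 = random.sample(pop, 2)       # parents
        cut = random.randrange(1, N)         # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:            # mutation
            child[random.randrange(N)] ^= 1
        nxt.append(local_search(child))      # Lamarckian local refinement
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best))
```

    On this separable toy objective the greedy local search alone already reaches the optimum; the GA layer earns its keep on rugged landscapes such as circuit partitioning and placement.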

  7. Effective Memetic Algorithms for VLSI design = Genetic Algorithms + local search + multi-level clustering.

    PubMed

    Areibi, Shawki; Yang, Zhen

    2004-01-01

    Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem.

  8. Formalisms for user interface specification and design

    NASA Technical Reports Server (NTRS)

    Auernheimer, Brent J.

    1989-01-01

    The application of formal methods to the specification and design of human-computer interfaces is described. A broad outline of human-computer interface problems, a description of the field of cognitive engineering and two relevant research results, the appropriateness of formal specification techniques, and potential NASA application areas are described.

  9. Autonomous photogrammetric network design based on changing environment genetic algorithms

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Lu, Nai-Guang; Dong, Mingli

    2008-10-01

    To achieve good accuracy, designers must consider where to place cameras. Camera placement design is usually a multidimensional optimization problem, so genetic algorithms have been used to solve it. However, genetic algorithms can suffer from premature convergence: the search may settle into a local minimum or oscillate, yielding an inaccurate design. We therefore attack the problem with a changing-environment genetic algorithm, in which species groups are exposed to different environments at different stages of the search to improve performance. Computer simulation results show accelerated convergence and an improved ability to select good individuals. This approach could also be applied to other problems.

  10. Domain specific software design for decision aiding

    NASA Technical Reports Server (NTRS)

    Keller, Kirby; Stanley, Kevin

    1992-01-01

    McDonnell Aircraft Company (MCAIR) is involved in many large multi-discipline design and development efforts of tactical aircraft. These involve a number of design disciplines that must be coordinated to produce an integrated design and a successful product. Our interpretation of a domain specific software design (DSSD) is that of a representation or framework that is specialized to support a limited problem domain. A DSSD is an abstract software design that is shaped by the problem characteristics. This parallels the theme of object-oriented analysis and design of letting the problem model directly drive the design. The DSSD concept extends the notion of software reusability to include representations or frameworks. It supports the entire software life cycle and specifically leads to improved prototyping capability, supports system integration, and promotes reuse of software designs and supporting frameworks. The example presented in this paper is the task network architecture or design which was developed for the MCAIR Pilot's Associate program. The task network concept supported both module development and system integration within the domain of operator decision aiding. It is presented as an instance where a software design exhibited many of the attributes associated with DSSD concept.

  11. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which decides which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, we first present a heuristic mapping algorithm that produces a set of different mappings in a reasonable time. This algorithm allows designers to identify the most promising solutions in a large design space, with low communication costs that are optimal in some cases. A second evaluated parameter, the vulnerability index, is then used to estimate the fault-tolerance property of each produced mapping. Finally, to yield a mapping that trades off these two parameters, a linear function is defined and introduced. More flexibility to prioritize solutions within the design space is also possible by adjusting a set of if-then rules in fuzzy logic.

  12. A generalized algorithm to design finite field normal basis multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1986-01-01

    Finite field arithmetic logic is central in the implementation of some error-correcting coders and some cryptographic devices. There is a need for good multiplication algorithms which can be easily realized. Massey and Omura recently developed a new multiplication algorithm for finite fields based on a normal basis representation. Using the normal basis representation, the design of the finite field multiplier is simple and regular. The fundamental design of the Massey-Omura multiplier is based on a design of a product function. In this article, a generalized algorithm to locate a normal basis in a field is first presented. Using this normal basis, an algorithm to construct the product function is then developed. This design does not depend on particular characteristics of the generator polynomial of the field.

  13. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  14. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.

    PubMed

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei

    2016-09-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches.

  15. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.

    PubMed

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei

    2016-09-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509

  16. Development of a genetic algorithm for molecular scale catalyst design

    SciTech Connect

    McLeod, A.S.; Gladden, L.F.; Johnston, M.E.

    1997-04-01

    A genetic algorithm has been developed to determine the optimal design of a two-component catalyst for the diffusion-limited A + B → AB↑ reaction in which each species is adsorbed specifically on one of two types of sites. Optimization of the distribution of catalytic sites on the surface is achieved by means of an evolutionary algorithm which repeatedly selects the more active surfaces from a population of possible solutions, leading to a gradual improvement in the activity of the catalyst surface. A Monte Carlo simulation is used to determine the activity of each of the catalyst surfaces. It is found that for a reacting mixture composed of equal amounts of each component the optimal active site distribution is that of a checkerboard, this solution being approximately 25% more active than a random site distribution. Study of a range of reactant compositions has shown the optimal distribution of catalytically active sites to be dependent on the ratio of A to B in the reacting mixture. The potential for application of the optimization method introduced here to other catalyst systems is discussed. 27 refs., 7 figs.
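
    A back-of-the-envelope check on the checkerboard result: if A adsorbs only on A-sites and B only on B-sites, reaction requires adjacent unlike sites, and the checkerboard maximizes the number of unlike-neighbor edges. Counting such edges, as below, is only a crude proxy for the paper's Monte Carlo activity simulation.

```python
import random

random.seed(3)
n = 8   # n x n lattice, half A-sites (0) and half B-sites (1)

def unlike_edges(grid):
    """Number of adjacent pairs with different site types: each such edge
    is a place where an adsorbed A can meet an adsorbed B and react."""
    count = 0
    for y in range(n):
        for x in range(n):
            if x + 1 < n and grid[y][x] != grid[y][x + 1]:
                count += 1
            if y + 1 < n and grid[y][x] != grid[y + 1][x]:
                count += 1
    return count

checker = [[(x + y) % 2 for x in range(n)] for y in range(n)]

sites = [0, 1] * (n * n // 2)
random.shuffle(sites)
rand_grid = [sites[y * n:(y + 1) * n] for y in range(n)]

print(unlike_edges(checker), unlike_edges(rand_grid))
```

    The checkerboard makes every one of the 2n(n-1) lattice edges reactive, while a random 50/50 arrangement leaves roughly half of them between like sites.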

  17. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    issues in the GA, it is possible to have idle processors. However, as long as the load at each processing node is similar, the processors are kept busy nearly all of the time. In applying GAs to circuit design, a suitable genetic representation is that of a circuit-construction program. We discuss one such circuit-construction programming language and show how evolution can generate useful analog circuit designs. This language has the desirable property that virtually all sets of combinations of primitives result in valid circuit graphs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm and circuit simulation software, we present experimental results as applied to three analog filter and two amplifier design tasks. For example, a figure shows an 85 dB amplifier design evolved by our system, and another figure shows the performance of that circuit (gain and frequency response). In all tasks, our system is able to generate circuits that achieve the target specifications.

  18. Application of Simulated Annealing and Related Algorithms to TWTA Design

    NASA Technical Reports Server (NTRS)

    Radke, Eric M.

    2004-01-01

    decremented and the process repeats. Eventually (and hopefully), a near-globally optimal solution is attained as T approaches zero. Several exciting variants of SA have recently emerged, including Discrete-State Simulated Annealing (DSSA) and Simulated Tempering (ST). The DSSA algorithm takes the thermodynamic analogy one step further by categorizing objective function evaluations into discrete states. In doing so, many of the case-specific problems associated with fine-tuning the SA algorithm can be avoided; for example, theoretical approximations for the initial and final temperature can be derived independently of the case. In this manner, DSSA provides a scheme that is more robust with respect to widely differing design surfaces. ST differs from SA in that the temperature T becomes an additional random variable in the optimization. The system is also kept in equilibrium as the temperature changes, as opposed to the system being driven out of equilibrium as temperature changes in SA. ST is designed to overcome obstacles in design surfaces where numerous local minima are separated by high barriers. These algorithms are incorporated into the optimal design of the traveling-wave tube amplifier (TWTA). The area under scrutiny is the collector, in which it would be ideal to use negative potential to decelerate the spent electron beam to zero kinetic energy just as it reaches the collector surface. In reality this is not plausible due to a number of physical limitations, including repulsion and differing levels of kinetic energy among individual electrons. Instead, the collector is designed with multiple stages depressed below ground potential. The design of this multiple-stage collector is the optimization problem of interest. One remaining problem in SA and DSSA is the difficulty in determining when equilibrium has been reached so that the current Markov chain can be terminated. 
It has been suggested in recent literature that simulating the thermodynamic properties of specific
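
    The core SA loop described above, Metropolis acceptance with a geometrically decremented temperature T, can be sketched on a simple multimodal test function. The function, schedule, and parameters below are illustrative assumptions, not those of the TWTA collector problem.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=5.0, t_min=1e-3, alpha=0.95,
                        iters_per_temp=50, seed=0):
    """Minimize f by the Metropolis rule: always accept improvements, accept
    uphill moves with probability exp(-delta/T), and cool T geometrically
    so a near-globally optimal solution is attained as T approaches zero."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = x + rng.uniform(-step, step)
            fc = f(cand)
            delta = fc - fx
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                x, fx = cand, fc          # Metropolis acceptance
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha                        # geometric cooling schedule
    return best, fbest

# A 1-D surface with many local minima separated by barriers.
f = lambda x: 0.1 * x * x + math.sin(5 * x)
best, fbest = simulated_annealing(f, x0=4.0)
```

    At high T the walk crosses barriers freely; as T falls it settles into a deep basin, which is exactly the obstacle-crossing behavior ST is designed to strengthen further.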

  19. Turbine blade fixture design using kinematic methods and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Bausch, John J., III

    2000-10-01

    The design of fixtures for turbine blades is a difficult problem even for experienced toolmakers. Turbine blades are characterized by complex 3D surfaces, high-performance materials that are difficult to manufacture, close-tolerance finish requirements, and high-precision machining accuracy. Tool designers typically rely on modified designs based on experience, but have no analytical tools to guide or even evaluate their designs. This paper examines the application of kinematic algorithms to the design of six-point-nest, seventh-point-clamp datum transfer fixtures for turbine blade production. The kinematic algorithms, based on screw coordinate theory, are computationally intensive. When used in a blind-search mode, the time required to generate an actual design is unreasonable. In order to reduce the computation time, the kinematic methods are combined with genetic algorithms and a set of heuristic design rules to guide the search. The kinematic, genetic, and heuristic methods were integrated within a fixture design module as part of the Unigraphics CAD system used by Pratt and Whitney. The kinematic design module was used to generate a datum transfer fixture design for a standard production turbine blade. This design was then used to construct an actual fixture, which was compared to the existing production fixture for the same part. The positional accuracy of both designs was compared using a coordinate measurement machine (CMM). Based on the CMM data, the observed variation of the kinematic design was over two orders of magnitude less than that of the production design, resulting in greatly improved accuracy.

  20. Specification of Selected Performance Monitoring and Commissioning Verification Algorithms for CHP Systems

    SciTech Connect

    Brambley, Michael R.; Katipamula, Srinivas

    2006-10-06

    Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.

  1. Designing robust control laws using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Marrison, Chris

    1994-01-01

    The purpose of this research is to create a method of finding practical, robust control laws. The robustness of a controller is judged by Stochastic Robustness metrics and the level of robustness is optimized by searching for design parameters that minimize a robustness cost function.

  2. Design optimization of space launch vehicles using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Bayley, Douglas James

    The United States Air Force (USAF) continues to have a need for assured access to space. In addition to flexible and responsive spacelift, a reduction in the cost per launch of space launch vehicles is also desirable. For this purpose, an investigation of the design optimization of space launch vehicles has been conducted. Using a suite of custom codes, the performance aspects of an entire space launch vehicle were analyzed. A genetic algorithm (GA) was employed to optimize the design of the space launch vehicle. A cost model was incorporated into the optimization process with the goal of minimizing the overall vehicle cost. The other goals of the design optimization included obtaining the proper altitude and velocity to achieve a low-Earth orbit. Specific mission parameters particular to USAF space endeavors were specified at the start of the design optimization process. Solid-propellant motors, liquid-fueled rockets, and air-launched systems in various configurations provided the propulsion systems for two-, three-, and four-stage launch vehicles. Mass properties models, an aerodynamics model, and a six-degree-of-freedom (6DOF) flight dynamics simulator were all used to model the system. The results show the feasibility of this method in designing launch vehicles that meet mission requirements. Comparisons to existing real-world systems provide validation for the physical system models. However, the ability to obtain a truly minimized cost proved elusive. The cost model uses an industry-standard approach; however, validation of this portion of the model was challenging due to the proprietary nature of cost figures and the dependence of many existing systems on surplus hardware.

  3. Acoustic design of rotor blades using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Han, A. Y.; Crossley, W. A.

    1995-01-01

    A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading, and blade-vortex interaction noise estimates. The paper presents solutions for several variations in the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced, and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced at some expense to the power requirements.

  4. Mathematically Designing a Local Interaction Algorithm for Decentralized Network Systems

    NASA Astrophysics Data System (ADS)

    Kubo, Takeshi; Hasegawa, Teruyuki; Hasegawa, Toru

    In the near future, decentralized network systems consisting of a huge number of sensor nodes are expected to play an important role. In such a network, each node should control itself by means of a local interaction algorithm. Although such local interaction algorithms improve system reliability, how to design a local interaction algorithm has become an issue. In this paper, we describe a local interaction algorithm as a partial differential equation (or PDE) and propose a new design method whereby a PDE is derived from the solution we desire. The solution is considered as a pattern of the nodes' control values over the network, each of which is used to control its node's behavior. As a result, nodes collectively provide network functions such as clustering and collision and congestion avoidance. In this paper, we focus on a periodic pattern comprising sinusoidal waves and derive the PDE whose solution exhibits such a pattern by exploiting the Fourier method.
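
    The design direction the abstract describes, starting from a desired pattern and deriving a neighbor-only update rule, can be sketched in one dimension. A sinusoid sin(kx) satisfies u'' + k²u = 0; discretizing that PDE gives each node an update that uses only its two neighbors. The grid size, boundary values, and iteration count below are illustrative assumptions.

```python
import math

def local_interaction(n=21, k=math.pi / 2, iters=5000):
    """Each interior node repeatedly recomputes its control value from its two
    neighbors only. The fixed point is the discretized solution of the design
    PDE u'' + k^2 u = 0, i.e. the desired sinusoidal pattern; boundary nodes
    pin u(0) = 0 and u(1) = 1 so the pattern is unique."""
    h = 1.0 / (n - 1)
    u = [0.0] * n
    u[-1] = 1.0
    for _ in range(iters):
        # neighbor-only update from (u[i-1] - 2u[i] + u[i+1])/h^2 + k^2 u[i] = 0
        u = ([u[0]]
             + [(u[i - 1] + u[i + 1]) / (2 - (k * h) ** 2)
                for i in range(1, n - 1)]
             + [u[-1]])
    return u

pattern = local_interaction()
```

    After enough rounds of purely local exchange, the nodes' values collectively approximate sin(πx/2) without any node knowing the global pattern.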

  5. Nonlinear algorithm for task-specific tomosynthetic image reconstruction

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Underhill, Hunter A.; Hemler, Paul F.; Lavery, John E.

    1999-05-01

    This investigation defines and tests a simple, nonlinear, task-specific method for rapid tomosynthetic reconstruction of radiographic images designed to allow an increase in specificity at the expense of sensitivity. Representative lumpectomy specimens containing cancer from human breasts were radiographed with a digital mammographic machine. The resulting projective data were processed to yield a series of tomosynthetic slices distributed throughout the breast. Five board-certified radiologists compared tomographic displays of these tissues processed both linearly (control) and nonlinearly (test) and ranked them in terms of their perceived interpretability. In another task, a different set of nine observers estimated the relative depths of six holes bored in a solid Lucite block as perceived when observed in three dimensions as a tomosynthesized series of test and control slices. All participants preferred the nonlinearly generated tomosynthetic mammograms to those produced conventionally, with or without subsequent deblurring by means of iterative deconvolution. The result was similar (p less than 0.015) when the hole-depth experiment was performed objectively. We therefore conclude that, for certain tasks unduly compromised by tomosynthetic blurring, the nonlinear tomosynthetic reconstruction method described here may improve diagnostic performance with a negligible increase in cost or complexity.
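
    The linear-versus-nonlinear contrast can be sketched on 1-D projections. Conventional (linear) tomosynthesis is shift-and-add: average the projections after registering them to the chosen depth plane. A simple nonlinear alternative replaces the mean with a pixelwise maximum, which keeps a dark feature only where every projection agrees. This operator is an illustrative stand-in, not the paper's exact method, and the data are invented.

```python
def shift(row, k):
    """Shift a 1-D projection right by k pixels (left if k < 0), padding
    with the bright background value 255."""
    pad = [255] * abs(k)
    return (pad + row)[:len(row)] if k > 0 else (row + pad)[-len(row):]

def reconstruct(projections, shifts, mode="mean"):
    """Shift-and-add tomosynthesis of one slice: register each projection to
    the chosen depth plane, then combine pixelwise. 'mean' is the conventional
    linear reconstruction; 'max' is a simple nonlinear combination that keeps
    a dark feature only where every projection agrees: higher specificity,
    lower sensitivity."""
    aligned = [shift(p, k) for p, k in zip(projections, shifts)]
    if mode == "mean":
        return [sum(col) / len(aligned) for col in zip(*aligned)]
    return [max(col) for col in zip(*aligned)]

# An in-plane spot registers at column 3 under shifts (0, 1, 2);
# an out-of-plane spot at column 6 does not register.
p0 = [255] * 8; p0[3] = 0; p0[6] = 0
p1 = [255] * 8; p1[2] = 0; p1[6] = 0
p2 = [255] * 8; p2[1] = 0; p2[6] = 0
linear = reconstruct([p0, p1, p2], [0, 1, 2], "mean")
nonlin = reconstruct([p0, p1, p2], [0, 1, 2], "max")
```

    The linear result leaves the out-of-plane spot as a partial shadow (tomosynthetic blur); the nonlinear result rejects it entirely, at the cost of missing any feature not seen in all projections.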

  6. Design of wavelength-selective waveplates using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Katayama, Ryuichi

    2013-03-01

    Wavelength-selective waveplates, which act either identically or differently for plural wavelengths, are useful for optical systems that handle plural wavelengths. However, they cannot be designed analytically because of the complexity of their structure. The genetic algorithm is a method for solving optimization problems and is used for several kinds of optical design (e.g., the design of thin films, diffractive optical elements, and lenses). I considered it effective for designing wavelength-selective waveplates as well and, to the best of my knowledge, applied the genetic algorithm to their design for the first time. As a result, four types of wavelength-selective waveplate for three wavelengths (405, 650, and 780 nm) were successfully designed. These waveplates are useful for Blu-ray Disc/Digital Versatile Disc/Compact Disc compatible optical pickups.

  7. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss are predicted, given the joint locations and loads.

  8. Recognition of plant parts with problem-specific algorithms

    NASA Astrophysics Data System (ADS)

    Schwanke, Joerg; Brendel, Thorsten; Jensch, Peter F.; Megnet, Roland

    1994-06-01

    Automatic micropropagation is necessary to produce large amounts of biomass cost-effectively. Juvenile plants are dissected in a clean-room environment at particular points on the stem or the leaves. A vision system detects possible cutting points and controls a specialized robot. This contribution is directed to the pattern-recognition algorithms used to detect structural parts of the plant.

  9. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    PubMed

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
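
    The basic PSO update that underlies all three variants (velocity with inertia, cognitive, and social terms) can be sketched as follows. For brevity a sphere function stands in for the ANN fitness; in the paper the particle would encode synaptic weights, connections, and transfer-function choices, and fitness would be the MSE/CER over a training set. All parameter values here are conventional defaults, not the paper's settings.

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic PSO (minimization): each particle keeps a velocity, remembers its
    personal best, and is pulled toward both that and the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]                        # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in fitness; swap in the ANN's MSE/CER for the paper's setting.
sphere = lambda x: sum(v * v for v in x)
best, best_f = pso(sphere, dim=4)
```
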

  10. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms

    PubMed Central

    Garro, Beatriz A.; Vázquez, Roberto A.

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132

  11. Optimal Design of Geodetic Network Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Vajedian, Sanaz; Bagheri, Hosein

    2010-05-01

    A geodetic network is a network which is measured exactly by techniques of terrestrial surveying, based on measurements of angles and distances, and can control the stability of dams, towers, and their surrounding lands, as well as monitor deformation of surfaces. The main goals of an optimal geodetic network design process include finding the proper locations of control stations (First Order Design) as well as the proper weights of observations (Second Order Design) in a way that satisfies all the criteria considered for the quality of the network, which is evaluated by the network's accuracy, reliability (internal and external), sensitivity, and cost. The first-order design problem can be dealt with as a numeric optimization problem. In this design, finding the unknown coordinates of the network stations is an important issue. For finding these unknown values, the network's geodetic observations, that is, angle and distance measurements, must be entered into an adjustment method. In this regard, using inverse problem algorithms is needed. Inverse problem algorithms are methods to find optimal solutions for given problems and include classical and evolutionary computations. The classical approaches are analytical methods and are useful in finding the optimum solution of a continuous and differentiable function. The least squares (LS) method is one of the classical techniques that derive estimates for stochastic variables and their distribution parameters from observed samples. The evolutionary algorithms are adaptive procedures of optimization and search that find solutions to problems inspired by the mechanisms of natural evolution. These methods generate new points in the search space by applying operators to current points and statistically moving toward more optimal places in the search space. The genetic algorithm (GA) is the evolutionary algorithm considered in this paper.
This algorithm starts with the definition of an initial population, and then the operators of selection, replication, and variation are applied
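
    The least-squares adjustment step mentioned above can be sketched on a toy leveling network: two unknown station heights, a fixed benchmark, and one redundant observation. The network, design matrix, and measured values are invented for illustration; real geodetic adjustments also carry observation weights, which this sketch omits.

```python
def least_squares_heights(obs):
    """Classical LS adjustment for two unknown station heights h1, h2 and a
    benchmark fixed at height 0. Each observation is (row of the design
    matrix A, measured height difference l); the estimate solves the
    normal equations (A^T A) x = A^T l, here for the 2x2 case in closed form."""
    n11 = sum(a[0] * a[0] for a, _ in obs)
    n12 = sum(a[0] * a[1] for a, _ in obs)
    n22 = sum(a[1] * a[1] for a, _ in obs)
    b1 = sum(a[0] * l for a, l in obs)
    b2 = sum(a[1] * l for a, l in obs)
    det = n11 * n22 - n12 * n12
    return (n22 * b1 - n12 * b2) / det, (n11 * b2 - n12 * b1) / det

obs = [((1, 0), 10.02),    # h1 - benchmark, measured 10.02
       ((-1, 1), 5.01),    # h2 - h1, measured 5.01
       ((0, 1), 15.05)]    # h2 - benchmark, measured 15.05
h1, h2 = least_squares_heights(obs)
```

    The three observations are slightly inconsistent (10.02 + 5.01 ≠ 15.05), and the adjustment distributes the misclosure among them.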

  12. Space shuttle configuration accounting functional design specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationships of data elements needed to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes and to maintain baseline documentation of the space shuttle program levels II and III.

  13. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining methods do not perform well on online course material, we designed the automatic extracting course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and a designed weighting optimizes the TF-IDF output values; the terms with the highest scores are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy and recall rates. PMID:26448738
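
    The TF-IDF scoring at the heart of the pipeline can be sketched as follows. The toy "course documents" (already segmented into terms) are invented, and the sketch omits the paper's VSM similarity and designed weighting, scoring terms by plain tf × idf only.

```python
import math
from collections import Counter

def tf_idf_scores(docs):
    """Score each term of each document by tf * idf; the top-scoring terms of
    a course document become candidate knowledge points."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency of each term
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return scores

# Hypothetical segmented course documents:
docs = [
    "pointer array pointer loop".split(),
    "loop condition loop".split(),
    "array index array pointer".split(),
]
scores = tf_idf_scores(docs)
```

    Terms that are frequent in one document but rare across the corpus score highest, which is why they make plausible knowledge-point candidates.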

  14. Distributed genetic algorithms for the floorplan design problem

    NASA Technical Reports Server (NTRS)

    Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.

    1991-01-01

    Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate its advantages over other methods, such as simulated annealing. The method performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best solution found, in almost all the problem instances tried.

  15. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  16. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs.

    PubMed

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining methods do not perform well on online course material, we designed the automatic extracting course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and a designed weighting optimizes the TF-IDF output values; the terms with the highest scores are selected as knowledge points. Course documents of "C programming language" were selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy and recall rates. PMID:26448738

  17. Designing an Algorithm Animation System To Support Instructional Tasks.

    ERIC Educational Resources Information Center

    Hamilton-Taylor, Ashley George; Kraemer, Eileen

    2002-01-01

    The authors are conducting a study of instructors teaching data structure and algorithm topics, with a focus on the use of diagrams and tracing. The results of this study are being used to inform the design of the Support Kit for Animation (SKA). This article describes a preliminary version of SKA, and possible usage scenarios. (Author/AEF)

  18. USING GENETIC ALGORITHMS TO DESIGN ENVIRONMENTALLY FRIENDLY PROCESSES

    EPA Science Inventory

    Genetic algorithm calculations are applied to the design of chemical processes to achieve improvements in environmental and economic performance. By finding the set of Pareto (i.e., non-dominated) solutions one can see how different objectives, such as environmental and economic ...

  19. Overdetermined broadband spectroscopic Mueller matrix polarimeter designed by genetic algorithms.

    PubMed

    Aas, Lars Martin Sandvik; Ellingsen, Pål Gunnar; Fladmark, Bent Even; Letnes, Paul Anton; Kildemo, Morten

    2013-04-01

    This paper reports on the design and implementation of a liquid crystal variable retarder based overdetermined spectroscopic Mueller matrix polarimeter, with parallel processing of all wavelengths. The system was designed using a modified version of a recently developed genetic algorithm [Letnes et al., Opt. Express 18, 22, 23095 (2010)]. A generalization of the eigenvalue calibration method is reported that allows the calibration of such overdetermined polarimetric systems. Out of several possible designs, one was experimentally implemented and calibrated. The instrument demonstrated good performance, with a measurement accuracy in the range of 0.1% for the measurement of air. PMID:23571964

  20. Food Design Thinking: A Branch of Design Thinking Specific to Food Design

    ERIC Educational Resources Information Center

    Zampollo, Francesca; Peacock, Matthew

    2016-01-01

    Is there a need for a set of methods within Design Thinking tailored specifically for the Food Design process? Is there a need for a branch of Design Thinking dedicated to Food Design alone? Chefs are not generally trained in Design or Design Thinking, and we are only just beginning to understand how they ideate and what resources are available to…

  1. An Algorithm for the Mixed Transportation Network Design Problem.

    PubMed

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately.
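
    The alternating scheme the abstract describes can be sketched on a toy stand-in problem: minimize f(x, y) = (x − a·y)² + c·y over a continuous variable x (think capacity expansion) and binary y (which candidate links to build). The objective, data, and names are invented; in the real MNDP the lower level is a user-equilibrium assignment, not a closed-form quadratic.

```python
from itertools import product

def ddia(a, c, y0, max_iter=20):
    """Dimension-down iteration: fix y and solve the continuous subproblem
    (the 'CNDP' step, here in closed form x = a.y); fix x and solve the
    discrete subproblem (the 'DNDP' step, here by enumeration); alternate
    until y stops changing."""
    def f(x, y):
        return (x - sum(ai * yi for ai, yi in zip(a, y))) ** 2 \
               + sum(ci * yi for ci, yi in zip(c, y))
    y = list(y0)
    x = sum(ai * yi for ai, yi in zip(a, y))
    for _ in range(max_iter):
        x = sum(ai * yi for ai, yi in zip(a, y))          # CNDP step: argmin over x
        best = min(product((0, 1), repeat=len(a)),
                   key=lambda yy: f(x, yy))               # DNDP step: argmin over y
        if list(best) == y:
            break
        y = list(best)
    return x, y

# As the abstract notes, the result depends on the selection of initial values:
x1, y1 = ddia(a=[2, 3], c=[1, 1], y0=[1, 1])   # settles in a local optimum
x2, y2 = ddia(a=[2, 3], c=[1, 1], y0=[0, 0])   # reaches the global optimum
```
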

  2. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803

  4. Penetrator reliability investigation and design exploration : from conventional design processes to innovative uncertainty-capturing algorithms.

    SciTech Connect

    Martinez-Canales, Monica L.; Heaphy, Robert; Gramacy, Robert B.; Taddy, Matt; Chiesa, Michael L.; Thomas, Stephen W.; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Trucano, Timothy Guy; Gray, Genetha Anne

    2006-11-01

    This project focused on research and algorithmic development in optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking into account uncertainty, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates. We developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive with or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models. We developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB® and initial DAKOTA implementations of our algorithms.

  5. Design of SPARC V8 superscalar pipeline applied Tomasulo's algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Yu, Lixin; Feng, Yunkai

    2014-04-01

    A superscalar pipeline applying Tomasulo's algorithm is presented in this paper. The design begins with a dual-issue superscalar processor based on LEON2. Tomasulo's algorithm is adopted to implement out-of-order execution. Instructions are separated into three different parts and executed by three different function units so as to reduce area and increase execution speed. Results are written back to registers in program order to ensure functional correctness. The mechanisms of the reservation stations, common data bus, and reorder buffer are presented in detail. The structure can issue and execute at most three instructions at a time. Branch prediction is also realized through the reorder buffer. Performance of the superscalar pipeline applying Tomasulo's algorithm is improved by 41.31% compared to the single-issue pipeline.
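
The interplay of issue-time renaming, reservation-station-style waiting, common-data-bus broadcast, and in-order commit can be illustrated with a toy simulator. This is a didactic sketch, not the LEON2-based design; the opcodes, latencies, and registers are invented.

```python
# Minimal Tomasulo-style out-of-order execution with a reorder buffer (ROB)
# that enforces in-order write-back.
from collections import namedtuple

Instr = namedtuple("Instr", "op dest src1 src2 latency")
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def run(program, regs):
    reg_tag = {}                  # architectural register -> tag of pending writer
    rob = []                      # reorder buffer, one entry per issued instruction
    for tag, ins in enumerate(program):          # issue with register renaming
        def operand(r):
            return ("tag", reg_tag[r]) if r in reg_tag else ("val", regs[r])
        rob.append({"tag": tag, "ins": ins, "src1": operand(ins.src1),
                    "src2": operand(ins.src2), "timer": ins.latency,
                    "done": False, "result": None})
        reg_tag[ins.dest] = tag
    finish_order, commit_order, committed = [], [], 0
    while committed < len(rob):
        finished = []
        for e in rob:             # execute: operands ready -> count down latency
            if not e["done"] and e["src1"][0] == "val" and e["src2"][0] == "val":
                e["timer"] -= 1
                if e["timer"] == 0:
                    e["result"] = OPS[e["ins"].op](e["src1"][1], e["src2"][1])
                    e["done"] = True
                    finished.append(e)
                    finish_order.append(e["tag"])
        for e in finished:        # common data bus: broadcast to waiting entries
            for w in rob:
                for k in ("src1", "src2"):
                    if w[k] == ("tag", e["tag"]):
                        w[k] = ("val", e["result"])
        while committed < len(rob) and rob[committed]["done"]:   # in-order commit
            e = rob[committed]
            regs[e["ins"].dest] = e["result"]
            commit_order.append(e["tag"])
            committed += 1
    return finish_order, commit_order

regs = {"r1": 2, "r2": 3, "r3": 4, "r4": 5}
program = [Instr("mul", "r5", "r1", "r2", 3),   # long-latency multiply
           Instr("add", "r6", "r3", "r4", 1),   # independent, finishes first
           Instr("add", "r7", "r5", "r6", 1)]   # depends on both results
finish_order, commit_order = run(program, regs)
```

Here the independent add (tag 1) finishes before the long multiply (tag 0), but the reorder buffer still writes results back in program order.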

  6. Full design of fuzzy controllers using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Homaifar, Abdollah; Mccormick, ED

    1992-01-01

    This paper examines the applicability of genetic algorithms (GA) in the complete design of fuzzy logic controllers. While GA has been used before in the development of rule sets or high performance membership functions, the interdependence between these two components dictates that they should be designed together simultaneously. GA is fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. We show the application of this new method to the development of a cart controller.

  8. A disturbance based control/structure design algorithm

    NASA Technical Reports Server (NTRS)

    Mclaren, Mark D.; Slater, Gary L.

    1989-01-01

    Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.

  9. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but is also more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
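
The selection criterion itself is compact. The sketch below scores each candidate experiment by the Shannon entropy of its predicted, quantized outcome distribution under a hypothetical set of equally probable linear models, and picks the maximizer by exhaustive search; nested entropy sampling replaces exactly this brute-force step.

```python
# Entropy-based experiment selection: choose the measurement location x whose
# predicted outcome distribution has maximal Shannon entropy.
import math
from collections import Counter

SLOPES = [8, 9, 10, 11, 12]   # hypothetical models y = a*x/10, equally probable

def predictive_entropy(x):
    # the (invented) instrument quantizes its reading to integers, so distinct
    # models can predict identical outcomes at small x
    outcomes = Counter((a * x) // 10 for a in SLOPES)
    n = sum(outcomes.values())
    return -sum(c / n * math.log(c / n) for c in outcomes.values())

candidates = [0, 1, 2, 5, 10]
best = max(candidates, key=predictive_entropy)
```

At x = 0 every model predicts the same reading (entropy 0, an uninformative experiment), while x = 10 separates all five models, so the entropy criterion selects it.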

  10. Compiler writing system detail design specification. Volume 1: Language specification

    NASA Technical Reports Server (NTRS)

    Arthur, W. J.

    1974-01-01

    Constructs within the Meta language for both language and target-machine specification are reported. The elements of the function language, both its meaning and its syntax, are presented, and the structure of the target language, which represents the target-dependent object-text representation of applications programs, is described.

  11. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. For matrices with entries from a field, Gaussian elimination plays a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring, for which Gaussian elimination is not defined; triangularization is instead accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
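
The error-free flavor of Euclidean elimination is easy to demonstrate with exact rational arithmetic. The sketch below reduces a single 2-by-1 polynomial column, the elementary step of the triangularization the abstract describes; coefficients are stored low-to-high and never rounded.

```python
# Exact polynomial long division and the Euclidean reduction of one column,
# using rational (Fraction) coefficients so every cancellation is exact.
from fractions import Fraction

def trim(p):                      # drop zero coefficients at the high end
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def polydivmod(num, den):
    """Exact polynomial long division; coefficient lists are low-to-high."""
    num = trim([Fraction(c) for c in num])
    den = trim([Fraction(c) for c in den])
    quot = [Fraction(0)] * max(1, len(num) - len(den) + 1)
    while len(num) >= len(den) and num != [Fraction(0)]:
        shift = len(num) - len(den)
        coef = num[-1] / den[-1]              # exact: no rounding, ever
        quot[shift] = coef
        num = trim([num[i] - (coef * den[i - shift] if i >= shift else 0)
                    for i in range(len(num))])
    return trim(quot), num

def euclid(p, q):
    """Euclidean elimination of a 2x1 polynomial column: returns the monic gcd
    that would appear on the diagonal after triangularization."""
    p, q = trim([Fraction(c) for c in p]), trim([Fraction(c) for c in q])
    while q != [Fraction(0)]:
        _, r = polydivmod(p, q)
        p, q = q, r
    return [c / p[-1] for c in p]             # normalize to a monic polynomial

# (x - 1)(x + 2) and (x - 1)(x - 3) share the factor x - 1
g = euclid([-2, 1, 1], [3, -4, 1])
```

Because every cancellation is exact, the leading term is eliminated completely at each step and no stability question arises; the price is coefficient growth (expression swell), which real implementations must manage.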

  12. Orthogonalizing EM: A design-based least squares algorithm

    PubMed Central

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

    2016-01-01

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558
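
For ordinary least squares, the augmented-orthogonal construction collapses to a particularly simple fixed-point iteration: beta <- beta + X^T(y - X*beta)/d, with d at least the largest eigenvalue of X^T X. The sketch below uses that simplification (with d taken from an eigendecomposition for convenience) rather than the paper's full algorithm.

```python
# Bare-bones sketch of the OEM idea for ordinary least squares: once the design
# is (conceptually) augmented to have orthogonal columns, the EM update reduces
# to a contraction that converges to the least squares solution.
import numpy as np

def oem_ols(X, y, iters=2000):
    d = np.linalg.eigvalsh(X.T @ X)[-1]      # top eigenvalue of X^T X
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = beta + X.T @ (y - X @ beta) / d
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(50)
beta = oem_ols(X, y)
```

For a full-rank design the iteration matrix I - X^T X / d has spectral radius below one, so the sequence converges to the ordinary least squares estimator; the paper's interest is the singular and penalized cases, where the same update converges to the Moore-Penrose solution or a point with grouping coherence.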

  13. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers; hence this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.

  14. Optimal brushless DC motor design using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.

    2010-11-01

    This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.

  15. Thrust vector control algorithm design for the Cassini spacecraft

    NASA Technical Reports Server (NTRS)

    Enright, Paul J.

    1993-01-01

    This paper describes a preliminary design of the thrust vector control algorithm for the interplanetary spacecraft, Cassini. Topics of discussion include flight software architecture, modeling of sensors, actuators, and vehicle dynamics, and controller design and analysis via classical methods. Special attention is paid to potential interactions with structural flexibilities and propellant dynamics. Controller performance is evaluated in a simulation environment built around a multi-body dynamics model, which contains nonlinear models of the relevant hardware and preliminary versions of supporting attitude determination and control functions.

  16. Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard

    2005-01-01

    Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.
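
The two-level idea, an inner trajectory law scored by an outer evolutionary search over competing objectives, can be caricatured with a toy bi-objective problem. The "physics" below (a thrust-duty parameter a and a timing-error parameter b) is invented, not Q-law; the evolutionary loop keeps only mutually non-dominated designs, an approximation of the Pareto front.

```python
# Toy evolutionary search for a (flight time, propellant) Pareto front.
import random

def objectives(x):
    a, b = x
    # stand-in model: heavier thrusting (a -> 1) shortens the transfer but
    # burns more propellant; timing error b hurts both objectives
    return 1.0 / a + 2.0 * b, a + 2.0 * b      # (flight time, propellant)

def dominates(p, q):
    return all(u <= v for u, v in zip(p, q)) and any(u < v for u, v in zip(p, q))

def pareto_filter(pop):
    objs = [objectives(x) for x in pop]
    return [x for i, x in enumerate(pop)
            if not any(dominates(objs[j], objs[i])
                       for j in range(len(pop)) if j != i)]

def evolve(generations=40, size=30, seed=1):
    rng = random.Random(seed)
    clip = lambda v, lo, hi: min(hi, max(lo, v))
    pop = [(rng.uniform(0.05, 1.0), rng.uniform(0.0, 1.0)) for _ in range(size)]
    for _ in range(generations):
        children = [(clip(a + rng.gauss(0, 0.05), 0.05, 1.0),
                     clip(b + rng.gauss(0, 0.05), 0.0, 1.0)) for a, b in pop]
        pop = pareto_filter(pop + children)[:size]   # keep non-dominated designs
    return pop

front = sorted(objectives(x) for x in evolve())
```

Sorting the surviving (flight time, propellant) pairs by time yields non-increasing propellant, the signature of a mutually non-dominated set.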

  17. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and
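
The matrix-free ingredient can be illustrated with an augmented-Lagrangian sketch in which the optimizer touches the Hessian only through a matvec callback. The equality-constrained quadratic below is a toy, not the thesis' aerostructural problem.

```python
# Matrix-free augmented Lagrangian for min 1/2 x^T Q x - c^T x  s.t.  a.x = b,
# where Q is available only through the matvec callback (never formed here).
import numpy as np

def auglag(matvec, c, a, b, rho=10.0, outer=20, inner=200, lr=0.05):
    x = np.zeros_like(c)
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):     # inner loop: gradient descent on L_A(x; lam)
            grad = matvec(x) - c + (lam + rho * (a @ x - b)) * a
            x = x - lr * grad
        lam += rho * (a @ x - b)   # first-order multiplier update
    return x, lam

# toy 2-variable problem; the optimizer only ever calls the lambda below
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, 1.0])
a = np.array([1.0, 1.0])
x, lam = auglag(lambda v: Q @ v, c, a, b=1.0)
```

For this instance the KKT conditions give x = (1/3, 2/3) with multiplier -2/3, which the iteration recovers; at scale the same loop would wrap adjoint-based gradient and Hessian-vector products instead of an explicit Q.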

  19. An efficient parallel algorithm for accelerating computational protein design

    PubMed Central

    Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang

    2014-01-01

    Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we address the problem of excessive memory use in A* search. As a result, our enhancements can achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm could be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1. The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
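
The serial core of A*-over-rotamers is small. In the sketch below (an invented 3-position instance with invented energies, not OSPREY code), nodes are partial rotamer assignments and the heuristic lower-bounds the remaining energy by dropping pair terms between unassigned positions, which keeps it admissible because the toy pair energies are nonnegative.

```python
# A* search over rotamer assignments for a toy rigid-backbone design problem.
import heapq, itertools

ROT = [2, 3, 2]                                   # rotamers per design position
SELF = [[1.0, 2.0], [0.5, 1.5, 0.2], [0.3, 0.6]]  # invented one-body energies

def e_pair(i, ri, j, rj):                         # invented nonnegative two-body
    return ((ri + 2 * rj + i + j) % 3) * 0.4      # energies; called with i < j

def energy(assign):
    """Energy of a (possibly partial) assignment: one-body terms plus all pair
    terms among the assigned positions."""
    n = len(assign)
    return (sum(SELF[i][assign[i]] for i in range(n)) +
            sum(e_pair(i, assign[i], j, assign[j])
                for i in range(n) for j in range(i + 1, n)))

def h(assign):
    """Admissible heuristic: best completion ignoring pair terms between
    unassigned positions (a lower bound since pair energies are >= 0)."""
    k = len(assign)
    return sum(min(SELF[j][r] +
                   sum(e_pair(i, assign[i], j, r) for i in range(k))
                   for r in range(ROT[j]))
               for j in range(k, len(ROT)))

def astar():
    pq = [(h(()), ())]
    while pq:
        f, assign = heapq.heappop(pq)
        if len(assign) == len(ROT):               # first full assignment popped
            return assign, f                      # is the GMEC (h is admissible)
        for r in range(ROT[len(assign)]):
            child = assign + (r,)
            heapq.heappush(pq, (energy(child) + h(child), child))

best, e_best = astar()
```

Because the heuristic never overestimates, the first complete assignment popped from the queue is the GMEC of the toy instance; the paper's contribution is evaluating these heuristic computations and node expansions in parallel on a GPU.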

  20. Designing neuroclassifier fusion system by immune genetic algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Jimin; Zhao, Heng; Yang, Wanhai

    2001-09-01

    A multiple neural network classifier fusion system design method using an immune genetic algorithm (IGA) is proposed. The IGA is modeled after the mechanics of human immunity. By using vaccination and immune selection in the evolution procedures, the IGA outperforms traditional genetic algorithms in restraining degeneration and increasing convergence speed. The fusion system consists of N neural network classifiers that work independently and in parallel to classify a given input pattern. The classifiers' outputs are aggregated by a fusion scheme to decide the collective classification results. The goal of the system design is to obtain a fusion system with both good generalization and efficiency in space and time. Two kinds of measures, the accuracy of classification and the size of the neural networks, are used by the IGA to evaluate the fusion system. The vaccines are abstracted by a self-adaptive scheme during the evolutionary process. A numerical experiment on the 'alternate labels' problem is implemented, and comparisons of the IGA with a traditional genetic algorithm are presented.

  1. Optimal design of link systems using successive zooming genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Doo; Sohn, Chang-hyun; Kwon, Soon-Bum; Lim, Jae-gyoo

    2009-07-01

    Link systems have been around for a long time and are still used to control motion in diverse applications such as automobiles, robots and industrial machinery. This study presents a procedure involving the use of a genetic algorithm for the optimal design of single four-bar link systems and of a double four-bar link system used in a diesel engine. We adopted the Successive Zooming Genetic Algorithm (SZGA), which has one of the most rapid convergence rates among global search algorithms. The results are verified by experiment and by the RecurDyn dynamic motion analysis package. During the optimal design of single four-bar link systems, we found that, in the case of identical input/output (IO) angles, the initial and final configurations exhibit a certain symmetry. For the double link system, we introduced weighting factors for the multi-objective functions, which minimize the difference between output angles, providing balanced engine performance, as well as the difference between the final output angle and its desired magnitude. We adopted a graphical method to select a proper ratio between the weighting factors.
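
The successive-zooming mechanism is independent of the link-system application and fits in a dozen lines: search the current window, then shrink the window around the incumbent and repeat. The 1-D objective below is a toy stand-in, not the paper's kinematic model.

```python
# Successive-zooming search: a short coarse search per stage, then the search
# window is contracted around the best point found so far.
import random

def szga(f, lo, hi, zoom=0.5, stages=8, samples=60, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(stages):
        pop = [rng.uniform(lo, hi) for _ in range(samples)]
        best = min(pop, key=f)                  # coarse search on current window
        width = (hi - lo) * zoom                # zoom in around the incumbent
        lo, hi = best - width / 2, best + width / 2
    return best

# toy objective with a sharp minimum at x = 1.234
f = lambda x: (x - 1.234) ** 2 + 0.1 * abs(x - 1.234)
x_star = szga(f, -10.0, 10.0)
```

Each stage halves the window while the sampling density inside it stays fixed, which is what gives the method its rapid convergence; a real SZGA would run a genetic algorithm, rather than uniform sampling, inside each window.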

  2. Neural-network-biased genetic algorithms for materials design

    NASA Astrophysics Data System (ADS)

    Patra, Tarak; Meenakshisundaram, Venkatesh; Simmons, David

    Machine learning tools have been progressively adopted by the materials science community to accelerate design of materials with targeted properties. However, in the search for new materials exhibiting properties and performance beyond that previously achieved, machine learning approaches are frequently limited by two major shortcomings. First, they are intrinsically interpolative. They are therefore better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require the availability of large datasets, which in some fields are not available and would be prohibitively expensive to produce. Here we describe a new strategy for combining genetic algorithms, neural networks and other machine learning tools, and molecular simulation to discover materials with extremal properties in the absence of pre-existing data. Predictions from progressively constructed machine learning tools are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct molecular dynamics simulation. We survey several initial materials design problems we have addressed with this framework and compare its performance to that of standard genetic algorithm approaches. We acknowledge the W. M. Keck Foundation for support of this work.

  3. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  4. OPTIMASK: an OPC algorithm for chrome and phase-shift mask design

    NASA Astrophysics Data System (ADS)

    Barouch, Eytan; Hollerbach, Uwe; Vallishayee, Rakesh R.

    1995-05-01

    A mask correction algorithm (OPTIMASK) has been designed and implemented. Its main ingredients are optical proximity correction (OPC) and optical design rule checker (ODRC). The algorithm is based on the lithographic notion that a mask has to print throughout its defocus budget, taking into account multiple defocus planes. In each defocus plane the aerial image is computed using FAIM, and the design failures are reported via ODRC. The mask correction is subjected to physical restrictions that do not allow any feature couplings to occur. The union of the failures at all defocus values determines the first step taken in correcting the mask. Then a (constrained) Newton optimization scheme is applied to optimize line shrinkage, linewidth control, and corner rounding errors. All the tools needed to optimize a specific layer within a particular cell and return the optimized layer to the original mask file have been implemented. Several examples will be shown.

  5. Design principles and algorithms for automated air traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  6. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

    This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
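
The FCFS core of such a scheduler can be sketched directly from the description above: take aircraft in order of estimated time of arrival (ETA) and give each the earliest slot, on any runway, that respects its ETA and the inter-arrival separation. The separation value and ETAs below are illustrative only.

```python
# FCFS runway assignment: each arrival lands as early as possible on the
# runway that frees up first, never earlier than its ETA.
SEPARATION = 90.0     # illustrative seconds between landings on one runway

def schedule(etas, runways=2):
    last = [float("-inf")] * runways       # last scheduled landing per runway
    plan = []
    for eta in sorted(etas):               # first come, first served by ETA
        slot = [max(eta, last[r] + SEPARATION) for r in range(runways)]
        r = min(range(runways), key=lambda i: slot[i])
        last[r] = slot[r]
        plan.append((eta, r, slot[r], slot[r] - eta))  # (eta, runway, time, delay)
    return plan

plan = schedule([0.0, 10.0, 30.0, 100.0, 110.0])
```

Delay appears exactly when demand outpaces the separation constraint on both runways; a TMA-style scheduler layers runway balancing, delay allocation between airspace regions, and cost models on top of this skeleton.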

  7. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    46 CFR 162.050-21, Separator: Design specification (2011-10-01 edition). (a) A separator must be designed to operate in each plane that forms an.... (c) Each separator component that is a moving part must be designed so that its movement....

  8. Bias and design in software specifications

    NASA Technical Reports Server (NTRS)

    Straub, Pablo A.; Zelkowitz, Marvin V.

    1990-01-01

    Implementation bias in a specification is an arbitrary constraint in the solution space. Presented here is a model of bias in software specifications. Bias is defined in terms of the specification process and a classification of the attributes of the software product. Our definition of bias provides insight into both the origin and the consequences of bias. It also shows that bias is relative and essentially unavoidable. Finally, we describe current work on defining a measure of bias, formalizing our model, and relating bias to software defects.

  9. As-built design specification for MISMAP

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Cheng, D. E.; Tompkins, M. A. (Principal Investigator)

    1981-01-01

    The MISMAP program, which is part of the CLASFYT package, is described. The program is designed to compare classification values with ground truth values for a segment and produce a comparison map and summary table.

  10. Radiological containment selection, design, and specification guide

    SciTech Connect

    Brown, R.L.

    1994-11-01

    This document provides guidance to Tank Waste Remediation Systems personnel in determining what containment is appropriate for work activities, what containments are available, general applications of each, design criteria, and other information needed to make informed decisions concerning containment application.

  11. Experimental designs for small randomised clinical trials: an algorithm for choice

    PubMed Central

    2013-01-01

    Background Small clinical trials are necessary when there are difficulties in recruiting enough patients for conventional frequentist statistical analyses to provide an appropriate answer. These trials are often necessary for the study of rare diseases as well as specific study populations, e.g. children. It has been estimated that there are between 6,000 and 8,000 rare diseases that cover a broad range of diseases and patients. In the European Union these diseases affect up to 30 million people, with about 50% of those affected being children. Therapies for treating these rare diseases need their efficacy and safety evaluated but due to the small number of potential trial participants, a standard randomised controlled trial is often not feasible. There are a number of alternative trial designs to the usual parallel group design, each of which offers specific advantages, but they also have specific limitations. Thus the choice of the most appropriate design is not simple. Methods PubMed was searched to identify publications about the characteristics of different trial designs that can be used in randomised, comparative small clinical trials. In addition, the tables of contents from 11 journals were hand-searched. An algorithm was developed using decision nodes based on the characteristics of the identified trial designs. Results We identified 75 publications that reported the characteristics of 12 randomised, comparative trial designs that can be used for the evaluation of therapies in orphan diseases. The main characteristics and the advantages and limitations of these designs were summarised and used to develop an algorithm that may be used to help select an appropriate design for a given clinical situation. We used examples from publications of given disease-treatment-outcome situations, in which the investigators had used a particular trial design, to illustrate the use of the algorithm for the identification of possible alternative designs. Conclusions The
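
    A decision-node algorithm of this kind reduces, in code, to a chain of yes/no questions about the clinical situation. The abstract does not reproduce the published decision nodes, so the criteria and design names below are hypothetical stand-ins that only illustrate the general shape of such a chooser.

    ```python
    def suggest_design(chronic_stable_condition, treatment_effect_reversible,
                       many_candidate_treatments):
        """Illustrative decision nodes for choosing a small-trial design.

        All three criteria and the returned design names are hypothetical
        examples, not the decision nodes of the published algorithm.
        """
        if chronic_stable_condition and treatment_effect_reversible:
            # Within-patient comparisons are possible.
            return "crossover (or n-of-1) design"
        if many_candidate_treatments:
            # Several arms compete for few patients.
            return "adaptive / multi-arm design"
        return "parallel-group design"
    ```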

  12. Optimal robust motion controller design using multiobjective genetic algorithm.

    PubMed

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm-differential evolution. PMID:24987749
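
    The optimizer named above, differential evolution, is compact enough to sketch directly. The sphere test function, bounds, and parameter values below are illustrative stand-ins for the paper's controller-design objectives, and this is the plain single-objective DE/rand/1/bin variant rather than the paper's multiobjective setup.

    ```python
    import numpy as np

    def differential_evolution(f, bounds, pop_size=30, F=0.5, CR=0.9,
                               generations=300, seed=0):
        """Minimal single-objective DE/rand/1/bin minimiser (illustrative)."""
        rng = np.random.default_rng(seed)
        dim = len(bounds)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        cost = np.array([f(x) for x in pop])
        for _ in range(generations):
            for i in range(pop_size):
                # Mutation: perturb a random vector with a scaled difference.
                a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)
                mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
                # Binomial crossover with at least one mutant component.
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True
                trial = np.where(cross, mutant, pop[i])
                # Greedy selection keeps the better vector.
                fc = f(trial)
                if fc <= cost[i]:
                    pop[i], cost[i] = trial, fc
        best = int(np.argmin(cost))
        return pop[best], cost[best]

    sphere = lambda x: float(np.sum(x * x))
    x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 5)
    ```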

  13. The McCollough Facial Rejuvenation System: expanding the scope of a condition-specific algorithm.

    PubMed

    McCollough, E Gaylon; Ha, Chi D

    2012-02-01

    importantly, a condition-specific system matches each potential patient's problems--at every age--with the appropriate facial rejuvenation treatment plan, restoring the ideals of science and art to the profession. Initially provided in a consumer information book devised to assist patients with understanding the advantages of personalized treatment plans, the senior author later shared his practices and evolving system with colleagues attending conventions, seminars, and courses. Only after he was convinced that his system could be of benefit to physicians and surgeons from a variety of backgrounds was it offered to the peer-reviewed medical literature. Clearly, a plethora of techniques and materials are available for facial rejuvenation; however, only the ones deemed to be worthy of consideration were included. In practice--and in this presentation--the authors expanded the scope of the previously published article and offer a user-friendly, condition-specific worksheet and algorithmic tables designed to make it easier for surgeons to select the right combinations of procedures--at the right time in a patient's life. Although imitations potentiate an environment of disharmony, the authors remain committed to enabling the evolution of a single facial rejuvenation classification system, one that--with the input of like-minded scholars--could restore needed order to a branch of the medical profession that, in recent years, seems to have lost its focus.

  14. Computational Tools and Algorithms for Designing Customized Synthetic Genes

    PubMed Central

    Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris

    2014-01-01

    Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
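
    The "first generation" singular objective mentioned above, codon usage optimization, is simple enough to sketch: each amino acid is recoded with its most frequent synonymous codon from a usage table. The three-entry table below is an illustrative fragment with made-up frequencies, not a real organism's codon-usage data.

    ```python
    # amino acid -> {codon: relative frequency}; illustrative values only
    CODON_USAGE = {
        "M": {"ATG": 1.00},
        "K": {"AAA": 0.74, "AAG": 0.26},
        "F": {"TTT": 0.57, "TTC": 0.43},
    }

    def codon_optimize(protein):
        """Recode a protein with the highest-frequency synonymous codon
        for each residue (single-objective codon-usage optimization)."""
        return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get)
                       for aa in protein)

    codon_optimize("MKF")  # -> "ATGAAATTT"
    ```

    Multi-objective tools must trade this greedy per-residue choice against competing constraints such as restriction-site removal and secondary-structure avoidance, which is what makes the general design problem computationally hard.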

  15. Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2015-06-01

    Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. Experiments show the algorithm producing SNNs that effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that could use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design"; extra non-functional neurons were included that, while inefficient, did not hamper its proper functioning.

  16. IMCS reflight certification requirements and design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The requirements for reflight certification are established. Software requirements encompass the software programs that are resident in the PCC, DEP, PDSS, EC, or any related GSE. A design approach for the reflight software packages is recommended. These designs will be of sufficient detail to permit the implementation of reflight software. The PDSS/IMC Reflight Certification system provides the tools and mechanisms for the user to perform the reflight certification test procedures, test data capture, test data display, and test data analysis. The system as defined will be structured to permit maximum automation of reflight certification procedures and test data analysis.

  17. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using the Bézier curve and a B-spline. Numerical simulation based on a Reynolds averaged Navier-Stokes (RANS) turbulence model is done in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, while initial samples are selected according to an orthogonal array. Then global Pareto-optimal solutions are obtained and analysed. The results show that unexpected flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.
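
    The Pareto-optimal solutions mentioned above come from NSGA-II's non-dominated sorting step, which partitions candidate designs into successive Pareto fronts. A minimal sketch for two minimization objectives follows; the point values are made up, and this is the simple quadratic-time formulation rather than NSGA-II's bookkeeping-optimized "fast" variant.

    ```python
    def dominates(a, b):
        """a dominates b (minimization): no worse in every objective,
        strictly better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def non_dominated_sort(points):
        """Partition objective vectors into successive Pareto fronts."""
        remaining = list(range(len(points)))
        fronts = []
        while remaining:
            # A point is in the current front if nothing left dominates it.
            front = [i for i in remaining
                     if not any(dominates(points[j], points[i])
                                for j in remaining)]
            fronts.append(front)
            remaining = [i for i in remaining if i not in front]
        return fronts

    pts = [(1, 5), (2, 4), (3, 3), (2, 5)]
    non_dominated_sort(pts)  # -> [[0, 1, 2], [3]]
    ```

    In the pump study each point would be a surrogate-predicted objective pair for one impeller geometry, so the first front is the estimated Pareto set.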

  18. Genetic algorithm to optimize the design of main combustor and gas generator in liquid rocket engines

    NASA Astrophysics Data System (ADS)

    Son, Min; Ko, Sangho; Koo, Jaye

    2014-06-01

    A genetic algorithm was used to develop optimal design methods for the regenerative cooled combustor and fuel-rich gas generator of a liquid rocket engine. For the combustor design, a chemical equilibrium analysis was applied, and the profile was calculated using Rao's method. One-dimensional heat transfer was assumed along the profile, and cooling channels were designed. For the gas-generator design, non-equilibrium properties were derived from a counterflow analysis, and a vaporization model for the fuel droplet was adopted to calculate residence time. Finally, a genetic algorithm was adopted to optimize the designs. The combustor and gas generator were optimally designed for 30-tonf, 75-tonf, and 150-tonf engines. The optimized combustors demonstrated superior design characteristics when compared with previous non-optimized results. Wall temperatures at the nozzle throat were optimized to satisfy the requirement of 800 K, and specific impulses were maximized. In addition, the target turbine power and a burned-gas temperature of 1000 K were obtained from the optimized gas-generator design.

  19. Geometric design of mechanical linkages for contact specifications

    NASA Astrophysics Data System (ADS)

    Robson, Nina Patarinsky

    2008-10-01

    This dissertation focuses on the kinematic synthesis of mechanical linkages in order to guide an end-effector so that it maintains contact with specified objects in its workspace. Assuming the serial chain does not have full mobility in its workspace, the contact geometry is used to determine the dimensions of the serial chain. The approach to this problem is to use the relative curvature of the contact of the end-effector with one or more objects to define velocity and acceleration specifications for its movement. This provides kinematic constraints that are used to synthesize the dimensions of the serial chain. The mathematical formulation of the geometric design problem leads to systems of multivariable polynomial equations, which are solved exactly using sparse matrix resultants and polynomial homotopy methods. The results from this research yield planar RR and 4R linkages that match a specified contact geometry, and spatial TS, parallel RRS, and perpendicular RRS linkages that satisfy a required acceleration specification. A new strategy for robot recovery from actuator failures is demonstrated for the Mars Exploratory Rover Arm. In extending this work to spatial serial chains, a new method based on sparse matrix resultants was developed, which solves exact synthesis problems with acceleration constraints. Further, the research builds on the theoretical concepts of contact relationships for spatial movement. The connection between kinematic synthesis and contact problems and its extension to spatial synthesis are developed in this dissertation for the first time and are new contributions. The results, which rely upon the use of surface curvature effects to reduce the number of fixtures needed to immobilize an object, find applications in robot grasping and part-fixturing. The recovery strategy presented in this research is also a new concept. The recognition that it is possible to reconfigure a crippled robotic system to achieve mission-critical tasks can guide

  20. 46 CFR 162.050-33 - Bilge alarm: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 6 2011-10-01 2011-10-01 false Bilge alarm: Design specification. 162.050-33 Section....050-33 Bilge alarm: Design specification. (a) This section contains requirements that apply to bilge alarms. (b) Each bilge alarm must be designed to meet the requirements for an oil content meter in §...

  1. Advanced algorithms for radiographic material discrimination and inspection system design

    NASA Astrophysics Data System (ADS)

    Gilbert, Andrew J.; McDonald, Benjamin S.; Deinert, Mark R.

    2016-10-01

    X-ray and neutron radiography are powerful tools for non-invasively inspecting the interior of objects. However, current methods are limited in their ability to differentiate materials when multiple materials are present, especially within large and complex objects. Past work has demonstrated that the spectral shift that X-ray beams undergo in traversing an object can be used to detect and quantify nuclear materials. The technique uses a spectrally sensitive detector and an inverse algorithm that varies the composition of the object until the X-ray spectrum predicted by X-ray transport matches the one measured. Here we show that this approach can be adapted to multi-mode radiography, with energy integrating detectors, and that the Cramér-Rao lower bound can be used to choose an optimal set of inspection modes a priori. We consider multi-endpoint X-ray radiography alone, or in combination with neutron radiography using deuterium-deuterium (DD) or deuterium-tritium (DT) sources. We show that for an optimal mode choice, the algorithm can improve discrimination between high-Z materials, specifically between tungsten and plutonium, and estimate plutonium mass within a simulated nuclear material storage system to within 1%.
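
    The a priori mode selection described above can be illustrated with a deliberately simplified model: a single unknown parameter (areal density t), Poisson counts per mode with mean N·exp(-a·t), and the Cramér-Rao lower bound (CRLB) as the reciprocal of the summed Fisher information. The mode parameters and the one-parameter model are illustrative assumptions, not the paper's full transport model.

    ```python
    import math
    from itertools import combinations

    def crlb_for_modes(modes, t):
        """CRLB on a single areal-density parameter t for a set of modes.

        Each mode is (N, a): source fluence N and effective attenuation a,
        giving Poisson counts of mean N*exp(-a*t). Per-mode Fisher
        information is (d mean/dt)**2 / mean = N * a**2 * exp(-a*t); the
        CRLB is the reciprocal of the total information.
        """
        info = sum(N * a * a * math.exp(-a * t) for N, a in modes)
        return 1.0 / info

    # Choose the best pair of inspection modes a priori (illustrative numbers:
    # e.g. two X-ray endpoints, a DD-neutron-like and a DT-neutron-like mode).
    modes = [(1e6, 0.2), (1e6, 0.5), (1e5, 1.0), (1e6, 0.05)]
    t = 5.0
    best_pair = min(combinations(modes, 2),
                    key=lambda pair: crlb_for_modes(pair, t))
    ```

    Ranking candidate mode sets by CRLB before any measurement is taken is the design step the abstract refers to as choosing an optimal set of inspection modes a priori.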

  2. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  3. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  4. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen. They are located at the four corners and in the middle of the top and the bottom edges. Three LEDs are enlightened in the odd frames, and the other three are enlightened in the even frames. A simulator is furnished with one camera, which is used to obtain the image of the LEDs by applying inter-frame difference between the even and odd frames. In the resulting difference images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera based on a pinhole model, four equations can be found using the relationship between the image coordinate system and the world coordinate system with perspective transformation. The center point of the image of LEDs is supposed to be at the virtual shooting point. The perspective transformation matrix is applied to the coordinate of the center point. Then we can obtain the virtual shooting point's coordinate in the world coordinate system. When a game player shoots a target about two meters away, using the method discussed in this paper, the calculated coordinate error is less than ten mm. We can obtain 65 coordinate results per second, which meets the requirement of a real-time system. This proves the algorithm is reliable and effective.
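
    The front half of the chain described above — inter-frame difference, threshold, and per-spot centroids — can be sketched directly. The pure-Python flood fill below stands in for a real connected-component routine, and the threshold value is an illustrative assumption.

    ```python
    def detect_spots(odd_frame, even_frame, threshold=50):
        """Return (row, col) centroids of bright spots in |odd - even|."""
        h, w = len(odd_frame), len(odd_frame[0])
        # Inter-frame difference isolates the alternately lit LEDs.
        mask = [[abs(odd_frame[r][c] - even_frame[r][c]) > threshold
                 for c in range(w)] for r in range(h)]
        seen = [[False] * w for _ in range(h)]
        centroids = []
        for r in range(h):
            for c in range(w):
                if mask[r][c] and not seen[r][c]:
                    stack, pixels = [(r, c)], []
                    seen[r][c] = True
                    while stack:                      # flood fill one spot
                        y, x = stack.pop()
                        pixels.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                    # Centroid of the spot's pixels.
                    cy = sum(p[0] for p in pixels) / len(pixels)
                    cx = sum(p[1] for p in pixels) / len(pixels)
                    centroids.append((cy, cx))
        return centroids
    ```

    The paper's next steps, pinhole calibration and the perspective transformation mapping the spot centroids to the world frame, would consume these image coordinates.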

  5. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Centre, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  6. AFEII Analog Front End Board Design Specifications

    SciTech Connect

    Rubinov, Paul; /Fermilab

    2005-04-01

    This document describes the design of the 2nd iteration of the Analog Front End Board (AFEII), which has the function of receiving charge signals from the Central Fiber Tracker (CFT) and providing digital hit pattern and charge amplitude information from those charge signals. This second iteration is intended to address limitations of the current AFE (referred to as AFEI in this document). These limitations become increasingly deleterious to the performance of the Central Fiber Tracker as instantaneous luminosity increases. The limitations are inherent in the design of the key front end chips on the AFEI board (the SVXIIe and the SIFT) and the architecture of the board itself. The key limitations of the AFEI are: (1) SVX saturation; (2) Discriminator to analog readout cross talk; (3) Tick to tick pedestal variation; and (4) Channel to channel pedestal variation. The new version of the AFE board, AFEII, addresses these limitations by use of a new chip, the TriP-t and by architectural changes, while retaining the well understood and desirable features of the AFEI board.

  7. Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung

    2016-07-01

    In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.
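
    The quantum-inspired evolutionary step above can be sketched in its generic binary form: each "qubit" is an angle whose sin² gives the probability of observing bit 1, and the angles rotate toward the best solution found so far. The onemax fitness, population size, and rotation step are illustrative assumptions, not the paper's controller-gain encoding or objectives.

    ```python
    import math, random

    def qiea_maximize(fitness, n_bits, pop=10, iters=100,
                      dtheta=0.05 * math.pi, seed=1):
        """Minimal quantum-inspired EA sketch (generic binary form)."""
        rng = random.Random(seed)
        theta = [math.pi / 4] * n_bits            # p(bit = 1) = 0.5 initially
        best_bits, best_fit = None, float("-inf")
        lo, hi = 0.1, math.pi / 2 - 0.1           # keep residual randomness
        for _ in range(iters):
            for _ in range(pop):
                # "Observe" each qubit to collapse it to a classical bit.
                bits = [1 if rng.random() < math.sin(t) ** 2 else 0
                        for t in theta]
                f = fitness(bits)
                if f > best_fit:
                    best_bits, best_fit = bits, f
            # Rotate each qubit angle toward the best solution's bit.
            theta = [min(hi, t + dtheta) if b == 1 else max(lo, t - dtheta)
                     for t, b in zip(theta, best_bits)]
        return best_bits, best_fit

    bits, fit = qiea_maximize(sum, n_bits=12)   # onemax as a toy objective
    ```

    In the paper's setting the bit string would encode candidate polynomial controller gains and the fitness would be the closed-loop performance index, with stability checked afterwards under the proposed conditions.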

  8. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Separator: Design specification. (a) A separator must be designed to operate in each plane that forms an angle of 22.5° with the plane of its normal operating position. (b) The electrical components of...

  9. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Separator: Design specification. (a) A separator must be designed to operate in each plane that forms an angle of 22.5° with the plane of its normal operating position. (b) The electrical components of...

  10. Algorithm to design inhibitors using stereochemically mixed l,d polypeptides: Validation against HIV protease.

    PubMed

    Gupta, Pooja; Durani, Susheel

    2015-11-01

    Polypeptides have potential to be designed as drugs or inhibitors against desired targets. In polypeptides, every chiral α-amino acid can adopt either the l or the d enantiomeric form, and both forms can be used as design monomers. Among the various possibilities, the use of stereochemistry as a design tool has potential to determine both the functional specificity and the metabolic stability of the designed polypeptides. Polypeptides with mixed l,d amino acids are a class of peptidomimetics, attractive drug-like molecules that are also less susceptible to proteolytic activity. Therefore, in this study, a three-step algorithm is proposed to design polypeptides against desired drug targets. For this, all possible configurational isomers of a mixed l,d polyleucine (Ac-Leu8-NHMe) structure were randomly modeled with simulated annealing molecular dynamics, and the resultant library of discrete folds was scored against HIV protease as a model target. The best-scored folds of mixed l,d structures were inverse optimized for sequences in situ, and the resultant sequences as inhibitors were validated for conformational integrity using molecular dynamics. This study presents and validates an algorithm to design polypeptides of mixed l,d structures as drugs/inhibitors by inverse fitting them as molecular ligands against desired targets.

  11. Design of Protein-Protein Interactions with a Novel Ensemble-Based Scoring Algorithm

    NASA Astrophysics Data System (ADS)

    Roberts, Kyle E.; Cushing, Patrick R.; Boisguerin, Prisca; Madden, Dean R.; Donald, Bruce R.

    Protein-protein interactions (PPIs) are vital for cell signaling, protein trafficking and localization, gene expression, and many other biological functions. Rational modification of PPI targets provides a mechanism to understand their function and importance. However, PPI systems often have many more degrees of freedom and flexibility than the small-molecule binding sites typically targeted by protein design algorithms. To handle these challenging design systems, we have built upon the computational protein design algorithm K* [8,19] to develop a new design algorithm to study protein-protein and protein-peptide interactions. We validated our algorithm through the design and experimental testing of novel peptide inhibitors.

  12. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

  13. A homogeneous superconducting magnet design using a hybrid optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ni, Zhipeng; Wang, Qiuliang; Liu, Feng; Yan, Luguang

    2013-12-01

    This paper employs a hybrid optimization algorithm with a combination of linear programming (LP) and nonlinear programming (NLP) to design the highly homogeneous superconducting magnets for magnetic resonance imaging (MRI). The whole work is divided into two stages. The first LP stage provides a global optimal current map with several non-zero current clusters, and the mathematical model for the LP was updated by taking into account the maximum axial and radial magnetic field strength limitations. In the second NLP stage, the non-zero current clusters were discretized into practical solenoids. The superconducting conductor consumption was set as the objective function both in the LP and NLP stages to minimize the construction cost. In addition, the peak-peak homogeneity over the volume of imaging (VOI), the scope of 5 Gauss fringe field, and maximum magnetic field strength within superconducting coils were set as constraints. The detailed design process for a dedicated 3.0 T animal MRI scanner was presented. The homogeneous magnet produces a magnetic field quality of 6.0 ppm peak-peak homogeneity over a 16 cm by 18 cm elliptical VOI, and the 5 Gauss fringe field was limited within a 1.5 m by 2.0 m elliptical region.

  14. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. An LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
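
    The LMS-based TDE idea above reduces to a short loop: an adaptive FIR filter learns the mapping from one sensor to the delayed other, and the delay estimate is the lag of the largest filter weight. The white-noise signal, filter length, and step size below are illustrative stand-ins for the infrasound system's actual parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_delay, L, mu = 3, 8, 0.05
    x = rng.standard_normal(4000)                 # reference sensor (white noise)
    d = np.roll(x, true_delay)                    # second sensor: delayed copy

    w = np.zeros(L)                               # adaptive FIR weights
    for n in range(L, len(x)):
        window = x[n - L + 1:n + 1][::-1]         # x[n], x[n-1], ..., x[n-L+1]
        e = d[n] - w @ window                     # prediction error
        w += mu * e * window                      # LMS weight update

    # The dominant weight sits at the lag equal to the inter-sensor delay.
    estimated_delay = int(np.argmax(np.abs(w)))
    ```

    With four sensors, the pairwise delay estimates obtained this way feed the bearing computation.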

  15. Development of hybrid genetic algorithms for product line designs.

    PubMed

    Balakrishnan, P V Sundar; Gupta, Rakesh; Jacob, Varghese S

    2004-02-01

    In this paper, we investigate the efficacy of artificial intelligence (AI) based meta-heuristic techniques, namely genetic algorithms (GAs), for the product line design problem. This work extends previously developed methods for the single product design problem. We conduct a large scale simulation study to determine the effectiveness of such an AI based technique for providing good solutions and benchmark its performance against the current dominant approach of beam search (BS). We investigate the potential advantages of pursuing the avenue of developing hybrid models and then implement and study such hybrid models using two very distinct approaches: namely, seeding the initial GA population with the BS solution, and employing the BS solution as part of the GA operator's process. We go on to examine the impact of two alternate string representation formats on the quality of the solutions obtained by the above proposed techniques. We also explicitly investigate a critical managerial factor of attribute importance in terms of its impact on the solutions obtained by the alternate modeling procedures. The alternate techniques are then evaluated, using statistical analysis of variance, on a fairly large number of data sets, as to the quality of the solutions obtained with respect to the state-of-the-art benchmark and in terms of their ability to provide multiple, unique product line options.
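
    The first hybridization described above, seeding the initial GA population with the beam-search (BS) solution, can be sketched as follows. With elitism, the hybrid can never return a solution worse than the BS seed. The bit-string encoding, onemax fitness, and operator choices are illustrative assumptions, not the paper's product-line model.

    ```python
    import random

    def hybrid_ga(fitness, seed_solution, n_bits, pop=20, gens=50,
                  p_mut=0.02, rng=None):
        """GA whose initial population is seeded with a BS solution."""
        rng = rng or random.Random(0)
        population = [seed_solution] + [
            [rng.randint(0, 1) for _ in range(n_bits)]
            for _ in range(pop - 1)
        ]
        for _ in range(gens):
            scored = sorted(population, key=fitness, reverse=True)
            children = [scored[0]]                         # elitism
            while len(children) < pop:
                p1, p2 = rng.sample(scored[:pop // 2], 2)  # truncation selection
                cut = rng.randrange(1, n_bits)             # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [b ^ 1 if rng.random() < p_mut else b for b in child]
                children.append(child)
            population = children
        return max(population, key=fitness)

    onemax = sum
    bs_seed = [1, 1, 1, 0, 0, 0, 0, 0]   # stand-in for a beam-search solution
    best = hybrid_ga(onemax, bs_seed, n_bits=8)
    ```

    The paper's second hybrid would instead invoke the BS solution inside the GA operators; the seeding variant above is the simpler of the two to implement.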

  16. Orbit design and estimation for surveillance missions using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Abdelkhalik, Osama Mohamed Omar

    2005-11-01

    The problem of observing a given set of Earth target sites within an assigned time frame is examined. Attention is given mainly to visiting these sites as sub-satellite nadir points. Solutions to this problem in the literature require thrusters to continuously maneuver the satellite from one site to another. A natural solution is proposed: a gravitational orbit that enables the spacecraft to satisfy the mission requirements without maneuvering. Optimization of a penalty function is performed to find natural solutions for satellite orbit configurations. This penalty function depends on the mission objectives. Two mission objectives are considered: maximum observation time and maximum resolution. The penalty function possesses multiple minima, and a genetic algorithm technique is used to solve this problem. When no single orbit satisfies the mission requirements, a multi-orbit solution is proposed. In a multi-orbit solution, the set of target sites is split into two groups. Then the developed algorithm is used to search for a natural solution for each group. The satellite has to be maneuvered between the two solution orbits. Genetic algorithms are used to find the optimal orbit transfer between the two orbits using impulsive thrusters. A new formulation for solving the orbit maneuver problem using genetic algorithms is developed. The developed formulation searches for a minimum fuel consumption maneuver and guarantees that the satellite will be transferred exactly to the final orbit even if the solution is non-optimal. The results obtained demonstrate the feasibility of finding natural solutions for many case studies. The problem of the design of suitable satellite constellation for Earth observing applications is addressed. Two cases are considered. The first is the remote sensing missions for a particular region with high frequency and small swath width. The second is the interferometry radar Earth observation missions.
In satellite

  17. An advancing front Delaunay triangulation algorithm designed for robustness

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1993-01-01

    A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.
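The empty-circumcircle property that defines a Delaunay triangulation reduces to the sign of a determinant; a minimal sketch of that predicate follows (a textbook illustration of the underlying geometry, not Mavriplis's mesh generator):

```python
def in_circumcircle(a, b, c, d):
    """Sign of the incircle determinant: positive when point d lies inside
    the circumcircle of the counterclockwise triangle (a, b, c). A Delaunay
    triangulation is one whose every triangle has an empty circumcircle,
    i.e. this test is non-positive for all other mesh points."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return (
        (ax * ax + ay * ay) * (bx * cy - by * cx)
        - (bx * bx + by * by) * (ax * cy - ay * cx)
        + (cx * cx + cy * cy) * (ax * by - ay * bx)
    )

tri = ((0, 0), (4, 0), (0, 3))            # counterclockwise triangle
print(in_circumcircle(*tri, (1, 1)) > 0)  # → True  (inside, violates Delaunay)
print(in_circumcircle(*tri, (5, 5)) > 0)  # → False (outside, Delaunay-safe)
```

Advancing-front Delaunay methods of the kind described apply this test repeatedly while deciding where to insert new points.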

  18. An advancing front Delaunay triangulation algorithm designed for robustness

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1992-01-01

    A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.

  19. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Pollution Prevention Equipment § 162.050-21 Separator: Design specification. (a) A separator must be designed to operate in each plane that forms an angle of 22.5° with the plane of its normal operating position. (b) The electrical components of...

  20. Gateway design specification for fiber optic local area networks

    NASA Technical Reports Server (NTRS)

    1985-01-01

    This is a Design Specification for a gateway to interconnect fiber optic local area networks (LANs). The internetworking protocols for a gateway device that will interconnect multiple local area networks are defined. This specification serves as input for preparation of detailed design specifications for the hardware and software of a gateway device. General characteristics to be incorporated in the gateway such as node address mapping, packet fragmentation, and gateway routing features are described.

  1. Functional design specification for the problem data system. [space shuttle

    NASA Technical Reports Server (NTRS)

    Boatman, T. W.

    1975-01-01

    The purpose of the Functional Design Specification is to outline the design for the Problem Data System. The Problem Data System is a computer-based data management system designed to track the status of problems and corrective actions pertinent to space shuttle hardware.

  2. UXO Engineering Design. Technical Specification and Conceptual Design

    SciTech Connect

    Beche, J-F.; Doolittle, L.; Greer, J.; Lafever, R.; Radding, Z.; Ratti, A.; Yaver, H.; Zimmermann, S.

    2005-04-23

    The design and fabrication of the UXO detector poses numerous challenges and is an important component of the success of this study. This section describes the overall engineering approach, as well as some of the technical details that brought us to the present design. In general, an array of sensor coils measures the signal generated by the UXO object in response to a stimulation provided by the driver coil. The information related to the location, shape and properties of the object is derived from the analysis of the measured data. Each sensor coil is instrumented with a waveform digitizer operating at a nominal digitization rate of 100 kSamples per second. The sensor coils record both the large transient pulse of the driver coil and the UXO object response pulse. The latter is smaller in amplitude and must be extracted from the large transient signal. The resolution required is 16 bits over a dynamic range of at least 140 dB. The useful signal bandwidth of the application extends from DC to 40 kHz. The low distortion of each component is crucial in order to maintain an excellent linearity over the full dynamic range and to minimize the calibration procedure. The electronics must be made as compact as possible so that its metallic parts contribute a minimal signature response. Also, because of a field system portability requirement, the power consumption of the instrument must be kept as low as possible. The theory and results of numerical and experimental studies that led to the proof-of-principle multitransmitter-multireceiver Active ElectroMagnetic (AEM) system, which can not only accurately detect but also characterize and discriminate UXO targets, are summarized in LBNL report-53962: "Detection and Classification of Buried Metallic Objects, UX-1225".

  3. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  4. Transitioning from conceptual design to construction performance specification

    NASA Astrophysics Data System (ADS)

    Jeffers, Paul; Warner, Mark; Craig, Simon; Hubbard, Robert; Marshall, Heather

    2012-09-01

    On successful completion of a conceptual design review by a funding agency or customer, there is a transition phase before construction contracts can be placed. The nature of this transition phase depends on the Project's approach to construction and the particular subsystem being considered. There are generically two approaches: project retention of design authority with issuance of build-to-print contracts, or issuance of subsystem performance specifications with controlled interfaces. This paper relates to the latter, where a proof of concept (conceptual or reference design) is translated into performance-based sub-system specifications for competitive tender. This translation is not a straightforward process, and there are a number of different issues to consider. This paper deals primarily with the telescope mount and enclosure subsystems. The main subjects considered in this paper are: • Typical status of design at Conceptual Design Review compared with the desired status of Specifications and Interface Control Documents at Request for Quotation. • Options for capture and tracking of system requirements flow down from science / operating requirements and sub-system requirements, and functional requirements derived from reference design. • Requirements that may come specifically from the contracting approach. • Methods for effective use of reference design work without compromising a performance based specification. • Management of the project team's expectations relating to design. • Effects on cost estimates from reference design to actual. This paper is based on experience and lessons learned through this process on both the VISTA and the ATST projects.

  5. Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    NASA Technical Reports Server (NTRS)

    Keller, Richard M. (Editor); Barstow, David; Lowry, Michael R.; Tong, Christopher H.

    1992-01-01

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across the particular software design domain include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interface.

  6. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057
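The flavor of such a PSO-GA hybrid, in which a GA-style crossover step is interleaved with the particle updates, can be sketched on a toy objective. The parameters and objective here are illustrative, not the paper's supply-chain model:

```python
import random

def pso_ga_minimize(f, dim=4, particles=20, iters=200):
    """Toy PSO with a GA-style arithmetic-crossover step mixed into each
    iteration, in the spirit of a PSO-GA hybrid (a sketch, not the
    authors' formulation)."""
    rng = random.Random(1)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]               # personal bests
    gbest = min(X, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):            # standard PSO velocity update
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - X[i][d])
                           + 1.5 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
        # GA step: crossover of two random personal bests replaces the worst
        a, b = rng.sample(range(particles), 2)
        child = [(u + v) / 2 for u, v in zip(pbest[a], pbest[b])]
        worst = max(range(particles), key=lambda i: f(pbest[i]))
        if f(child) < f(pbest[worst]):
            pbest[worst] = child
        gbest = min(pbest, key=f)[:]
    return gbest

sphere = lambda x: sum(v * v for v in x)    # stand-in objective
best = pso_ga_minimize(sphere)
print(sphere(best) < 1e-3)
```

In the paper's setting the objective would instead score a candidate production and distribution plan under the fuzzy defect and transport-loss ratios.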

  7. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057

  8. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods.

  9. SEPAC flight software detailed design specifications, volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The detailed design specifications (as built) for the SEPAC Flight Software are defined. The design includes a description of the total software system and of each individual module within the system. The design specifications describe the decomposition of the software system into its major components. The system structure is expressed in the following forms: the control-flow hierarchy of the system, the data-flow structure of the system, the task hierarchy, the memory structure, and the software to hardware configuration mapping. The component design description includes details on the following elements: register conventions, module (subroutine) invocation, module functions, interrupt servicing, data definitions, and database structure.

  10. Epitope-Specific Binder Design by Yeast Surface Display.

    PubMed

    Mann, Jasdeep K; Park, Sheldon

    2015-01-01

    Yeast surface display is commonly used to engineer affinity and design novel molecular interactions. By alternating positive and negative selections, yeast display can be used to engineer binders that specifically interact with the target protein at a defined site. Epitope-specific binders can be useful as inhibitors if they bind the target molecule at functionally important sites. Therefore, an efficient method of engineering epitope specificity should help with the engineering of inhibitors. We describe the use of yeast surface display to design single domain monobodies that bind and inhibit the activity of the kinase Erk-2 by targeting a conserved surface patch involved in protein-protein interaction. The designed binders can be used to disrupt signaling in the cell and investigate Erk-2 function in vivo. The described protocol is general and can be used to design epitope-specific binders of an arbitrary protein. PMID:26060073

  11. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
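The underlying computation, averaging a dose map over a segmented region and comparing expert versus automated contours, can be sketched as follows. This is a toy 2D illustration of why boundary errors tend to average out in a mean organ dose, not the study's pipeline:

```python
import numpy as np

def mean_organ_dose(dose_map, labels, organ_id):
    """Mean dose over the voxels a segmentation assigns to one organ."""
    return float(dose_map[labels == organ_id].mean())

# Illustrative 2D "dose map" and two segmentations of the same organ,
# with the automated contour shifted by one voxel column.
dose = np.linspace(0.0, 1.0, 100).reshape(10, 10)
expert = np.zeros((10, 10), dtype=int)
expert[3:7, 3:7] = 1                      # expert organ region
auto = np.roll(expert, 1, axis=1)         # auto contour off by one column
d_expert = mean_organ_dose(dose, expert, 1)
d_auto = mean_organ_dose(dose, auto, 1)
err = abs(d_auto - d_expert) / d_expert * 100
print(round(err, 2))  # → 2.02 (percent error in mean organ dose)
```

Even though every boundary voxel differs between the two contours, the mean dose error stays small, which is the hypothesis the study tests at clinical scale.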

  12. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    Methods for development of logic design together with algorithms for failure testing, a method for design of logic for ultra-large-scale integration, extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to computer tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output 2-level minimization problem are discussed.

  13. The potential of genetic algorithms for conceptual design of rotor systems

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Wells, Valana L.; Laananen, David H.

    1993-01-01

    The capabilities of genetic algorithms as a non-calculus based, global search method make them potentially useful in the conceptual design of rotor systems. Coupling reasonably simple analysis tools to the genetic algorithm was accomplished, and the resulting program was used to generate designs for rotor systems to match requirements similar to those of both an existing helicopter and a proposed helicopter design. This provides a comparison with the existing design and also provides insight into the potential of genetic algorithms in design of new rotors.

  14. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  15. A Proposed India-Specific Algorithm for Management of Type 2 Diabetes.

    PubMed

    2016-06-01

    Several algorithms and guidelines have been proposed by countries and international professional bodies; however, no recent updated management algorithm is available for Asian Indians. Specifically, algorithms developed and validated in developed nations may not be relevant or applicable to patients in India because of several factors: early age of onset of diabetes, occurrence of diabetes in nonobese and sometimes lean people, differences in the relative contributions of insulin resistance and β-cell dysfunction, marked postprandial glycemia, frequent infections including tuberculosis, low access to healthcare and medications in people of low socioeconomic stratum, ethnic dietary practices (e.g., ingestion of high-carbohydrate diets), and inadequate education regarding hypoglycemia. All these factors should be considered to choose appropriate therapeutic options in this population. The proposed algorithm is simple, suggests less expensive drugs, and tries to provide an effective and comprehensive framework for delivery of diabetes therapy in primary care in India. The proposed guidelines agree with international recommendations in favoring individualization of therapeutic targets as well as modalities of treatment in a flexible manner suitable to the Indian population. PMID:26909751

  16. A Learning Design Ontology Based on the IMS Specification

    ERIC Educational Resources Information Center

    Amorim, Ricardo R.; Lama, Manuel; Sanchez, Eduardo; Riera, Adolfo; Vila, Xose A.

    2006-01-01

    In this paper, we present an ontology to represent the semantics of the IMS Learning Design (IMS LD) specification, a meta-language used to describe the main elements of the learning design process. The motivation of this work relies on the expressiveness limitations found on the current XML-Schema implementation of the IMS LD conceptual model. To…

  17. Electrostatic precipitator guidelines: Volume 1, Design specifications: Final report

    SciTech Connect

    Altin, C.A.; Grieco, G.J.

    1987-06-01

    The report includes three companion manuals for design specifications, operations and maintenance, and troubleshooting. Although the manuals primarily address users having some knowledge of precipitator design and operation, they provide enough background material and precipitator theory to make them useful as training aids. The loose-leaf format will allow updating.

  18. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction techniques, and to design algorithms for evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.

  19. Specificity and Sensitivity of Claims-Based Algorithms for Identifying Members of Medicare+Choice Health Plans That Have Chronic Medical Conditions

    PubMed Central

    Rector, Thomas S; Wickstrom, Steven L; Shah, Mona; Thomas Greeenlee, N; Rheault, Paula; Rogowski, Jeannette; Freedman, Vicki; Adams, John; Escarce, José J

    2004-01-01

    Objective To examine the effects of varying diagnostic and pharmaceutical criteria on the performance of claims-based algorithms for identifying beneficiaries with hypertension, heart failure, chronic lung disease, arthritis, glaucoma, and diabetes. Study Setting Secondary 1999–2000 data from two Medicare+Choice health plans. Study Design Retrospective analysis of algorithm specificity and sensitivity. Data Collection Physician, facility, and pharmacy claims data were extracted from electronic records for a sample of 3,633 continuously enrolled beneficiaries who responded to an independent survey that included questions about chronic diseases. Principal Findings Compared to an algorithm that required a single medical claim in a one-year period that listed the diagnosis, either requiring that the diagnosis be listed on two separate claims or requiring that the diagnosis be listed on one claim for a face-to-face encounter with a health care provider significantly increased specificity for the conditions studied by 0.03 to 0.11. Specificity of algorithms was significantly improved by 0.03 to 0.17 when both a medical claim with a diagnosis and a pharmacy claim for a medication commonly used to treat the condition were required. Sensitivity improved significantly by 0.01 to 0.20 when the algorithm relied on a medical claim with a diagnosis or a pharmacy claim, and by 0.05 to 0.17 when two years rather than one year of claims data were analyzed. Algorithms with specificity greater than 0.95 were found for all six conditions. Sensitivity above 0.90 was not achieved for all conditions. Conclusions Varying claims criteria improved the performance of case-finding algorithms for six chronic conditions. Highly specific, and sometimes sensitive, algorithms for identifying members of health plans with several chronic conditions can be developed using claims data. PMID:15533190
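The sensitivity/specificity trade-off of stricter claims criteria can be illustrated with a toy calculation. The member data below are hypothetical, not the study's:

```python
def sens_spec(flagged, truth):
    """Sensitivity and specificity of a claims-based case-finding rule
    against a reference standard (e.g. survey responses)."""
    tp = sum(f and t for f, t in zip(flagged, truth))
    tn = sum(not f and not t for f, t in zip(flagged, truth))
    fn = sum(not f and t for f, t in zip(flagged, truth))
    fp = sum(f and not t for f, t in zip(flagged, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical members: number of claims listing a diagnosis, and true status.
n_claims = [0, 1, 2, 0, 3, 1, 0, 2, 0, 1]
truth    = [False, False, True, False, True, True, False, True, False, False]

one_claim = [n >= 1 for n in n_claims]   # lenient rule: any claim counts
two_claims = [n >= 2 for n in n_claims]  # stricter rule: two separate claims
print(sens_spec(one_claim, truth))       # high sensitivity, lower specificity
print(sens_spec(two_claims, truth))      # lower sensitivity, higher specificity
```

As in the study's findings, tightening the criterion moves cases from false positives to true negatives at the cost of missing some true cases.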

  20. Design Principles of Regulatory Networks: Searching for the Molecular Algorithms of the Cell

    PubMed Central

    Lim, Wendell A.; Lee, Connie M.; Tang, Chao

    2013-01-01

    A challenge in biology is to understand how complex molecular networks in the cell execute sophisticated regulatory functions. Here we explore the idea that there are common and general principles that link network structures to biological functions, principles that constrain the design solutions that evolution can converge upon for accomplishing a given cellular task. We describe approaches for classifying networks based on abstract architectures and functions, rather than on the specific molecular components of the networks. For any common regulatory task, can we define the space of all possible molecular solutions? Such inverse approaches might ultimately allow the assembly of a design table of core molecular algorithms that could serve as a guide for building synthetic networks and modulating disease networks. PMID:23352241

  1. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

    Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent if reviewing fine anatomical structures such as ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered acceptably aligned.

  2. Designing a mirrored Howland circuit with a particle swarm optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Bertemes-Filho, Pedro; Negri, Lucas H.; Vincence, Volney C.

    2016-06-01

    Electrical impedance spectroscopy usually requires a wide bandwidth current source with high output impedance. Non-idealities of the operational amplifier (op-amp) degrade its performance. This work presents a particle swarm algorithm for extracting the main AC characteristics of the op-amp used to design a mirrored modified Howland current source circuit which satisfies both the output current and the impedance spectra required. User specifications were accommodated. Both resistive and biological loads were used in the simulations. The results showed that the algorithm can correctly identify the open-loop gain and the input and output resistance of the op-amp which best fit the performance requirements of the circuit. It was also shown that the higher the open-loop gain corner frequency the higher the output impedance of the circuit. The algorithm could be a powerful tool for developing a desirable current source for different bioimpedance medical and clinical applications, such as cancer tissue characterisation and tissue cell measurements.

  3. Using a Genetic Algorithm to Design Nuclear Electric Spacecraft

    NASA Technical Reports Server (NTRS)

    Pannell, William P.

    2003-01-01

    The basic approach to designing nuclear electric spacecraft is to generate a group of candidate designs, evaluate how "fit" the designs are, and carry the best designs forward to the next generation. Some designs are eliminated; others are randomly modified and carried forward.

  4. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  5. Using patient-specific phantoms to evaluate deformable image registration algorithms for adaptive radiation therapy.

    PubMed

    Stanley, Nick; Glide-Hurst, Carri; Kim, Jinkoo; Adams, Jeffrey; Li, Shunshan; Wen, Ning; Chetty, Indrin J; Zhong, Hualiang

    2013-11-04

    The quality of adaptive treatment planning depends on the accuracy of its underlying deformable image registration (DIR). The purpose of this study is to evaluate the performance of two DIR algorithms, B-spline-based deformable multipass (DMP) and deformable demons (Demons), implemented in a commercial software package. Evaluations were conducted using both computational and physical deformable phantoms. Based on a finite element method (FEM), a total of 11 computational models were developed from a set of CT images acquired from four lung cancer patients and one prostate cancer patient. FEM-generated displacement vector fields (DVF) were used to construct the lung and prostate image phantoms. Based on a fast Fourier transform technique, an image noise power spectrum was incorporated into the prostate image phantoms to create simulated CBCT images. The FEM-DVF served as a gold standard for verification of the two registration algorithms performed on these phantoms. The registration algorithms were also evaluated at the homologous points quantified in the CT images of a physical lung phantom. The results indicated that the mean errors of the DMP algorithm were in the range of 1.0-3.1 mm for the computational phantoms and 1.9 mm for the physical lung phantom. For the computational prostate phantoms, the corresponding mean error was 1.0-1.9 mm in the prostate, 1.9-2.4 mm in the rectum, and 1.8-2.1 mm over the entire patient body. Sinusoidal errors induced by B-spline interpolations were observed in all the displacement profiles of the DMP registrations. Regions of large displacements were observed to have more registration errors. Patient-specific FEM models have been developed to evaluate the DIR algorithms implemented in the commercial software package. It has been found that the accuracy of these algorithms is patient dependent and related to various factors including tissue deformation magnitudes and image intensity gradients across the regions of interest.
This may suggest that
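    The evaluation metric implied above can be sketched directly: with a FEM-generated displacement vector field (DVF) as gold standard, the registration error at each voxel is the Euclidean distance between predicted and reference displacement vectors, summarised as a mean over a region of interest. Array shapes and names here are illustrative, not from the paper's software.

```python
import numpy as np

def mean_registration_error(dvf_pred, dvf_gold, mask=None):
    """dvf_* : arrays of shape (nx, ny, nz, 3), displacements in mm."""
    err = np.linalg.norm(dvf_pred - dvf_gold, axis=-1)  # per-voxel distance
    if mask is not None:
        err = err[mask]                                  # restrict to an ROI
    return float(err.mean())

# Tiny synthetic check: a constant 1 mm offset along x everywhere.
gold = np.zeros((4, 4, 4, 3))
pred = gold.copy()
pred[..., 0] += 1.0
print(mean_registration_error(pred, gold))  # 1.0
```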

  6. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks †

    PubMed Central

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

    Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR into a corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.
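    The "critical actor" test described above reduces, in graph terms, to finding cut vertices: an actor is critical if removing it disconnects the inter-actor graph. Below is a brute-force global sketch of that property (PCR itself decides this from localized neighbourhood information, which this toy does not model):

```python
def is_connected(nodes, edges, removed=None):
    """Check connectivity of an undirected graph after deleting one node."""
    nodes = [n for n in nodes if n != removed]
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u != removed and v != removed:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {nodes[0]}, [nodes[0]]   # depth-first traversal
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return len(seen) == len(nodes)

def critical_actors(nodes, edges):
    return [n for n in nodes if not is_connected(nodes, edges, removed=n)]

# A chain 1-2-3: actor 2 is critical, actors 1 and 3 are not.
print(critical_actors([1, 2, 3], [(1, 2), (2, 3)]))  # [2]
```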

  7. Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dinc, Ali

    2016-09-01

    In this study, a genuine code was developed for the optimization of selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) by employing an elitist genetic algorithm. First, preliminary sizing of the UAV and its turboprop engine was done by the code for a given mission profile. Secondly, single- and multi-objective optimizations were done for selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In the first case of single-objective optimization, UAV loiter time was improved by 17.5% from baseline within the given boundaries, or constraints, on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% from baseline. In the multi-objective optimization case, where the previous two objectives are considered together, loiter time and specific power were increased by 14.2% and 9.7% from baseline, respectively, for the same constraints.

  8. Thermoluminescence curves simulation using genetic algorithm with factorial design

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-05-01

    The evolutionary approach is an effective optimization tool for the numerical analysis of thermoluminescence (TL) processes, used to assess the microparameters of kinetic models and to determine their effects on the shape of TL peaks. In this paper, a procedure for tuning a genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows choosing the intrinsic mechanisms of evolutionary operators that provide the most efficient algorithm performance. The proposed method is tested on the “one trap-one recombination center” (OTOR) model as an example, and its advantages for the approximation of experimental TL curves are shown.
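    As context for the kind of forward model such a GA would fit: the full OTOR model requires solving coupled rate equations, but its first-order (Randall-Wilkins) approximation gives a glow peak directly. The sketch below integrates that approximation numerically; the kinetic parameter values are illustrative, not the paper's.

```python
import math

# First-order (Randall-Wilkins) TL glow peak:
#   I(T) = n0 * s * exp(-E/kT) * exp(-(s/beta) * integral exp(-E/kT') dT')
# E: activation energy (eV), s: frequency factor (1/s), beta: heating rate.
K_B = 8.617e-5                          # Boltzmann constant, eV/K
E, s, n0, beta = 1.0, 1e12, 1.0, 1.0    # illustrative values

def glow_curve(temps):
    intensity, integral, t_prev = [], 0.0, temps[0]
    for t in temps:
        # trapezoidal accumulation of the thermal-escape integral
        integral += 0.5 * (t - t_prev) * (
            math.exp(-E / (K_B * t)) + math.exp(-E / (K_B * t_prev)))
        t_prev = t
        intensity.append(n0 * s * math.exp(-E / (K_B * t))
                         * math.exp(-(s / beta) * integral))
    return intensity

temps = [250 + 0.5 * i for i in range(400)]   # 250-450 K ramp
curve = glow_curve(temps)
peak_t = temps[curve.index(max(curve))]       # peak temperature, K
```

A GA of the kind described above would adjust E and s until the simulated curve matches a measured one.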

  9. Designing Domain-Specific HUMS Architectures: An Automated Approach

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Agarwal, Neha; Kumar, Pramod; Sundaram, Parthiban

    2004-01-01

    The HUMS automation system automates the design of HUMS architectures. The automated design process involves selection of solutions from a large space of designs as well as pure synthesis of designs. Hence the whole objective is to efficiently search for or synthesize designs or parts of designs in the database and to integrate them to form the entire system design. The automation system adopts two approaches in order to produce the designs: (a) a bottom-up approach and (b) a top-down approach. Both approaches are endowed with a suite of qualitative and quantitative techniques that enable a) the selection of matching component instances, b) the determination of design parameters, c) the evaluation of candidate designs at component level and at system level, d) the performance of cost-benefit analyses, e) the performance of trade-off analyses, etc. In short, the automation system attempts to capitalize on the knowledge developed from years of experience in the engineering, design and operation of HUMS systems in order to economically produce optimal, domain-specific designs.

  10. DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...

  11. High pressure humidification columns: Design equations, algorithm, and computer code

    SciTech Connect

    Enick, R.M.; Klara, S.M.; Marano, J.J.

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed tower, humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.

  12. Design specifications for manufacturability of MCM-C multichip modules

    SciTech Connect

    Allen, C.; Blazek, R.; Desch, J.; Elarton, J.; Kautz, D.; Markley, D.; Morgenstern, H.; Stewart, R.; Warner, L.

    1995-06-01

    The scope of this document is to establish design guidelines for electronic circuitry packaged as multichip modules of the ceramic substrate variety, although many of these guidelines are applicable to other types of multichip modules. The guidelines begin with prerequisite information which must be developed between customer and designer of the multichip module. The core of the guidelines focuses on the many considerations that must be addressed during the multichip module design. The guidelines conclude with the resulting deliverables from the design which satisfy customer requirements and/or support the multichip module fabrication and testing processes. Considerable supporting information, checklists, and design constraints are captured in specific appendices and used as reference information in the main body text. Finally some real examples of multichip module design are presented.

  13. A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns.

    PubMed

    Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim

    2015-01-01

    The aim of this study was to present a new training algorithm for artificial neural networks called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO) applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.

  14. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

    PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14 vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of mean square error by subject and activity suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842
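    The branching idea above can be sketched as follows: per-epoch PAEE is taken from the HR regression when heart rate indicates activity and from the ACC regression otherwise, with a weighted blend near the branch point. The coefficients and thresholds below are illustrative placeholders, not the study's calibration values.

```python
# Hypothetical branch thresholds (bpm) and per-subject regression fits.
HR_REST, HR_FLEX = 60.0, 90.0

def hr_paee(hr):
    return max(0.0, 0.12 * (hr - HR_REST))      # kJ/min, placeholder fit

def acc_paee(counts):
    return 0.002 * counts                       # kJ/min, placeholder fit

def branched_paee(hr, counts):
    if hr >= HR_FLEX:
        return hr_paee(hr)          # HR is reliable during activity
    if hr <= HR_REST:
        return acc_paee(counts)     # HR is unreliable near rest
    w = (hr - HR_REST) / (HR_FLEX - HR_REST)    # blend in between
    return w * hr_paee(hr) + (1 - w) * acc_paee(counts)

epochs = [(55, 100), (75, 800), (120, 2500)]    # (HR bpm, ACC counts)
daily = [branched_paee(hr, c) for hr, c in epochs]
```

The "post hoc optimization" described in the abstract would tune the thresholds and blend weights to minimise RMS error against criterion PAEE.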

  15. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  16. Specification, Design, and Analysis of Advanced HUMS Architectures

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    2004-01-01

    During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas. They are: (a) To improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) To improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) To evaluate the software architecture. 2) We have defined a new architectural language called HADL or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using the currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) Selection of solutions from a large space of designs; (b) Synthesis of designs. However, the automation process is not an absolute Artificial Intelligence (AI) approach though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. 
Since searching is adopted as the main technique, the challenges involved are: (a) To minimize the effort in searching the database where a very large number of possibilities exist; (b) To develop representations that could conveniently allow us to depict design knowledge evolved over many years; (c) To capture the required information that aid the

  17. Algorithm Of Revitalization Programme Design For Housing Estates

    NASA Astrophysics Data System (ADS)

    Ostańska, Anna

    2015-09-01

    Demographic problems, the obsolescence of existing buildings, an unstable economy, and misunderstanding of the mechanisms that turn city quarters into areas in need of intervention result in the implementation of improvement measures that prove inadequate. The paper puts forward an algorithm for designing revitalization programmes for housing developments and presents its implementation. It also shows the effects of three-way diagnostic tests run periodically (every 10 years) in correlation with the concept of settlement management.

  18. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculations methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head and neck treatments.

  19. Design Genetic Algorithm Optimization Education Software Based Fuzzy Controller for a Tricopter Fly Path Planning

    ERIC Educational Resources Information Center

    Tran, Huu-Khoa; Chiou, Juing -Shian; Peng, Shou-Tao

    2016-01-01

    In this paper, the feasibility of Genetic Algorithm Optimization (GAO) education software based on a Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is demonstrated. The generated flight trajectories integrate the Scaling Factor (SF) fuzzy controller gains optimized by the GAO algorithm. The…

  20. Fuzzy logic algorithm to extract specific interaction forces from atomic force microscopy data

    NASA Astrophysics Data System (ADS)

    Kasas, Sandor; Riederer, Beat M.; Catsicas, Stefan; Cappella, Brunero; Dietler, Giovanni

    2000-05-01

    The atomic force microscope is not only a very convenient tool for studying the topography of different samples, but it can also be used to measure specific binding forces between molecules. For this purpose, one type of molecule is attached to the tip and the other one to the substrate. Approaching the tip to the substrate allows the molecules to bind together. Retracting the tip breaks the newly formed bond. The rupture of a specific bond appears in the force-distance curves as a spike from which the binding force can be deduced. In this article we present an algorithm to automatically process force-distance curves in order to obtain bond strength histograms. The algorithm is based on a fuzzy logic approach that permits an evaluation of "quality" for every event and makes the detection procedure much faster than manual selection. The software has been applied to measure the binding strength between tubulin and microtubule-associated proteins.
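    The fuzzy scoring idea can be sketched as follows: each candidate dip in the retract curve gets membership values for "deep relative to noise" and "sharp recovery", combined with a fuzzy AND (minimum). The membership ramps and thresholds below are invented for illustration; the published software's rule base is richer.

```python
def membership(x, lo, hi):
    # Piecewise-linear ramp: 0 below lo, 1 above hi.
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def rupture_events(force, noise=0.05, quality_cut=0.5):
    """Return (index, quality) for local force minima that score well."""
    events = []
    for i in range(1, len(force) - 1):
        if force[i] < force[i - 1] and force[i] < force[i + 1]:
            depth = membership(-force[i] / noise, 1.0, 4.0)
            sharp = membership((force[i + 1] - force[i]) / noise, 1.0, 4.0)
            q = min(depth, sharp)              # fuzzy AND of the two rules
            if q >= quality_cut:
                events.append((i, q))
    return events

# One clear rupture dip at index 3 in an otherwise flat, noisy trace.
trace = [0.0, -0.02, 0.01, -0.40, 0.02, -0.01, 0.0]
print(rupture_events(trace))  # [(3, 1.0)]
```

Shallow noise dips score near zero membership and are rejected, which is what makes the approach faster and more reproducible than manual spike selection.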

  1. Experiences with the hydraulic design of the high specific speed Francis turbine

    NASA Astrophysics Data System (ADS)

    Obrovsky, J.; Zouhar, J.

    2014-03-01

    The high specific speed Francis turbine is still a suitable alternative for the refurbishment of older hydro power plants with lower heads and worse cavitation conditions. The paper introduces the design process for this kind of turbine, together with a comparison of results from homologous model tests performed in the hydraulic laboratory of ČKD Blansko Engineering. The turbine runner was designed using an optimization algorithm and considering the high specific speed hydraulic profile; hydraulic profiles of the spiral case, the distributor and the draft tube were taken from a Kaplan turbine. The optimization was run as an automatic cycle and was based on a simplex optimization method as well as on a genetic algorithm. The number of blades is shown to be the parameter which changes the resulting specific speed of the turbine between ns = 425 and 455, together with the cavitation characteristics. Minimizing cavitation on the blade surface as well as on the inlet edge of the runner blade was taken into account during the design process. The results of the CFD analyses as well as the model tests are presented in the paper.

  2. NASA software specification and evaluation system design, part 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A survey and analysis of the existing methods, tools and techniques employed in the development of software are presented, along with recommendations for the construction of reliable software. Functional designs for the software specification language and the database verifier are presented.

  3. An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1999-01-01

    The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.

  4. INCORPORATING ENVIRONMENTAL AND ECONOMIC CONSIDERATIONS INTO PROCESS DESIGN: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...

  5. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    PubMed Central

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-01-01

    The purpose of this study was to investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for 7 disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head & neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and Monte Carlo algorithms to obtain the average range differences (ARD) and root mean square deviation (RMSD) for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation (ADD) of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing Monte Carlo dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head & neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head & neck treatments. We conclude that currently used generic range uncertainty margins in proton therapy should be redefined site specific and that complex geometries may require a field specific
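    The recommended margins above all have the form a% + b mm; as a small worked sketch, applying them to a nominal water-equivalent beam range is simple arithmetic (the dictionary below just transcribes the abstract's numbers):

```python
# (fractional term, absolute term in mm) per site, from the abstract above.
MARGINS = {
    "liver": (0.028, 1.2),
    "prostate": (0.028, 1.2),
    "whole_brain": (0.031, 1.2),
    "generic": (0.063, 1.2),   # breast / lung / head & neck without MC
}

def range_margin_mm(site, nominal_range_mm):
    frac, absolute = MARGINS.get(site, MARGINS["generic"])
    return frac * nominal_range_mm + absolute

print(range_margin_mm("liver", 100.0))   # ~4.0 mm for a 100 mm range
```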

  6. A design guide and specification for small explosive containment structures

    SciTech Connect

    Marchand, K.A.; Cox, P.A.; Polcyn, M.A.

    1994-12-01

    The design of structural containments for testing small explosive devices requires the designer to consider the various aspects of the explosive loading, i.e., shock and gas or quasistatic pressure. Additionally, if the explosive charge has the potential of producing damaging fragments, provisions must be made to arrest the fragments. This may require that the explosive be packed in a fragment attenuating material, which also will affect the loads predicted for containment response. Material also may be added just to attenuate shock, in the absence of fragments. Three charge weights are used in the design. The actual charge is used to determine a design fragment. Blast loads are determined for a "design charge", defined as 125% of the operational charge in the explosive device. No yielding is permitted at the design charge weight. Blast loads are also determined for an over-charge, defined as 200% of the operational charge in the explosive device. Yielding, but no failure, is permitted at this over-charge. This guide emphasizes the calculation of loads and fragments for which the containment must be designed. The designer has the option of using simplified or complex design-analysis methods. Examples in the guide use readily available single degree-of-freedom (sdof) methods, plus static methods for equivalent dynamic loads. These are the common methods for blast resistant design. Some discussion of more complex methods is included. Generally, the designer who chooses more complex methods must be fully knowledgeable in their use and limitations. Finally, newly fabricated containments initially must be proof tested to 125% of the operational load and then inspected at regular intervals. This specification provides guidance for design, proof testing, and inspection of small explosive containment structures.

  7. Evaluation of a segmentation algorithm designed for an FPGA implementation

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Schönermark, Maria; Huber, Felix

    2013-10-01

    The present work has to be seen in the context of real-time on-board image evaluation of optical satellite data. With on-board image evaluation, more useful data can be acquired, the time to get requested information can be decreased, and new real-time applications become possible. Because of its relatively high processing power in comparison to its low power consumption, Field Programmable Gate Array (FPGA) technology has been chosen as an adequate hardware platform for image processing tasks. One fundamental part of image evaluation is image segmentation. It is a basic tool to extract spatial image information, which is very important for many applications such as object detection. Therefore a special segmentation algorithm using the advantages of FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation evaluation is a difficult task. The most common way of evaluating the performance of a segmentation method is still subjective evaluation, in which human experts determine the quality of a segmentation. This way is not in compliance with our needs. The evaluation process has to provide a reasonable quality assessment, should be objective, easy to interpret and simple to execute. To meet these requirements a so-called Segmentation Accuracy Equality norm (SA EQ) was created, which compares the difference of two segmentation results. It can be shown that this norm is suitable as a first quality measure. Due to its objectivity and simplicity the algorithm has been tested on a specially chosen synthetic test model. In this work the most important results of the quality assessment are presented.
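    The abstract does not define SA EQ precisely; a minimal label-agreement version of the idea (fraction of pixels on which two label maps agree) can be sketched as below. This is a simplification assuming matched label IDs, not the published norm.

```python
def segmentation_agreement(a, b):
    """a, b: equal-length flat lists of region labels with matched IDs."""
    same = sum(1 for x, y in zip(a, b) if x == y)
    return same / len(a)

ref = [0, 0, 1, 1, 2, 2]   # reference segmentation
est = [0, 0, 1, 2, 2, 2]   # FPGA result, one pixel assigned differently
print(segmentation_agreement(ref, est))   # 5/6 ~ 0.833
```

A real comparison would first match region labels between the two results, since segment IDs are arbitrary; raw agreement is only meaningful after that step.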

  8. Design of Protein Multi-specificity Using an Independent Sequence Search Reduces the Barrier to Low Energy Sequences.

    PubMed

    Sevy, Alexander M; Jacobs, Tim M; Crowe, James E; Meiler, Jens

    2015-07-01

    Computational protein design has found great success in engineering proteins for thermodynamic stability, binding specificity, or enzymatic activity in a 'single state' design (SSD) paradigm. Multi-specificity design (MSD), on the other hand, involves considering the stability of multiple protein states simultaneously. We have developed a novel MSD algorithm, which we refer to as REstrained CONvergence in multi-specificity design (RECON). The algorithm allows each state to adopt its own sequence throughout the design process rather than enforcing a single sequence on all states. Convergence to a single sequence is encouraged through an incrementally increasing convergence restraint for corresponding positions. Compared to MSD algorithms that enforce (constrain) an identical sequence on all states the energy landscape is simplified, which accelerates the search drastically. As a result, RECON can readily be used in simulations with a flexible protein backbone. We have benchmarked RECON on two design tasks. First, we designed antibodies derived from a common germline gene against their diverse targets to assess recovery of the germline, polyspecific sequence. Second, we designed "promiscuous", polyspecific proteins against all binding partners and measured recovery of the native sequence. We show that RECON is able to efficiently recover native-like, biologically relevant sequences in this diverse set of protein complexes. PMID:26147100
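    The incrementally increasing convergence restraint can be illustrated with a toy analogue: each "state" greedily optimises its own sequence against its own target, plus a restraint that grows each round and penalises disagreement between states at corresponding positions. Energies, alphabet, and schedule here are all invented; this is not RECON's energy function.

```python
import random

ALPHABET = range(4)
TARGETS = [[0, 1, 2, 3], [0, 1, 3, 3]]   # per-state ideal sequences (toy)

def optimise(rounds=10):
    random.seed(2)
    seqs = [[random.choice(list(ALPHABET)) for _ in range(4)]
            for _ in TARGETS]
    for r in range(rounds):
        w = 0.5 * r                       # restraint ramps up over rounds
        for s, target in enumerate(TARGETS):
            other = seqs[1 - s]           # the other state's current sequence
            for i in range(4):
                # own "energy" (mismatch to target) + convergence restraint
                seqs[s][i] = min(ALPHABET, key=lambda c: (
                    (c != target[i]) + w * (c != other[i])))
    return seqs

a, b = optimise()
print(a == b)   # True: the growing restraint forces a single sequence
```

Early rounds (small w) let each state sit at its own optimum; once the restraint outweighs the per-state penalty, the disputed position collapses to a shared compromise, which is the intuition behind RECON's simplified search landscape.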

  9. An effective algorithm for the generation of patient-specific Purkinje networks in computational electrocardiology

    NASA Astrophysics Data System (ADS)

    Palamara, Simone; Vergara, Christian; Faggiano, Elena; Nobile, Fabio

    2015-02-01

The Purkinje network is responsible for the fast and coordinated distribution of the electrical impulse in the ventricle that triggers its contraction. Therefore, it is necessary to model its presence to obtain an accurate patient-specific model of the ventricular electrical activation. In this paper, we present an efficient algorithm for the generation of a patient-specific Purkinje network, driven by measures of the electrical activation acquired on the endocardium. The proposed method provides a correction of an initial network, generated by means of a fractal law, and it is based on the solution of Eikonal problems both in the muscle and in the Purkinje network. We present several numerical results both in an ideal geometry with synthetic data and in a real geometry with patient-specific clinical measures. These results highlight an improvement of the accuracy provided by the patient-specific Purkinje network with respect to the initial one. In particular, a cross-validation test shows an accuracy increase of 19% when only 3% of the total points are used to generate the network, whereas an increase of 44% is observed when a random noise equal to 20% of the maximum value of the clinical data is added to the measures.
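On a discrete network with a fixed conduction velocity, the Eikonal solution for activation times reduces to shortest-path distances from the stimulation site, which a Dijkstra-style sweep computes directly. The node names and branch lengths below are invented; this is only an analogue of the Eikonal step, not the paper's solver:

```python
import heapq

# Earliest activation times on a tree-like conduction network: the Eikonal
# equation with unit conduction velocity becomes shortest-path distance.

def activation_times(edges, source):
    graph = {}
    for a, b, length in edges:
        graph.setdefault(a, []).append((b, length))
        graph.setdefault(b, []).append((a, length))
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > times.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, length in graph[node]:
            cand = t + length
            if cand < times.get(nxt, float("inf")):
                times[nxt] = cand
                heapq.heappush(heap, (cand, nxt))
    return times

# hypothetical network: AV node -> branch point -> two Purkinje-muscle junctions
edges = [("AV", "branch1", 2.0), ("branch1", "pmj1", 1.5), ("branch1", "pmj2", 2.5)]
print(activation_times(edges, "AV"))
```

Comparing such computed times at the Purkinje-muscle junctions against endocardial activation measurements is the kind of mismatch the paper's correction step would minimize.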

  10. Design considerations for flight test of a fault inferring nonlinear detection system algorithm for avionics sensors

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.

    1986-01-01

    The modifications to the design of a fault inferring nonlinear detection system (FINDS) algorithm to accommodate flight computer constraints and the resulting impact on the algorithm performance are summarized. An overview of the flight data-driven FINDS algorithm is presented. This is followed by a brief analysis of the effects of modifications to the algorithm on program size and execution speed. Significant improvements in estimation performance for the aircraft states and normal operating sensor biases, which have resulted from improved noise design parameters and a new steady-state wind model, are documented. The aircraft state and sensor bias estimation performances of the algorithm's extended Kalman filter are presented as a function of update frequency of the piecewise constant filter gains. The results of a new detection system strategy and failure detection performance, as a function of gain update frequency, are also presented.

  11. Adaptive randomized algorithms for analysis and design of control systems under uncertain environments

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia

    2015-05-01

    We consider the general problem of analysis and design of control systems in the presence of uncertainties. We treat uncertainties that affect a control system as random variables. The performance of the system is measured by the expectation of some derived random variables, which are typically bounded. We develop adaptive sequential randomized algorithms for estimating and optimizing the expectation of such bounded random variables with guaranteed accuracy and confidence level. These algorithms can be applied to overcome the conservatism and computational complexity in the analysis and design of controllers to be used in uncertain environments. We develop methods for investigating the optimality and computational complexity of such algorithms.
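For a bounded random variable in [0, 1], a fixed-sample Monte Carlo estimator with a Hoeffding sample size already gives the kind of accuracy/confidence guarantee described. The paper develops adaptive sequential schemes; the non-adaptive sketch below only illustrates the guarantee itself:

```python
import math
import random

# Hoeffding bound: with n >= ln(2/delta) / (2 * eps^2) i.i.d. samples of a
# [0,1]-valued random variable, the empirical mean is within eps of the true
# mean with probability at least 1 - delta.

def hoeffding_samples(eps, delta):
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_mean(draw, eps, delta, rng):
    n = hoeffding_samples(eps, delta)
    return sum(draw(rng) for _ in range(n)) / n

rng = random.Random(0)
print(hoeffding_samples(0.01, 0.05))  # samples needed for eps=0.01, delta=0.05
est = estimate_mean(lambda r: r.random(), 0.01, 0.05, rng)  # true mean 0.5
print(est)
```

Adaptive variants stop sampling early once the running estimate is provably accurate enough, which is where the computational savings claimed in the abstract come from.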

  12. Highly specific protein-protein interactions, evolution and negative design.

    PubMed

    Sear, Richard P

    2004-12-01

    We consider highly specific protein-protein interactions in proteomes of simple model proteins. We are inspired by the work of Zarrinpar et al (2003 Nature 426 676). They took a binding domain in a signalling pathway in yeast and replaced it with domains of the same class but from different organisms. They found that the probability of a protein binding to a protein from the proteome of a different organism is rather high, around one half. We calculate the probability of a model protein from one proteome binding to the protein of a different proteome. These proteomes are obtained by sampling the space of functional proteomes uniformly. In agreement with Zarrinpar et al we find that the probability of a protein binding a protein from another proteome is rather high, of order one tenth. Our results, together with those of Zarrinpar et al, suggest that designing, say, a peptide to block or reconstitute a single signalling pathway, without affecting any other pathways, requires knowledge of all the partners of the class of binding domains the peptide is designed to mimic. This knowledge is required to use negative design to explicitly design out interactions of the peptide with proteins other than its target. We also found that patches that are required to bind with high specificity evolve more slowly than those that are required only to not bind to any other patch. This is consistent with some analysis of sequence data for proteins engaged in highly specific interactions.

  13. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described which extends the key-principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that pays respect to the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly less artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  14. Optimisation of the design of shell and double concentric tubes heat exchanger using the Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Baadache, Khireddine; Bougriou, Chérif

    2015-10-01

This paper presents the use of the Genetic Algorithm in the sizing of the shell and double concentric tube heat exchanger, where the objective function is the total cost, i.e. the sum of the capital cost of the device and the operating cost. The use of techno-economic methods based on optimisation methods for heat exchanger sizing allows one to obtain a device that satisfies the technical specification with the lowest possible operating and investment costs. The logarithmic mean temperature difference method was used for the calculation of the heat exchange area. The new heat exchanger is more profitable and more economic than the old heat exchanger: the total cost decreased by about 13.16 %, which represents 7,250.8 euros of the lump sum. The design modifications and the use of the Genetic Algorithm for the sizing also improve the compactness of the heat exchanger; the study showed that the heat transfer surface area per unit volume can be increased up to 340 m2/m3.
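The LMTD sizing step mentioned in the abstract is standard: the required area follows from A = Q / (U · LMTD). The duty, overall coefficient, and terminal temperature differences below are invented illustration values, not the paper's data:

```python
import math

# Logarithmic mean temperature difference (LMTD) and the resulting
# heat-transfer area A = Q / (U * LMTD) for a counter-current exchanger.

def lmtd(dt1, dt2):
    """dt1, dt2: temperature differences at the two ends of the exchanger (K)."""
    if dt1 == dt2:
        return dt1  # limit of the log-mean when both ends are equal
    return (dt1 - dt2) / math.log(dt1 / dt2)

def required_area(q_watts, u_w_per_m2k, dt1, dt2):
    return q_watts / (u_w_per_m2k * lmtd(dt1, dt2))

# hypothetical duty: 500 kW, U = 600 W/m^2.K, end differences 40 K and 20 K
print(round(lmtd(40.0, 20.0), 2))                          # 28.85 K
print(round(required_area(5.0e5, 600.0, 40.0, 20.0), 2))   # 28.88 m^2
```

In the GA loop, each candidate geometry would feed a U estimate into this calculation, and the resulting area drives the capital-cost term of the objective.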

  15. Design of Clinical Support Systems Using Integrated Genetic Algorithm and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Fu; Huang, Yung-Fa; Jiang, Xiaoyi; Hsu, Yuan-Nian; Lin, Hsuan-Hung

A clinical decision support system (CDSS) provides knowledge and specific information to clinicians to enhance diagnostic efficiency and improve healthcare quality. An appropriate CDSS can greatly improve patient safety, healthcare quality, and cost-effectiveness. The support vector machine (SVM) is believed to be superior to traditional statistical and neural network classifiers. However, it is critical to determine a suitable combination of SVM parameters with respect to classification performance. A genetic algorithm (GA) can find an optimal solution within an acceptable time, and is faster than a greedy algorithm with an exhaustive search strategy. By taking advantage of the GA in quickly selecting salient features and adjusting SVM parameters, a method using integrated GA and SVM (IGS), which differs from the traditional method of using GA for feature selection and SVM for classification, was used to design CDSSs for prediction of successful ventilation weaning, diagnosis of patients with severe obstructive sleep apnea, and discrimination of different cell types from Pap smears. The results show that IGS is better than methods using SVM alone or a linear discriminator.
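The "integrated" aspect is that one GA chromosome encodes both the feature mask and the SVM hyperparameters, so selection and tuning evolve jointly. In the sketch below the SVM is replaced by a toy fitness function (an assumption made so the example stays self-contained); the chromosome layout is the point:

```python
import random

# One chromosome = [6 feature-mask bits] + [log10(C), log10(gamma)].
# A real IGS run would score each chromosome by cross-validated SVM accuracy;
# here a toy fitness rewards 3 selected features and (log C, log gamma) near (2, -3).

N_FEATURES = 6

def fitness(chrom):
    mask, log_c, log_gamma = chrom[:N_FEATURES], chrom[N_FEATURES], chrom[N_FEATURES + 1]
    return -abs(sum(mask) - 3) - abs(log_c - 2.0) - abs(log_gamma + 3.0)

def random_chrom(rng):
    return [rng.randint(0, 1) for _ in range(N_FEATURES)] + \
           [rng.uniform(-5, 5), rng.uniform(-5, 5)]

def evolve(rng, pop_size=30, generations=60):
    pop = [random_chrom(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(a))        # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(child))         # point mutation of one gene
            child[i] = random_chrom(rng)[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(random.Random(42))
print(sum(best[:N_FEATURES]), best[N_FEATURES], best[N_FEATURES + 1])
```

Swapping the toy fitness for a cross-validated classifier score recovers the joint feature-selection-plus-tuning scheme the abstract describes.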

  16. A Pareto Optimal Design Analysis of Magnetic Thrust Bearings Using Multi-Objective Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Rao, Jagu S.; Tiwari, R.

    2015-03-01

    A Pareto optimal design analysis is carried out on the design of magnetic thrust bearings using multi-objective genetic algorithms. Two configurations of bearings have been considered with the minimization of power loss and weight of the bearing as objectives for performance comparisons. A multi-objective evolutionary algorithm is utilized to generate Pareto frontiers at different operating loads. As the load increases, the Pareto frontier reduces to a single point at a peak load for both configurations. Pareto optimal design analysis is used to study characteristics of design variables and other parameters. Three distinct operating load zones have been observed.

  17. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell has been designed and tested to deliver high capacity at a C/1.5 discharge rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet achieved at a discharge rate this high in the 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters, performance, and future test plans are described.

  18. Space tug thermal control. [design criteria and specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

It was determined that the space tug will require the capability to perform its mission within a broad range of thermal environments, with currently planned mission durations of up to seven days. An investigation was therefore conducted to define a thermal design for the forward and intertank compartments and the fuel cell heat rejection system that satisfies tug requirements for low-inclination geosynchronous deploy and retrieve missions. Passive concepts were demonstrated analytically for both the forward and intertank compartments, and a worst-case external heating environment was determined for use during the study. The thermal control system specifications and designs which resulted from the research are presented.

  19. Specific issues of the design for the elderly

    NASA Astrophysics Data System (ADS)

    Sebesi, S. B.; Groza, H. L.; Ianoşi, A.; Dimitrova, A.; Mândru, D.

    2016-08-01

Current demographic studies show that the number of elderly people is increasing constantly. Considering their motor, sensorial and cognitive constraints and restrictions, a new field of Assistive Technology is developing, focussed on the design and development of a wide range of devices, apparatus, equipment and systems dedicated to their independent and safe life. In this paper, a systematisation of existing gero-technical systems is proposed, emphasising today's trends in this field. The specific issues of designing this kind of product are identified and analysed. Finally, two of the authors' approaches are presented: wearable suits for ageing and disability simulation, and tele-monitoring of the elderly.

  20. Computational model design specification for Phase 1 of the Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.

    1991-07-01

The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions from nuclear operations at Hanford since their inception in 1944. The purpose of this report is to outline the basic algorithm and the computer calculations needed to calculate radiation doses to specific and hypothetical individuals in the vicinity of Hanford. The system design requirements, those things that must be accomplished, are defined. The system design specifications, the techniques by which those requirements are met, are outlined. Included are the basic equations, logic diagrams, and a preliminary definition of the nature of each input distribution. 4 refs., 10 figs., 9 tabs.

  1. Vision-based vehicle detection and tracking algorithm design

    NASA Astrophysics Data System (ADS)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

Vision-based detection of vehicles in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. Practical vehicle detection in a passenger car requires accurate and robust sensing performance. A multi-vehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filtering, feature detection, template matching, and epipolar constraint techniques in order to detect corresponding pairs of vehicles. After the initial detection, the system executes a tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained from the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
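Once a vehicle is matched across the stereo pair, its distance follows from standard triangulation: Z = f · B / d, with focal length f (pixels), baseline B (metres), and disparity d (pixels). The numbers below are invented; the paper's calibration values are not given:

```python
# Stereo triangulation: depth from disparity for a matched vehicle.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# hypothetical rig: f = 800 px, baseline 0.5 m, measured disparity 20 px
print(depth_from_disparity(800.0, 0.5, 20.0))  # 20.0 metres to the vehicle
```

This is how the "position parameters of the vehicles located in front" can be recovered from the detection information in a calibrated stereo system.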

  2. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  3. EvoOligo: oligonucleotide probe design with multiobjective evolutionary algorithms.

    PubMed

    Shin, Soo-Yong; Lee, In-Hee; Cho, Young-Min; Yang, Kyung-Ae; Zhang, Byoung-Tak

    2009-12-01

    Probe design is one of the most important tasks in successful deoxyribonucleic acid microarray experiments. We propose a multiobjective evolutionary optimization method for oligonucleotide probe design based on the multiobjective nature of the probe design problem. The proposed multiobjective evolutionary approach has several distinguished features, compared with previous methods. First, the evolutionary approach can find better probe sets than existing simple filtering methods with fixed threshold values. Second, the multiobjective approach can easily incorporate the user's custom criteria or change the existing criteria. Third, our approach tries to optimize the combination of probes for the given set of genes, in contrast to other tools that independently search each gene for qualifying probes. Lastly, the multiobjective optimization method provides various sets of probe combinations, among which the user can choose, depending on the target application. The proposed method is implemented as a platform called EvoOligo and is available for service on the web. We test the performance of EvoOligo by designing probe sets for 19 types of Human Papillomavirus and 52 genes in the Arabidopsis Calmodulin multigene family. The design results from EvoOligo are proven to be superior to those from well-known existing probe design tools, such as OligoArray and OligoWiz.
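The "various sets of probe combinations, among which the user can choose" are the non-dominated (Pareto) candidates under the multiple objectives. A minimal dominance filter, with invented placeholder objectives (e.g. cross-hybridization risk and melting-temperature deviation, both minimized), looks like this:

```python
# Pareto filtering of candidate probe sets scored on two minimized objectives.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (cross-hybridization risk, Tm deviation) for five hypothetical probe sets
candidates = [(0.2, 1.5), (0.4, 0.9), (0.3, 2.0), (0.9, 0.8), (0.5, 1.0)]
print(pareto_front(candidates))  # the trade-off set offered to the user
```

EvoOligo's evolutionary search maintains and refines such a front rather than collapsing the objectives into one fixed-threshold score, which is the advantage the abstract claims over simple filtering.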

  4. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: sensitivity and specificity analysis.

    PubMed

    Kapp, Eugene A; Schütz, Frédéric; Connolly, Lisa M; Chakel, John A; Meza, Jose E; Miller, Christine A; Fenyo, David; Eng, Jimmy K; Adkins, Joshua N; Omenn, Gilbert S; Simpson, Richard J

    2005-08-01

MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
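The "consensus scoring" idea, accepting only peptides reported by at least two search engines, reduces to a simple multiset intersection. The peptide identifiers below are invented placeholders:

```python
# Consensus scoring sketch: keep peptides identified by >= min_engines engines.

def consensus(ids_by_engine, min_engines=2):
    counts = {}
    for ids in ids_by_engine.values():
        for pep in set(ids):  # de-duplicate within one engine's results
            counts[pep] = counts.get(pep, 0) + 1
    return {pep for pep, n in counts.items() if n >= min_engines}

results = {
    "MASCOT":   ["PEPTIDEA", "PEPTIDEB", "PEPTIDEC"],
    "X!Tandem": ["PEPTIDEA", "PEPTIDEC", "PEPTIDED"],
    "Sonar":    ["PEPTIDEB", "PEPTIDEE"],
}
print(sorted(consensus(results)))  # peptides seen by at least two engines
```

Identifications unique to a single engine (here the hypothetical PEPTIDED and PEPTIDEE) are the ones most likely to be false positives, which is what the consensus filter discards.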

  5. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  6. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Irwin, Ryan W.; Tinker, Michael L.

    2005-02-01

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.
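The penalty-function-with-scaling-ratio approach mentioned here is a standard way to let a GA handle constraints: infeasible designs stay in the population but are scored worse in proportion to their violation. The sketch below is generic (the paper's specific constraints and ratios are not given); names and numbers are invented:

```python
# Constraint handling for a GA via a scaled penalty: the objective (here a cost
# to be minimized) is inflated in proportion to the total constraint violation,
# with the penalty scaled to the objective's own magnitude so neither term
# dominates the search.

def penalized_objective(objective, violations, scale_ratio=10.0):
    """violations: g_i(x) values, where g_i(x) <= 0 means constraint satisfied."""
    penalty = scale_ratio * abs(objective) * sum(max(0.0, v) for v in violations)
    return objective + penalty  # infeasible designs score strictly worse

feasible = penalized_objective(100.0, [-0.2, 0.0])  # all constraints satisfied
infeasible = penalized_objective(95.0, [0.3])       # cheaper but infeasible
print(feasible, infeasible)
```

Tuning the scale ratio trades exploration of near-infeasible regions against convergence to feasible designs, which is the computational-efficiency question the abstract says was investigated.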

  7. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    SciTech Connect

    Irwin, Ryan W.; Tinker, Michael L.

    2005-02-06

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.

  8. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Irwin, Ryan W.; Tinker, Michael L.

    2005-01-01

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.

  9. Precise Specification of Design Pattern Structure and Behaviour

    NASA Astrophysics Data System (ADS)

    Sterritt, Ashley; Clarke, Siobhán; Cahill, Vinny

    Applying design patterns while developing a software system can improve its non-functional properties, such as extensibility and loose coupling. Precise specification of structure and behaviour communicates the invariants imposed by a pattern on a conforming implementation and enables formal software verification. Many existing design-pattern specification languages (DPSLs) focus on class structure alone, while those that do address behaviour suffer from a lack of expressiveness and/or imprecise semantics. In particular, in a review of existing work, three invariant categories were found to be inexpressible in state-of-the-art DPSLs: dependency, object state and data-structure. This paper presents Alas: a precise specification language that supports design-pattern descriptions including these invariant categories. The language is based on UML Class and Sequence diagrams with modified syntax and semantics. In this paper, the meaning of the presented invariants is formalized and relevant ambiguities in the UML Standard are clarified. We have evaluated Alas by specifying the widely-used Gang of Four pattern catalog and identified patterns that benefitted from the added expressiveness and semantics of Alas.

  10. Novel designed magnetic leakage testing sensor with GMR for image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Sasamoto, Akira; Suzuki, Takayuki

    2012-04-01

The authors developed, a few years ago, an image reconstruction algorithm that can accurately reconstruct images of flaws from data obtained using conventional ECT sensors. The reconstruction algorithm is designed for data assumed to be obtained under a spatially uniform magnetic field on the target surface. The conventional ECT sensor the authors used, on the other hand, is designed in such a manner that the strength of the magnetic field imposed on the target surface is maximized. This violation of the assumption ruins the algorithm's simplicity, because it requires complemental response functions, called "LSF", for long line flaws, which are not part of the original algorithm design. In order to obtain experimental results proving the validity of the original algorithm with only one response function, the authors developed, last year, a prototype sensor for magnetic flux leakage testing that satisfies the requirements of the original algorithm. The developed sensor comprises a GMR magnetic field sensor to detect a static magnetic field and two magnets adjacent to the GMR sensor to magnetize the target specimen. However, the data obtained had insufficient accuracy due to the weakness of the magnet. The authors therefore redesigned the sensor this year with a much stronger magnet. Data obtained with this new sensor show that the algorithm is likely to work well with only one response function for this type of probe.

  11. Quantitative Comparison of Minimum Inductance and Minimum Power Algorithms for the Design of Shim Coils for Small Animal Imaging.

    PubMed

    Hudson, Parisa; Hudson, Stephen D; Handler, William B; Scholl, Timothy J; Chronik, Blaine A

    2010-04-01

High-performance shim coils are required for high-field magnetic resonance imaging and spectroscopy. Complete sets of high-power and high-performance shim coils were designed using two different methods: the minimum inductance and the minimum power target field methods. A quantitative comparison of shim performance in terms of merit of inductance (ML) and merit of resistance (MR) was made for shim coils designed using the minimum inductance and the minimum power design algorithms. In each design case, the difference in ML and the difference in MR given by the two design methods was <15%. Comparison of wire patterns obtained using the two design algorithms shows that minimum inductance designs tend to feature oscillations within the current density, while minimum power designs tend to feature less rapidly varying current densities and lower power dissipation. Overall, the differences in coil performance obtained by the two methods are relatively small. For the specific case of shim systems customized for small animal imaging, the reduced power dissipation obtained when using the minimum power method is judged to be more significant than the improvements in switching speed obtained from the minimum inductance method.

  12. A Computer Environment for Beginners' Learning of Sorting Algorithms: Design and Pilot Evaluation

    ERIC Educational Resources Information Center

    Kordaki, M.; Miatidis, M.; Kapsampelis, G.

    2008-01-01

    This paper presents the design, features and pilot evaluation study of a web-based environment--the SORTING environment--for the learning of sorting algorithms by secondary level education students. The design of this environment is based on modeling methodology, taking into account modern constructivist and social theories of learning while at…

  13. Sensitivity of snow density and specific surface area measured by microtomography to different image processing algorithms

    NASA Astrophysics Data System (ADS)

    Hagenmuller, Pascal; Matzl, Margret; Chambon, Guillaume; Schneebeli, Martin

    2016-05-01

Microtomography can measure the X-ray attenuation coefficient in a 3-D volume of snow with a spatial resolution of a few microns. In order to extract quantitative characteristics of the microstructure, such as the specific surface area (SSA), from these data, the greyscale image first needs to be segmented into a binary image of ice and air. Different numerical algorithms can then be used to compute the surface area of the binary image. In this paper, we report on the effect of commonly used segmentation and surface area computation techniques on the evaluation of density and specific surface area. The evaluation is based on a set of 38 X-ray tomographies of different snow samples without impregnation, scanned with an effective voxel size of 10 and 18 μm. We found that different surface area computation methods can induce relative variations of up to 5 % in the density and SSA values. Regarding segmentation, similar results were obtained by sequential and energy-based approaches, provided the associated parameters were correctly chosen. The voxel size also appears to affect the values of density and SSA, but because the higher-resolution images also show a higher noise level, it was not possible to draw a definitive conclusion on this effect of resolution.
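One of the simplest surface area estimators on a binary voxel image is exposed-face counting; it is known to overestimate relative to smoother estimators such as marching cubes, which is one source of the few-percent spread the abstract reports. The sketch below uses a toy volume, not the paper's data:

```python
# Ice volume fraction and surface area of a binary (1 = ice, 0 = air) voxel
# image, estimated by counting exposed voxel faces.

def density_and_surface(volume, voxel_size):
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    ice = faces = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not volume[z][y][x]:
                    continue
                ice += 1
                for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                                   (0,-1,0), (0,0,1), (0,0,-1)):
                    zz, yy, xx = z + dz, y + dy, x + dx
                    inside = 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx
                    if not inside or not volume[zz][yy][xx]:
                        faces += 1  # this face borders air (or the image edge)
    volume_fraction = ice / (nz * ny * nx)
    surface_area = faces * voxel_size ** 2
    return volume_fraction, surface_area

# a single 2x2x2 ice cube inside a 4x4x4 image, 10 micron voxels
img = [[[1 if z < 2 and y < 2 and x < 2 else 0 for x in range(4)]
        for y in range(4)] for z in range(4)]
frac, area = density_and_surface(img, 10e-6)
print(frac, area)
```

Dividing the surface area by the ice mass (density times ice volume) would give the SSA in the usual m²/kg units; the estimator choice enters through the `faces` count.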

  14. On Polymorphic Circuits and Their Design Using Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)

    2002-01-01

This paper introduces the concept of polymorphic electronics (polytronics), referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead, the change comes from modifications in the characteristics of devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD, respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality; thus polytronics may find uses in intelligence/security applications.

  15. Longitudinal Algorithms to Estimate Cardiorespiratory Fitness: Associations with Nonfatal Cardiovascular Disease and Disease-Specific Mortality

    PubMed Central

    Artero, Enrique G.; Jackson, Andrew S.; Sui, Xuemei; Lee, Duck-chul; O’Connor, Daniel P.; Lavie, Carl J.; Church, Timothy S.; Blair, Steven N.

    2014-01-01

    Objective To predict risk for non-fatal cardiovascular disease (CVD) and disease-specific mortality using CRF algorithms that do not involve exercise testing. Background Cardiorespiratory fitness (CRF) is not routinely measured, as it requires trained personnel and specialized equipment. Methods Participants were 43,356 adults (21% women) from the Aerobics Center Longitudinal Study followed between 1974 and 2003. Estimated CRF was based on sex, age, body mass index, waist circumference, resting heart rate, physical activity level and smoking status. Actual CRF was measured by a maximal treadmill test. Results During a median follow-up of 14.5 years, 1,934 deaths occurred, 627 due to CVD. In a sub-sample of 18,095 participants, 1,049 cases of non-fatal CVD events were ascertained. After adjusting for potential confounders, both measured CRF and estimated CRF were inversely associated with risk of all-cause mortality, CVD mortality and non-fatal CVD incidence in men, and with all-cause mortality and non-fatal CVD in women. The risk reduction per 1-metabolic equivalent (MET) increase ranged approximately from 10 to 20 %. Measured CRF had a slightly better discriminative ability (c-statistic) than estimated CRF, and the net reclassification improvement (NRI) of measured CRF vs. estimated CRF was 12.3% in men (p<0.05) and 19.8% in women (p<0.001). Conclusions These algorithms utilize information routinely collected to obtain an estimate of CRF that provides a valid indication of health status. In addition to identifying people at risk, this method can provide more appropriate exercise recommendations that reflect initial CRF levels. PMID:24703924
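
    The abstract does not report the model's coefficients; the sketch below only illustrates the form of such a non-exercise CRF estimate, and every coefficient in it is a hypothetical placeholder, not the published algorithm.

```python
def estimate_crf_mets(sex_male, age, bmi, waist_cm, resting_hr, activity_index, smoker):
    """Illustrative non-exercise CRF estimate in METs, combining the same
    predictors the study used. All coefficients are hypothetical placeholders."""
    crf = 18.0                       # hypothetical intercept
    crf += 2.0 if sex_male else 0.0  # men tend to have higher CRF
    crf -= 0.10 * age
    crf -= 0.15 * (bmi - 25.0)
    crf -= 0.02 * (waist_cm - 90.0)
    crf -= 0.03 * (resting_hr - 60.0)
    crf += 0.5 * activity_index      # self-reported activity level, e.g. 0-5
    crf -= 1.0 if smoker else 0.0
    return crf
```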

  16. Flexible Mx specification of various extended twin kinship designs.

    PubMed

    Maes, Hermine H; Neale, Michael C; Medland, Sarah E; Keller, Matthew C; Martin, Nicholas G; Heath, Andrew C; Eaves, Lindon J

    2009-02-01

    The extended twin kinship design allows the simultaneous testing of additive and nonadditive genetic, shared and individual-specific environmental factors, as well as sex differences in the expression of genes and environment in the presence of assortative mating and combined genetic and cultural transmission (Eaves et al., 1999). It also handles the contribution of these sources of variance to the (co)variation of multiple phenotypes. Keller et al. (2008) extended this comprehensive model for family resemblance to allow for a flexible specification of assortment and vertical transmission. As such, it provides a general framework which can easily be reduced to fit subsets of data such as twin-parent data, children-of-twins data, etc. A flexible Mx specification of this model that allows handling of these various designs is presented in detail and applied to data from the Virginia 30,000. Data on height, body mass index, smoking status, church attendance, and political affiliation were obtained from twins and their families. Results indicate that biases in the estimation of variance components depend both on the types of relatives available for analysis, and on the underlying genetic and environmental architecture of the phenotype of interest.

  17. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    Chang, Wei-Der

    2015-01-01

    This paper develops a new design scheme for the phase response of recursive all-pass digital filters. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity-updating formula in order to improve the searching ability. In the proposed method, the designed filter coefficients are first collected into a parameter vector, which is regarded as a particle of the algorithm. The MPSO with the modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the results are compared with those of the standard PSO algorithm. The obtained results show that the MPSO is superior to the standard PSO for the phase response design of recursive all-pass digital filters. PMID:26366168
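
    The standard PSO velocity update that the MPSO builds on can be sketched as follows; this is a generic PSO minimizing a stand-in objective, and the paper's additional adjusting factor is not reproduced here.

```python
import random

def pso(objective, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal standard PSO sketch. Chang's MPSO adds a further adjusting
    factor to this velocity update, whose exact form is not reproduced."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal bests
    gbest = min(pbest, key=objective)          # global best
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if objective(x) < objective(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=objective)
    return gbest

# Usage: minimize the sphere function as a stand-in for a phase-error objective.
best = pso(lambda x: sum(t * t for t in x), dim=2)
```

    In the paper, each particle instead encodes the all-pass filter coefficients, and the objective measures the deviation of the filter's phase response from the desired one.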

  18. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  19. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
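
    A generic sketch of the approach, assuming a toy 4-point-scale NDSM and a simple size-penalized cluster fitness (the paper's actual fitness function and genetic operators are not reproduced):

```python
import random

adjectives = ["sporty", "agile", "elegant", "refined"]  # illustrative Kansei words
# Symmetric NDSM: 4-point-scale link weights (0..3) between adjective pairs.
ndsm = [[0, 3, 1, 0],
        [3, 0, 0, 1],
        [1, 0, 0, 3],
        [0, 1, 3, 0]]

def fitness(assign):
    """Intra-cluster link weight minus a size penalty (assumed form)."""
    intra = sum(ndsm[i][j]
                for i in range(len(assign)) for j in range(i + 1, len(assign))
                if assign[i] == assign[j])
    sizes = {}
    for c in assign:
        sizes[c] = sizes.get(c, 0) + 1
    return intra - 0.5 * sum(s * s for s in sizes.values())

def ga_cluster(n_items, n_clusters, pop=30, gens=100, p_mut=0.2):
    """Evolve cluster assignments (one gene per adjective)."""
    popn = [[random.randrange(n_clusters) for _ in range(n_items)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        survivors = popn[:pop // 2]            # elitist survival
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_items)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:         # mutation
                child[random.randrange(n_items)] = random.randrange(n_clusters)
            children.append(child)
        popn = survivors + children
    return max(popn, key=fitness)

best = ga_cluster(len(adjectives), 2)
```

    With this toy NDSM the optimum groups "sporty"/"agile" and "elegant"/"refined", mirroring how strongly linked adjectives end up in the same Kansei cluster.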

  1. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  3. Field Programmable Gate Array Based Parallel Strapdown Algorithm Design for Strapdown Inertial Navigation Systems

    PubMed Central

    Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua

    2011-01-01

    A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure in which all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental-angle samples and accelerometer incremental-velocity samples. When the output sampling rate of the inertial sensors remains constant, the algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental-angle and accelerometer incremental-velocity samples, improving system accuracy. Then, in order to implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. It is shown that this parallel strapdown algorithm on the FPGA platform greatly decreases the execution time of the algorithm relative to the existing DSP implementation, meeting the real-time and high-precision requirements of the system in a highly dynamic environment. PMID:22164058

  4. A matching algorithm for catalytic residue site selection in computational enzyme design.

    PubMed

    Lei, Yulin; Luo, Wenjia; Zhu, Yushan

    2011-09-01

    A loop closure-based sequential algorithm, PRODA_MATCH, was developed to match catalytic residues onto a scaffold for enzyme design in silico. The computational complexity of this algorithm is polynomial with respect to the number of active sites, the number of catalytic residues, and the maximal iteration number of cyclic coordinate descent steps. This matching algorithm is independent of a rotamer library that enables the catalytic residue to take any required conformation during the reaction coordinate. The catalytic geometric parameters defined between functional groups of transition state (TS) and the catalytic residues are continuously optimized to identify the accurate position of the TS. Pseudo-spheres are introduced for surrounding residues, which make the algorithm take binding into account as early as during the matching process. Recapitulation of native catalytic residue sites was used as a benchmark to evaluate the novel algorithm. The calculation results for the test set show that the native catalytic residue sites were successfully identified and ranked within the top 10 designs for 7 of the 10 chemical reactions. This indicates that the matching algorithm has the potential to be used for designing industrial enzymes for desired reactions.

  5. Algorithm Design on Network Game of Chinese Chess

    NASA Astrophysics Data System (ADS)

    Xianmei, Fang

    This paper describes the current state of domestic network games and studies the design of a networked Chinese chess game built on a multithreaded TCP client/server architecture. Drawing on basic Java knowledge, it examines object-oriented programming with Java Swing and methods of network programming, including the use of sockets under Java Swing, covering the basic process of writing a Java program and how to write a networked one. The central question is how the communication between a pair of machines, the client/server (C/S) architecture, is carried out. On this basis, the paper presents the data structures and basic algorithms of the network Chinese chess game, and how to design and implement its server and client. The online chess game is divided into the following modules: a server module, a client module, and a control module.
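
    The client/server exchange this abstract centers on can be sketched in a few lines; the sketch below uses Python's socket module rather than the paper's Java, and the move format "C2-C5" and port number are made up for illustration.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5005  # illustrative values
ready = threading.Event()

def server():
    """Minimal move-relay server: accept one client, acknowledge its move."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # let the client know we are listening
        conn, _ = srv.accept()
        with conn:
            move = conn.recv(1024).decode()
            conn.sendall(("OK " + move).encode())

t = threading.Thread(target=server, daemon=True)
t.start()
ready.wait()

# Client side: send one move in a simple text protocol.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"C2-C5")
    reply = cli.recv(1024).decode()
t.join()
```

    A real game client would run the receive loop on its own thread so the Swing (or other GUI) event loop stays responsive, which is why the paper pairs multithreading with the C/S design.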

  6. The GLAS Science Algorithm Software (GSAS) Detailed Design Document Version 6. Volume 16

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document describes the detailed design of the GLAS Science Algorithm Software (GSAS). The GSAS is used to create the ICESat GLAS standard data products, which the National Snow and Ice Data Center (NSIDC) distributes. The document contains descriptions, flow charts, data flow diagrams, and structure charts for each major component of the GSAS. The purpose of this document is to present the detailed design of the GSAS. It is intended as a reference source to assist the maintenance programmer in making changes that fix or enhance the documented software.

  7. Rational Design of Antirheumatic Prodrugs Specific for Sites of Inflammation

    PubMed Central

    Onuoha, Shimobi C.; Ferrari, Mathieu; Sblattero, Daniele

    2015-01-01

    Objective Biologic drugs, such as the anti–tumor necrosis factor (anti‐TNF) antibody adalimumab, have represented a breakthrough in the treatment of rheumatoid arthritis. Yet, concerns remain over their lack of efficacy in a sizable proportion of patients and their potential for systemic side effects such as infection. Improved biologic prodrugs specifically targeted to the site of inflammation have the potential to alleviate current concerns surrounding biologic anticytokine therapies. The purpose of this study was to design, construct, and evaluate in vitro and ex vivo the targeting and antiinflammatory capacity of activatable bispecific antibodies. Methods Activatable dual variable domain (aDVD) antibodies were designed and constructed to target intercellular adhesion molecule 1 (ICAM‐1), which is up‐regulated at sites of inflammation, and anti‐TNF antibodies (adalimumab and infliximab). These bispecific molecules included an external arm that targets ICAM‐1 and an internal arm that comprises the therapeutic domain of an anti‐TNF antibody. Both arms were linked to matrix metalloproteinase (MMP)–cleavable linkers. The constructs were tested for their ability to bind and neutralize both in vitro and ex vivo targets. Results Intact aDVD constructs demonstrated significantly reduced binding and anti‐TNF activity in the prodrug formulation as compared to the parent antibodies. Human synovial fluid and physiologic concentrations of MMP enzyme were capable of cleaving the external domain of the antibody, revealing a fully active molecule. Activated antibodies retained the same binding and anti‐TNF inhibitory capacities as the parent molecules. Conclusion The design of a biologic prodrug with enhanced specificity for sites of inflammation (synovium) and reduced specificity for off‐target TNF is described. This construct has the potential to form a platform technology that is capable of enhancing the therapeutic index of drugs for the treatment of

  8. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 AH the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling and its successful recovery by means of additional activation procedures.

  9. Advanced Wet Tantalum Capacitors: Design, Specifications and Performance

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2016-01-01

    Insertion of new types of commercial, high-volumetric-efficiency wet tantalum capacitors in space systems requires reassessment of the existing quality assurance approaches that were developed for capacitors manufactured to MIL-PRF-39006 requirements. A specific feature of wet electrolytic capacitors is that leakage currents flowing through the electrolyte can cause gas generation, resulting in the build-up of internal gas pressure and rupture of the case. The risk associated with excessive leakage currents and increased pressure is greater for high-value advanced wet tantalum capacitors, but it has not yet been properly evaluated. This presentation reviews the specifics of the design, performance, and potential reliability risks associated with advanced wet tantalum capacitors. Problems related to setting adequate requirements for DPA, leakage currents, hermeticity, stability at low and high temperatures, ripple currents for parts operating in vacuum, and random vibration testing are discussed. Recommendations for screening and qualification to reduce the risk of failures are suggested.

  10. Design and Evaluation of Tumor-Specific Dendrimer Epigenetic Therapeutics

    PubMed Central

    Zong, Hong; Shah, Dhavan; Selwa, Katherine; Tsuchida, Ryan E; Rattan, Rahul; Mohan, Jay; Stein, Adam B; Otis, James B; Goonewardena, Sascha N

    2015-01-01

    Histone deacetylase inhibitors (HDACi) are promising therapeutics for cancer. HDACi alter the epigenetic state of tumors and provide a unique approach to treat cancer. Although studies with HDACi have shown promise in some cancers, variable efficacy and off-target effects have limited their use. To overcome some of the challenges of traditional HDACi, we sought to use a tumor-specific dendrimer scaffold to deliver HDACi directly to cancer cells. Here we report the design and evaluation of tumor-specific dendrimer–HDACi conjugates. The HDACi was conjugated to the dendrimer using an ester linkage through its hydroxamic acid group, inactivating the HDACi until it is released from the dendrimer. Using a cancer cell model, we demonstrate the functionality of the tumor-specific dendrimer–HDACi conjugates. Furthermore, we demonstrate that unlike traditional HDACi, dendrimer–HDACi conjugates do not affect tumor-associated macrophages, a recently recognized mechanism through which drug resistance emerges. We anticipate that this new class of cell-specific epigenetic therapeutics will have tremendous potential in the treatment of cancer. PMID:26246996

  11. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable the companies involved in spreading this technology to open the door to Web 3.0. The new categories of services being introduced will gradually replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services are provided. The paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale, expensive software such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  12. Tissue-specific designs of stem cell hierarchies.

    PubMed

    Visvader, Jane E; Clevers, Hans

    2016-04-01

    Recent work in the field of stem cell biology suggests that there is no single design for an adult tissue stem cell hierarchy, and that different tissues employ distinct strategies to meet their self-renewal and repair requirements. Stem cells may be multipotent or unipotent, and can exist in quiescent or actively dividing states. 'Professional' stem cells may also co-exist with facultative stem cells, which are more specialized daughter cells that revert to a stem cell state under specific tissue damage conditions. Here, we discuss stem cell strategies as seen in three solid mammalian tissues: the intestine, mammary gland and skeletal muscle. PMID:26999737

  13. Modular Integrated Stackable Layers (MISL) 1.1 Design Specification. Design Guideline Document

    NASA Technical Reports Server (NTRS)

    Yim, Hester J.

    2012-01-01

    This document establishes the design guidelines of the Modular Instrumentation Data Acquisition (MI-DAQ) system, drawing on several designs available in EV. The MI-DAQ provides options to customers depending on their system requirements, e.g., a 28 V interface power supply, a low-power battery-operated system, a low-power microcontroller, a higher-performance microcontroller, a USB interface, an Ethernet interface, wireless communication, various sensor interfaces, etc. Depending on the customer's requirements, functional boards can be stacked from the power supply at the bottom to user-interface boards at higher levels of the stack. The stack-up of boards is accomplished through predefined, standardized power-bus and data-bus connections, which are specified in this document along with other physical and electrical guidelines. The guideline also provides information for new design options. This specification is the product of a collaboration between NASA/JSC/EV and Texas A&M University. The goal of the collaboration is to open-source the specification and allow outside entities to design, build, and market modules that are compatible with it. NASA has designed and is using numerous modules that are compatible with this specification. A limited number of these modules will also be released as open-source designs to support the collaboration. The released designs are listed in the Applicable Documents.

  14. Design and Implementation of an On-Chip Patient-Specific Closed-Loop Seizure Onset and Termination Detection System.

    PubMed

    Zhang, Chen; Bin Altaf, Muhammad Awais; Yoo, Jerald

    2016-07-01

    This paper presents the design of an area- and energy-efficient closed-loop machine-learning-based patient-specific seizure onset and termination detection algorithm, and its on-chip hardware implementation. Application- and scenario-based tradeoffs are compared and reviewed for the seizure detection and suppression algorithm and system, which comprises electroencephalography (EEG) data acquisition, feature extraction, classification, and stimulation. The support vector machine achieves a good tradeoff among power, area, patient specificity, latency, and classification accuracy for long-term monitoring of patients with limited training seizure patterns. Design challenges of EEG data acquisition in a multichannel wearable environment for a patch-type sensor are also discussed in detail. A dual-detector architecture incorporates two area-efficient linear support vector machine classifiers along with a weight-and-average algorithm to target high sensitivity and good specificity at once. On-chip implementation issues for patient-specific transcranial electrical stimulation are also discussed. The system design is verified using the CHB-MIT EEG database [1] with comprehensive measurement criteria, achieving high sensitivity and specificity of 95.1% and 96.2%, respectively, with a small latency of 1 s. It also achieves seizure onset and termination detection delays of 2.98 and 3.82 s, respectively, with a seizure length estimation error of 4.07 s. PMID:27093712
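
    The decision rule of a dual linear-SVM detector with a weight-and-average combiner can be sketched as follows; the feature values, weights and combining coefficient below are made-up illustrations, not the trained on-chip model.

```python
def linear_svm_decision(weights, bias, features):
    """Decision value of a linear SVM: w.x + b."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def dual_detector(features, w1, b1, w2, b2, alpha=0.5, thresh=0.0):
    """Weight-and-average of two linear detectors (illustrative combiner;
    the chip's actual combining rule is not reproduced here)."""
    d = alpha * linear_svm_decision(w1, b1, features) + \
        (1 - alpha) * linear_svm_decision(w2, b2, features)
    return d > thresh  # True = seizure detected

# Hypothetical spectral-energy features from one EEG window.
x = [0.8, 1.2, 0.3]
flag = dual_detector(x, w1=[1.0, 0.5, -0.2], b1=-1.0, w2=[0.7, 0.9, 0.1], b2=-1.2)
```

    Linear classifiers like this map well to hardware because the per-window cost is a handful of multiply-accumulates, which is what makes the area and energy budgets of a wearable patch feasible.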

  15. Evolutionary algorithm for the neutrino factory front end design

    SciTech Connect

    Poklonskiy, Alexey A.; Neuffer, David; /Fermilab

    2009-01-01

    The Neutrino Factory is an important tool in the long-term neutrino physics program. Substantial international effort is being put into designing this facility in order to achieve the desired performance within the allotted budget. This accelerator is a secondary-beam machine: neutrinos are produced by the decay of muons. Muons, in turn, are produced by the decay of pions, which are created when a beam of accelerated protons strikes the target. Due to the physics of this process, extra conditioning of the pion beam coming from the target is needed for effective subsequent acceleration. The subsystem of the Neutrino Factory that performs this conditioning is called the Front End; its main performance characteristic is the number of muons produced.

  16. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
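
    A classic instance of ABFT, on which this line of analysis builds, is checksum-encoded matrix multiplication (Huang and Abraham): the input matrices are augmented with checksum rows/columns, and the product inherits checksum relations that detect transient errors. A minimal sketch:

```python
def matmul(A, B):
    """Plain matrix product of nested-list matrices."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def checksum_matmul(A, B):
    """Multiply a column-checksum A by a row-checksum B."""
    Ac = A + [[sum(col) for col in zip(*A)]]   # append column sums to A
    Br = [row + [sum(row)] for row in B]       # append row sums to B
    return matmul(Ac, Br)

def verify(C):
    """Check that the last row/column of C still hold the column/row sums."""
    n, p = len(C) - 1, len(C[0]) - 1
    col_ok = all(abs(C[n][j] - sum(C[i][j] for i in range(n))) < 1e-9
                 for j in range(p + 1))
    row_ok = all(abs(C[i][p] - sum(C[i][j] for j in range(p))) < 1e-9
                 for i in range(n + 1))
    return col_ok and row_ok

C = checksum_matmul([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]])
assert verify(C)          # fault-free result passes the checks
C[0][0] += 1.0            # inject a transient error...
assert not verify(C)      # ...which the checksums detect
```

    The matrix-based ABFT model described in the abstract generalizes this idea, analyzing which processors and checks are needed so that such concurrent error detection covers a whole multiprocessor computation.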

  17. Nuclear Electric Vehicle Optimization Toolset (NEVOT): Integrated System Design Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg

    2003-01-01

    The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
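
    The survival-of-the-fittest loop the abstract describes can be sketched generically (this is not NEVOT itself; the toy design encoding and fitness are assumptions):

```python
import random

def evolve(population, fitness, generations=50, p_mut=0.1):
    """Generic GA loop: low-fitness designs are eliminated and replaced
    with combinations or mutations of higher-fitness designs."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:len(scored) // 2]       # survivors
        children = []
        while len(parents) + len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]             # combine subsystem "genes"
            if random.random() < p_mut:
                i = random.randrange(len(child))
                child[i] = random.uniform(0, 1)   # mutate one design variable
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy "vehicle": 5 normalized design variables; fitness rewards values near 1.
pop = [[random.uniform(0, 1) for _ in range(5)] for _ in range(20)]
best = evolve(pop, fitness=lambda d: sum(d))
```

    In NEVOT the genes stand for subsystem design choices and the fitness function evaluates the integrated vehicle against the mission, so the same loop can explore design combinations a human designer might never consider.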

  18. Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft

    PubMed Central

    Ning, Xin; Yuan, Jianping; Yue, Xiaokui

    2016-01-01

    A fractionated spacecraft is an innovative application of a distributed space system. To fully understand the impact of various uncertainties on its development, launch and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe the concept, review the evaluation and optimal design methods developed in recent years, and propose the stochastic mission-cycle cost for comprehensive evaluation. We also establish models of the costs, such as module development, launch and deployment, and of the impacts of their respective uncertainties. Finally, we carry out a Monte Carlo simulation of the complete mission-cycle costs of various configurations of the fractionated spacecraft under various uncertainties, and give and compare the probability density distribution and statistical characteristics of the stochastic mission-cycle cost under two strategies: timed module replacement and non-timed module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that it can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions. PMID:26964755
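
    The Monte Carlo evaluation described above can be sketched as follows; all cost figures, distributions and failure probabilities below are purely hypothetical placeholders, not the paper's models.

```python
import random
import statistics

def mission_cycle_cost(rng):
    """One Monte Carlo draw of an illustrative mission-cycle cost:
    module development + launch + module-replacement costs under uncertainty."""
    development = rng.gauss(100.0, 10.0)                  # M$, hypothetical
    launch = rng.gauss(50.0, 5.0)                         # M$, hypothetical
    failures = sum(rng.random() < 0.1 for _ in range(4))  # of 4 modules
    replacement = failures * 30.0                         # replace failed modules
    return development + launch + replacement

rng = random.Random(42)
samples = [mission_cycle_cost(rng) for _ in range(10000)]
mean_cost = statistics.mean(samples)
std_cost = statistics.stdev(samples)
```

    Repeating this for each candidate module division and replacement strategy yields the cost distributions whose statistics the paper compares across configurations.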

  19. Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft.

    PubMed

    Ning, Xin; Yuan, Jianping; Yue, Xiaokui

    2016-01-01

    A fractionated spacecraft is an innovative application of a distributed space system. To fully understand the impact of various uncertainties on its development, launch and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe the concept, analyze the evaluation and optimal design methods developed in recent years, and propose the stochastic mission-cycle cost for comprehensive evaluation. We also establish models of the costs of module development, launch and deployment, together with the impacts of their respective uncertainties. Finally, we carry out Monte Carlo simulations of the complete mission-cycle costs of various configurations of the fractionated spacecraft under various uncertainties, and present and compare the probability density distribution and statistical characteristics of the stochastic mission-cycle cost under the two strategies of timing module replacement and non-timing module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that it can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions. PMID:26964755

  20. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  1. A drug-specific nanocarrier design for efficient anticancer therapy

    NASA Astrophysics Data System (ADS)

    Shi, Changying; Guo, Dandan; Xiao, Kai; Wang, Xu; Wang, Lili; Luo, Juntao

    2015-07-01

    The drug-loading properties of nanocarriers depend on the chemical structures and properties of their building blocks. Here we customize telodendrimers (linear-dendritic copolymers) to design a nanocarrier with improved in vivo drug delivery characteristics. We do a virtual screen of a library of small molecules to identify the optimal building blocks for precise telodendrimer synthesis using peptide chemistry. With rationally designed telodendrimer architectures, we then optimize the drug-binding affinity of a nanocarrier by introducing an optimal drug-binding molecule (DBM) without sacrificing the stability of the nanocarrier. To validate the computational predictions, we synthesize a series of nanocarriers and evaluate them systematically for doxorubicin delivery. Rhein-containing nanocarriers have sustained drug release, prolonged circulation, increased tolerated dose, reduced toxicity, effective tumour targeting and superior anticancer effects owing to favourable doxorubicin-binding affinity and improved nanoparticle stability. This study demonstrates the feasibility and versatility of the de novo design of telodendrimer nanocarriers for specific drug molecules, which is a promising approach to transform nanocarrier development for drug delivery.

  2. A drug-specific nanocarrier design for efficient anticancer therapy

    PubMed Central

    Shi, Changying; Guo, Dandan; Xiao, Kai; Wang, Xu; Wang, Lili; Luo, Juntao

    2015-01-01

    The drug-loading properties of nanocarriers depend on the chemical structures and properties of their building blocks. Here, we customize telodendrimers (linear-dendritic copolymers) to design a nanocarrier with improved in vivo drug delivery characteristics. We do a virtual screen of a library of small molecules to identify the optimal building blocks for precise telodendrimer synthesis using peptide chemistry. With rationally designed telodendrimer architectures, we then optimize the drug-binding affinity of a nanocarrier by introducing an optimal drug-binding molecule (DBM) without sacrificing the stability of the nanocarrier. To validate the computational predictions, we synthesize a series of nanocarriers and evaluate them systematically for doxorubicin delivery. Rhein-containing nanocarriers have sustained drug release, prolonged circulation, increased tolerated dose, reduced toxicity, effective tumor targeting and superior anticancer effects owing to favourable doxorubicin-binding affinity and improved nanoparticle stability. This study demonstrates the feasibility and versatility of the de novo design of telodendrimer nanocarriers for specific drug molecules, which is a promising approach to transform nanocarrier development for drug delivery. PMID:26158623

  3. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.

    PubMed

    Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente

    2015-08-10

    Recently, the cross-layer design of wireless sensor network communication protocols has become increasingly important and popular. Considering the disadvantages of traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion: to obtain a balanced solution, a parameter whose dispersion is large receives a small weight, and vice versa. To compare it with traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and achieves the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm adapts effectively to dynamic changes in network conditions and topology.
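The inverse relation between a parameter's dispersion and its weight can be sketched as follows. The normalised inverse-dispersion rule used here is an illustrative stand-in for BCFL's fuzzy inference system, whose exact rules the abstract does not specify.

```python
import statistics

def dispersion_weights(params):
    """Give each cross-layer parameter a dynamic weight that shrinks as its
    dispersion (here: population standard deviation across the candidate
    relay nodes) grows, then normalise the weights to sum to 1."""
    eps = 1e-9  # avoid division by zero for constant parameters
    inv = {name: 1.0 / (statistics.pstdev(vals) + eps)
           for name, vals in params.items()}
    total = sum(inv.values())
    return {name: v / total for name, v in inv.items()}

# Residual energy varies widely across the neighbours while hop count is
# constant, so hop count receives (almost all of) the weight.
w = dispersion_weights({
    "residual_energy": [0.9, 0.2, 0.7, 0.1],
    "hop_count": [3, 3, 3, 3],
})
```

A next-hop score would then be the weighted sum of each neighbour's (normalised) parameter values under these dynamic weights.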

  4. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.

    PubMed

    Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente

    2015-01-01

    Recently, the cross-layer design of wireless sensor network communication protocols has become increasingly important and popular. Considering the disadvantages of traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion: to obtain a balanced solution, a parameter whose dispersion is large receives a small weight, and vice versa. To compare it with traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and achieves the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm adapts effectively to dynamic changes in network conditions and topology. PMID:26266412

  5. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic

    PubMed Central

    Li, Ning; Martínez, José-Fernán; Díaz, Vicente Hernández

    2015-01-01

    Recently, the cross-layer design of wireless sensor network communication protocols has become increasingly important and popular. Considering the disadvantages of traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters’ dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion: to obtain a balanced solution, a parameter whose dispersion is large receives a small weight, and vice versa. To compare it with traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and achieves the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm adapts effectively to dynamic changes in network conditions and topology. PMID:26266412

  6. Validation of space/ground antenna control algorithms using a computer-aided design tool

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1995-01-01

    The validation of the algorithms for controlling the space-to-ground antenna subsystem for Space Station Alpha is an important step in assuring reliable communications. These algorithms have been developed and tested using a simulation environment based on a computer-aided design tool that can provide a time-based execution framework with variable environmental parameters. Our work this summer has involved the exploration of this environment and the documentation of the procedures used to validate these algorithms. We have installed a variety of tools in a laboratory of the Tracking and Communications division for reproducing the simulation experiments carried out on these algorithms to verify that they do meet their requirements for controlling the antenna systems. In this report, we describe the processes used in these simulations and our work in validating the tests used.

  7. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state-of-the-art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
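The core idea of defect-weighted sampling, mutating positions in proportion to their contribution to the ensemble defect, can be sketched without NUPACK. The per-position defect values below are invented, and the procedure is an illustration of the sampling rule, not Enzymer's exact algorithm.

```python
import random

def defect_weighted_mutation(seq, position_defects, alphabet="ACGU"):
    """One step of defect-weighted sampling: choose a position to rewrite
    with probability proportional to its (estimated) contribution to the
    ensemble defect, then substitute a different base there."""
    i = random.choices(range(len(seq)), weights=position_defects, k=1)[0]
    new_base = random.choice([b for b in alphabet if b != seq[i]])
    return seq[:i] + new_base + seq[i + 1:]

random.seed(1)
seq = "GGGAAACCC"
# Invented per-position defect estimates: the loop region (middle) is assumed
# to contribute most of the ensemble defect, so it is mutated most often.
defects = [0.05, 0.05, 0.05, 0.9, 0.9, 0.9, 0.05, 0.05, 0.05]
mutated = defect_weighted_mutation(seq, defects)
```

In a full designer, the defect estimates would be refreshed from the folding model after each accepted mutation, making the sampling adaptive.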

  8. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.

    PubMed

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state-of-the-art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  9. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.

    PubMed

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state-of-the-art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer.

  10. Genetic algorithms in conceptual design of a light-weight, low-noise, tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Wells, Valana L.

    1996-01-01

    This report outlines research accomplishments in the area of using genetic algorithms (GA) for the design and optimization of rotorcraft. It discusses the genetic algorithm as a search and optimization tool, outlines a procedure for using the GA in the conceptual design of helicopters, and applies the GA method to the acoustic design of rotors.

  11. A Dynamic Programming Algorithm for Optimal Design of Tidal Power Plants

    NASA Astrophysics Data System (ADS)

    Nag, B.

    2013-03-01

    A dynamic programming algorithm is proposed and demonstrated on a test case to determine the optimum operating schedule of a barrage tidal power plant to maximize the energy generation over a tidal cycle. Since consecutive sets of high and low tides can be predicted accurately for any tidal power plant site, this algorithm can be used to calculate the annual energy generation for different technical configurations of the plant. Thus an optimal choice of a tidal power plant design can be made from amongst different design configurations yielding the least cost of energy generation. Since this algorithm determines the optimal time of operation of sluice gate opening and turbine gates opening to maximize energy generation over a tidal cycle, it can also be used to obtain the annual schedule of operation of a tidal power plant and the minute-to-minute energy generation, for dissemination amongst power distribution utilities.
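A dynamic program of this shape can be sketched with a heavily simplified barrage model. The states, actions, and energy rule below are illustrative assumptions, not the paper's formulation: basin level is discretised to integer steps, "sluice" equalises basin and tide without producing energy, "hold" waits, and "generate" moves the basin one level toward the tide while yielding energy proportional to the head difference.

```python
def plan_generation(tide, n_levels=8):
    """Backward dynamic program over one discretised tidal cycle.
    State: basin level s.  best[t][s] = max energy obtainable from step t
    onward; act[t][s] remembers the optimal action for schedule recovery."""
    T = len(tide)
    NEG = float("-inf")
    best = [[NEG] * n_levels for _ in range(T + 1)]
    best[T] = [0.0] * n_levels
    act = [[None] * n_levels for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for s in range(n_levels):
            options = [("sluice", tide[t], 0.0), ("hold", s, 0.0)]
            head = s - tide[t]
            if head != 0:
                s2 = s - (1 if head > 0 else -1)     # basin moves toward tide
                options.append(("generate", s2, float(abs(head))))
            for name, s2, e in options:
                v = e + best[t + 1][s2]
                if v > best[t][s]:
                    best[t][s], act[t][s] = v, name
    # Recover the optimal action sequence, starting level = first tide level.
    s, schedule = tide[0], []
    for t in range(T):
        a = act[t][s]
        schedule.append(a)
        if a == "sluice":
            s = tide[t]
        elif a == "generate":
            s += -1 if s > tide[t] else 1
    return best[0][tide[0]], schedule

# One synthetic semi-diurnal cycle, tide levels discretised to 0..7.
tide = [4, 6, 7, 6, 4, 2, 1, 2, 4, 6, 7, 6]
energy, schedule = plan_generation(tide)
```

Because consecutive tides are predictable, running such a DP over a year of predicted tide data would give both the annual energy estimate and the minute-to-minute operating schedule the abstract mentions.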

  12. Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven

    2010-05-01

    Population growth and the progressive extension of urbanization in different parts of Iran are causing increasing demand for primary needs. Water, this vital liquid, is the most important natural need of human life. Meeting this need requires the design and construction of water distribution networks, which incur enormous costs on the country's budget. Any reduction in these costs enables more members of society to be served at least cost. Municipal councils therefore need to maximize benefits or minimize expenditures, and to achieve this the engineering design relies on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of Mahabad City's (North West Iran) water distribution network. By designing two models and comparing the resulting costs, the abilities of the GA were determined: the GA-based model could find optimum pipe diameters that reduce the design cost of the network. Results show that water distribution network design using the genetic algorithm could reduce project costs by at least 7% in comparison to the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.

  13. Stochastic sensors designed for assessment of biomarkers specific to obesity.

    PubMed

    Cioates Negut, Catalina; Stefan-van Staden, Raluca-Ioana; Ungureanu, Eleonora-Mihaela; Udeanu, Denisa Ioana

    2016-09-01

    Two stochastic sensors based on the following oleamides: 1-adamantyloleamide and N,N-dimethyl-N-(2-oleylamidoethyl)amine physically immobilized on graphite paste were designed. The sensors were able to determine simultaneously, from the whole blood of Wistar rats, three biomarkers specific to obesity: leptin, interleukin-6 (IL-6) and plasminogen activator inhibitor 1 (PAI-1). The whole blood samples were obtained from Wistar rats treated with oleoylethanolamide (OEA), (Z)-N-[(1S)-2-hydroxy-1-(phenylmethyl)ethyl]-9-octadecenamide (OLA), and with the aqueous solution of 1% Tween 80 used as the solvent for the oleamide formulations (control samples). The proposed sensors were very sensitive and reliable for the assay of obesity biomarkers in the whole blood of rats. PMID:27288757

  14. Stochastic sensors designed for assessment of biomarkers specific to obesity.

    PubMed

    Cioates Negut, Catalina; Stefan-van Staden, Raluca-Ioana; Ungureanu, Eleonora-Mihaela; Udeanu, Denisa Ioana

    2016-09-01

    Two stochastic sensors based on the following oleamides: 1-adamantyloleamide and N,N-dimethyl-N-(2-oleylamidoethyl)amine physically immobilized on graphite paste were designed. The sensors were able to determine simultaneously, from the whole blood of Wistar rats, three biomarkers specific to obesity: leptin, interleukin-6 (IL-6) and plasminogen activator inhibitor 1 (PAI-1). The whole blood samples were obtained from Wistar rats treated with oleoylethanolamide (OEA), (Z)-N-[(1S)-2-hydroxy-1-(phenylmethyl)ethyl]-9-octadecenamide (OLA), and with the aqueous solution of 1% Tween 80 used as the solvent for the oleamide formulations (control samples). The proposed sensors were very sensitive and reliable for the assay of obesity biomarkers in the whole blood of rats.

  15. Novel Designs for Application Specific MEMS Pressure Sensors

    PubMed Central

    Fragiacomo, Giulio; Reck, Kasper; Lorenzen, Lasse; Thomsen, Erik V.

    2010-01-01

    In the framework of developing innovative microfabricated pressure sensors, we present here three designs based on different readout principles, each one tailored for a specific application. A touch-mode capacitive pressure sensor with high sensitivity (14 pF/bar), low temperature dependence and a high capacitive output signal (more than 100 pF) is depicted. An optical pressure sensor intrinsically immune to electromagnetic interference, with a large pressure range (0–350 bar) and a sensitivity of 1 pm/bar, is presented. Finally, a power-source-free resonating wireless pressure sensor with a sensitivity of 650 kHz/mmHg is described. These sensors are discussed in relation to their applications in harsh environments, distributed systems and medical settings, respectively. In many respects, commercially available sensors, which in the vast majority are piezoresistive, are not suited to the applications proposed. PMID:22163425

  16. An overview of field-specific designs of microbial EOR

    SciTech Connect

    Robertson, E.P.; Bala, G.A.; Fox, S.L.; Jackson, J.D.; Thomas, C.P.

    1995-12-31

    The selection and design of an MEOR process for application in a specific field involves geological, reservoir, and biological characterization. Microbially mediated oil recovery mechanisms (biogenic gas, biopolymers, and biosurfactants) are defined by the types of microorganisms used. The engineering and biological character of a given reservoir must be understood to correctly select a microbial system to enhance oil recovery. This paper discusses the methods used to evaluate three fields with distinct characteristics and production problems for the applicability of MEOR technology. Reservoir characteristics and laboratory results indicated that MEOR would not be applicable in two of the three fields considered. The development of a microbial oil recovery process for the third field appeared promising. Development of a bacterial consortium capable of producing the desired metabolites was initiated, and field isolates were characterized.

  17. An overview of field specific designs of microbial EOR

    SciTech Connect

    Robertson, E.P.; Bala, G.A.; Fox, S.L.; Jackson, J.D.; Thomas, C.P.

    1995-12-01

    The selection and design of a microbial enhanced oil recovery (MEOR) process for application in a specific field involves geological, reservoir, and biological characterization. Microbially mediated oil recovery mechanisms (biogenic gas, biopolymers, and biosurfactants) are defined by the types of microorganisms used. The engineering and biological character of a given reservoir must be understood to correctly select a microbial system to enhance oil recovery. The objective of this paper is to discuss the methods used to evaluate three fields with distinct characteristics and production problems for the applicability of MEOR technology. Reservoir characteristics and laboratory results indicated that MEOR would not be applicable in two of the three fields considered. The development of a microbial oil recovery process for the third field appeared promising. Development of a bacterial consortium capable of producing the desired metabolites was initiated and field isolates were characterized.

  18. DITDOS: A set of design specifications for distributed data inventories

    NASA Technical Reports Server (NTRS)

    King, T. A.; Walker, R. J.; Joy, S. P.

    1995-01-01

    The analysis of space science data often requires researchers to work with many different types of data. For instance, correlative analysis can require data from multiple instruments on a single spacecraft, multiple spacecraft, and ground-based data. Typically, data from each source are available in a different format and have been written on a different type of computer, and so much effort must be spent to read the data and convert it to the computer and format that the researchers use in their analysis. The large and ever-growing amount of data and the large investment by the scientific community in software that requires a specific data format make using standard data formats impractical. A format-independent approach to accessing and analyzing disparate data is key to being able to deliver data to a diverse community in a timely fashion. The system in use at the Planetary Plasma Interactions (PPI) node of the NASA Planetary Data System (PDS) is based on the object-oriented Distributed Inventory Tracking and Data Ordering Specification (DITDOS), which describes data inventories in a storage-independent way. The specifications have been designed to make it possible to build DITDOS-compliant inventories that can exist on portable media such as CD-ROMs. The portable media can be moved within a system, or from system to system, and still be used without modification. Several applications have been developed to work with DITDOS-compliant data holdings. One is a windows-based client/server application, which helps guide the user in the selection of data. A user can select a database, then a data set, then a specific data file, and then either order the data and receive it immediately if it is online or request that it be brought online if it is not. A user can also view data by any of the supported methods. DITDOS makes it possible to use already existing applications for data-specific actions, and this is done whenever possible. Another application is a stand
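The database → data set → file hierarchy and the order-or-stage workflow can be mocked up as a small object model. Everything here (class names, the `order` method, the sample dataset and file names) is hypothetical; the abstract does not give DITDOS's actual object definitions.

```python
from dataclasses import dataclass, field

@dataclass
class DataFile:
    name: str
    online: bool = False

@dataclass
class DataSet:
    name: str
    files: list = field(default_factory=list)

@dataclass
class Inventory:
    """Toy database -> data set -> file hierarchy (hypothetical mock)."""
    name: str
    datasets: list = field(default_factory=list)

    def order(self, dataset, filename):
        """Return the file if it is online; otherwise flag a staging request,
        mirroring the 'receive immediately or bring online' workflow."""
        for ds in self.datasets:
            if ds.name == dataset:
                for f in ds.files:
                    if f.name == filename:
                        return f if f.online else "staging-requested"
        return None

inv = Inventory("PPI", [DataSet("example-dataset",
                                [DataFile("orbit01.dat", online=True),
                                 DataFile("orbit02.dat", online=False)])])
```

The point of such a storage-independent description is that the same `order` logic works whether the underlying files sit on disk, on CD-ROM, or on a remote server.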

  19. Application-specific coarse-grained reconfigurable array: architecture and design methodology

    NASA Astrophysics Data System (ADS)

    Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu

    2015-06-01

    Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. Numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploring different levels of parallelism. However, a difference remains between the CGRA and the application-specific integrated circuit (ASIC). Some application domains, such as software-defined radios (SDRs), require flexibility as performance demands increase, so more effective CGRA architectures are expected to be developed. Customisation of a CGRA according to its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented. A mapping algorithm based on ant colony optimisation is provided. Experimental results on the SDR target domain show that, compared with other ordinary and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for given applications.
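An ant-colony mapper of the kind mentioned can be sketched as a weighted assignment search. The op-to-PE cost matrix, pheromone rule, and parameters below are illustrative assumptions, not the paper's formulation.

```python
import random

def aco_map(cost, n_ants=20, n_iters=50, evap=0.1, seed=0):
    """Toy ant-colony mapper: assign each operation to a PE so as to minimise
    a given op-to-PE cost matrix (a stand-in for routing/latency cost).
    Pheromone tau[i][j] biases operation i toward PE j."""
    rng = random.Random(seed)
    n_ops, n_pes = len(cost), len(cost[0])
    tau = [[1.0] * n_pes for _ in range(n_ops)]
    best_assign, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ant in range(n_ants):
            # Each ant samples a PE per op, weighted by pheromone / cost.
            assign = [rng.choices(
                          range(n_pes),
                          weights=[tau[i][j] / (1 + cost[i][j])
                                   for j in range(n_pes)])[0]
                      for i in range(n_ops)]
            c = sum(cost[i][j] for i, j in enumerate(assign))
            if c < best_cost:
                best_assign, best_cost = assign, c
        # Evaporate pheromone, then reinforce the best-so-far assignment.
        tau = [[(1 - evap) * t for t in row] for row in tau]
        for i, j in enumerate(best_assign):
            tau[i][j] += 1.0 / (1 + best_cost)
    return best_assign, best_cost

cost = [[3, 1, 4], [2, 5, 1], [6, 2, 2], [1, 3, 5]]  # 4 ops x 3 PEs (made up)
assign, total = aco_map(cost)
```

A real mapper would also encode data-flow dependencies and routing constraints in the cost; this sketch shows only the pheromone-guided search loop.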

  20. Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.

    2015-07-01

    The inverter is the most fundamental logic gate, performing a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique, called Craziness based Particle Swarm Optimization (CRPSO), is proposed. CRPSO is very simple in concept, easy to implement and computationally efficient, with two main advantages: fast, near-global convergence and robust control parameters. The performance of PSO depends on its control parameters and may be influenced by premature convergence and stagnation problems. To overcome these problems the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, the sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of the particles. The performance of CRPSO is compared with the real-coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO-based design results are also compared with the PSPICE-based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
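The velocity update described above can be sketched as follows, using one common formulation from the CRPSO literature; the coefficients, the craziness magnitude, and the toy one-dimensional objective are all illustrative choices, not the paper's exact settings.

```python
import random

def crpso_step(x, v, pbest, gbest, c1=2.05, c2=2.05,
               v_craz=0.2, p_craz=0.3, rng=random):
    """One craziness-based PSO update for a single scalar particle."""
    r1, r2, r3 = rng.random(), rng.random(), rng.random()
    sign3 = 1.0 if r3 <= 0.5 else -1.0              # direction-reversal factor
    v = (r2 * sign3 * v
         + (1.0 - r2) * c1 * r1 * (pbest - x)
         + (1.0 - r2) * c2 * (1.0 - r1) * (gbest - x))
    r4 = rng.random()
    if r4 <= p_craz:                                # craziness condition
        sign4 = 1.0 if r4 < p_craz / 2.0 else -1.0  # second reversal factor
        v += sign4 * v_craz * rng.random()          # inject "craziness" velocity
    return x + v, v

# Minimise f(x) = x^2 with a tiny swarm to show the update in action.
random.seed(3)
f = lambda x: x * x
n = 10
xs = [random.uniform(-5.0, 5.0) for _ in range(n)]
vs = [0.0] * n
pbest = xs[:]
gbest = min(pbest, key=f)
for _ in range(100):
    for i in range(n):
        xs[i], vs[i] = crpso_step(xs[i], vs[i], pbest[i], gbest)
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest, key=f)
```

The occasional random "craziness" kick is what keeps the swarm diverse and counteracts the premature convergence the abstract describes; in the inverter design problem, `x` would be a vector of transistor sizes and `f` a circuit performance cost.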

  1. The Impact of Critical Thinking and Logico-Mathematical Intelligence on Algorithmic Design Skills

    ERIC Educational Resources Information Center

    Korkmaz, Ozgen

    2012-01-01

    The present study aims to reveal the impact of students' critical thinking and logico-mathematical intelligence levels on their algorithm design skills. This research was a descriptive study carried out by survey methods. The sample consisted of 45 first-year educational faculty undergraduate students. The data was collected by…

  2. Should dialysis modalities be designed to remove specific uremic toxins?

    PubMed

    Baurmeister, Ulrich; Vienken, Joerg; Ward, Richard A

    2009-01-01

    The definition of optimal dialysis therapy remains elusive. Randomized clinical trials have neither supported using urea as a surrogate marker for uremic toxicity nor provided clear-cut evidence in favor of larger solutes. Thus, where to focus resources in the development of new membranes and therapies remains unclear. Three basic questions remain unanswered: (i) what solute(s) should be used as a marker for optimal dialysis; (ii) should dialytic therapies be designed to remove a specific solute; and (iii) how can current therapies be modified to provide better control of uremic toxicity? Identification of a single, well-defined uremic toxin appears to be unlikely as new analytical tools reveal an increasingly complex uremic milieu. As a result, it is probable that membranes and therapies should be designed for the nonspecific removal of a wide variety of solutes retained in uremia. Removal of the widest range of solutes can best be achieved using existing therapies that incorporate convection in conjunction with longer treatment times and more frequent treatments. Membranes capable of removing solutes over an expanded effective molecular size range can already be fabricated; however, their use will require novel approaches to conserve proteins, such as albumin.

  3. Application of an evolutionary algorithm in the optimal design of micro-sensor.

    PubMed

    Lu, Qibing; Wang, Pan; Guo, Sihai; Sheng, Buyun; Liu, Xingxing; Fan, Zhun

    2015-01-01

    This paper introduces an automatic bond graph design method based on genetic programming for the evolutionary design of micro-resonators. First, the system-level behavioral model, which is based on genetic programming and bond graphs, is discussed. Then, the geometry parameters of components are automatically optimized by using a genetic algorithm with constraints. To illustrate this approach, a typical device, a micro-resonator, is designed as an example in biomedicine. This paper provides a new idea for the automatic optimization design of biomedical sensors by evolutionary computation.

  4. A homotopy algorithm for synthesizing robust controllers for flexible structures via the maximum entropy design equations

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen

    1990-01-01

    One well known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.
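
    The homotopy idea can be illustrated on a scalar root-finding problem: deform an easy problem g(x) = 0 into the target f(x) = 0 via H(x, t) = (1 - t) g(x) + t f(x) and apply Newton corrections while stepping t from 0 to 1. This is only a sketch of the continuation principle; the paper's design equations are coupled matrix Riccati and Lyapunov equations, not this scalar toy:

```python
def homotopy_solve(f, df, g, dg, x0, steps=50, newton_iters=5):
    """Track the root of H(x, t) = (1-t)*g(x) + t*f(x) from t=0 to t=1."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):  # Newton correction at fixed t
            H = (1 - t) * g(x) + t * f(x)
            dH = (1 - t) * dg(x) + t * df(x)
            x -= H / dH
    return x

# Example: solve f(x) = x**3 - 2 = 0, starting from the easy g(x) = x - 1 = 0.
root = homotopy_solve(lambda x: x**3 - 2, lambda x: 3 * x**2,
                      lambda x: x - 1, lambda x: 1.0, x0=1.0)
```

    Stepping t slowly keeps each Newton start close to the previous solution, which is what gives homotopy methods their reliability compared to a cold Newton start on the final problem.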

  5. Multidisciplinary Design, Analysis, and Optimization Tool Development using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2008-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center to automate analysis and design process by leveraging existing tools such as NASTRAN, ZAERO and CFD codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This paper describes current approaches, recent results, and challenges for MDAO as demonstrated by our experience with the Ikhana fire pod design.

  6. Computational design of the affinity and specificity of a therapeutic T cell receptor.

    PubMed

    Pierce, Brian G; Hellman, Lance M; Hossain, Moushumi; Singh, Nishant K; Vander Kooi, Craig W; Weng, Zhiping; Baker, Brian M

    2014-02-01

    T cell receptors (TCRs) are key to antigen-specific immunity and are increasingly being explored as therapeutics, most visibly in cancer immunotherapy. As TCRs typically possess only low-to-moderate affinity for their peptide/MHC (pMHC) ligands, there is a recognized need to develop affinity-enhanced TCR variants. Previous in vitro engineering efforts have yielded remarkable improvements in TCR affinity, yet concerns exist about the maintenance of peptide specificity and the biological impacts of ultra-high affinity. As opposed to in vitro engineering, computational design can directly address these issues, in theory permitting the rational control of peptide specificity together with relatively controlled increments in affinity. Here we explored the efficacy of computational design with the clinically relevant TCR DMF5, which recognizes nonameric and decameric epitopes from the melanoma-associated Melan-A/MART-1 protein presented by the class I MHC HLA-A2. We tested multiple mutations selected by flexible and rigid modeling protocols, assessed impacts on affinity and specificity, and utilized the data to examine and improve algorithmic performance. We identified multiple mutations that improved binding affinity, and characterized the structure, affinity, and binding kinetics of a previously reported double mutant that exhibits an impressive 400-fold affinity improvement for the decameric pMHC ligand without detectable binding to non-cognate ligands. The structure of this high affinity mutant indicated minimal conformational consequences and emphasized the high fidelity of our modeling procedure. Overall, our work showcases the capability of computational design to generate TCRs with improved pMHC affinities while explicitly accounting for peptide specificity, as well as its potential for generating TCRs with customized antigen targeting capabilities.

  7. A few results for using genetic algorithms in the design of electrical machines

    SciTech Connect

    Wurtz, F.; Richomme, M.; Bigeon, J.; Sabonnadiere, J.C.

    1997-03-01

    Genetic algorithms (GAs) seem attractive for the design of electrical machines, but their main difficulty is finding a configuration that makes them efficient. This paper presents a criterion and a methodology the authors have devised to find efficient configurations. The first configuration they obtained is then detailed. Results based on this configuration are presented with an example of a design problem.

  8. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    PubMed

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that utilize manual sorting and mechanical automatic grading. To design an optimal algorithm, a regression formula and R(2) value were investigated by performing a regression analysis for each of total length, body width, thickness, view area, and actual volume against abalone weights. The R(2) value between the actual volume and abalone weight was 0.999, showing a relatively high correlation. As a result, to easily estimate the actual volumes of abalones based on computer vision, the volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula was derived to estimate the volumes of abalones through linear regression analysis between the calculated and actual volumes. The final automatic abalone grading algorithm is designed using the abalone volume estimation regression formula derived from test results, and the actual volumes and abalone weights regression formula. In the range of abalones weighing from 16.51 to 128.01 g, cross-validation of the algorithm's performance indicates root mean square and worst-case prediction errors of 2.8 and ±8 g, respectively. PMID:25874500
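
    Two building blocks from the abstract can be sketched as follows: a half-oblate-ellipsoid volume computed from total length, body width, and thickness, and an ordinary least-squares line of the kind fitted between calculated and actual volumes (and between volume and weight). The axis assignment and function names are our assumptions; the calibrated regression coefficients are not given in the abstract:

```python
import math

def half_oblate_volume(length, width, thickness):
    """Volume of a half-oblate ellipsoid with semi-axes length/2 and width/2
    and height equal to the shell thickness (one plausible reading of the
    paper's shape assumption)."""
    return (2.0 / 3.0) * math.pi * (length / 2.0) * (width / 2.0) * thickness

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, the kind of linear regression
    the paper applies between calculated and actual volumes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx
```

    Grading would then chain two fitted lines, calculated volume to actual volume and actual volume to weight, with coefficients obtained from calibration data.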

  10. Application of hybrid evolutionary algorithms to low exhaust emission diesel engine design

    NASA Astrophysics Data System (ADS)

    Jeong, S.; Obayashi, S.; Minemura, Y.

    2008-01-01

    A hybrid evolutionary algorithm, consisting of a genetic algorithm (GA) and particle swarm optimization (PSO), is proposed. Generally, GAs maintain diverse solutions of good quality in multi-objective problems, while PSO shows fast convergence to the optimum solution. By coupling these algorithms, GA will compensate for the low diversity of PSO, while PSO will compensate for the high computational costs of GA. The hybrid algorithm was validated using standard test functions. The results showed that the hybrid algorithm has better performance than either a pure GA or pure PSO. The method was applied to an engineering design problem: the geometry of a diesel engine combustion chamber was optimized to reduce exhaust emissions such as NOx, soot and CO. The results demonstrated the usefulness of the present method for this engineering design problem. To identify the relation between exhaust emissions and combustion chamber geometry, data mining was performed with a self-organising map (SOM). The results indicate that the volume near the lower central part of the combustion chamber has a large effect on exhaust emissions and the optimum chamber geometry will vary depending on fuel injection angle.
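
    One common way to couple the two heuristics, roughly matching the abstract's motivation (a PSO core for fast convergence plus GA-style mutation for diversity), can be sketched as below. This is not necessarily the authors' exact hybrid; the bounds, parameters, and objective are our choices for illustration:

```python
import random

def hybrid_ga_pso(f, dim=2, n=20, iters=100, w=0.6, c1=1.4, c2=1.4,
                  p_mut=0.1):
    """Minimise f over [-5, 5]^dim with PSO updates plus GA-style mutation."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
                # GA-style mutation injects diversity into the swarm.
                if random.random() < p_mut:
                    xs[i][d] += random.gauss(0.0, 0.1)
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
        gbest = min(pbest, key=f)[:]
    return gbest
```

    On a convex test function the mutation mostly adds cost, but on multimodal landscapes it is what keeps the swarm from collapsing prematurely around an early global best.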

  11. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
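
    The core of such an ordering GA can be sketched as a permutation search that minimizes feedback couplings (outputs that feed a process scheduled earlier). DeMAID's real objective also weighs cost, time, and iteration requirements, so this toy counts only feedbacks:

```python
import random

def feedbacks(order, couplings):
    """Count feedback couplings: src feeding a dst scheduled before it."""
    pos = {p: k for k, p in enumerate(order)}
    return sum(1 for src, dst in couplings if pos[src] > pos[dst])

def order_ga(procs, couplings, pop=30, gens=200, p_mut=0.3):
    """Evolve process orderings that minimise feedbacks (elitist toy GA)."""
    population = [random.sample(procs, len(procs)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: feedbacks(o, couplings))
        survivors = population[: pop // 2]   # keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            if random.random() < p_mut:      # swap mutation on the ordering
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: feedbacks(o, couplings))
```

    Because orderings are permutations, mutation must preserve the permutation property, which is why a swap is used rather than independent per-gene changes.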

  12. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA) and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements as well as the existing features of the original version of DeMAID are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  13. Reliable design of H-2 optimal reduced-order controllers via a homotopy algorithm

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G.; Richter, Stephen; Davis, Larry D.

    1992-01-01

    Due to control processor limitations, the design of reduced-order controllers is an active area of research. Suboptimal methods based on truncating the order of the corresponding linear-quadratic-Gaussian (LQG) compensator tend to fail if the requested controller dimension is sufficiently small and/or the requested controller authority is sufficiently high. Also, traditional parameter optimization approaches have only local convergence properties. This paper discusses a homotopy algorithm for optimal reduced-order control that has global convergence properties. The exposition is for discrete-time systems. The algorithm has been implemented in MATLAB and is applied to a benchmark problem.

  14. Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design

    NASA Technical Reports Server (NTRS)

    Whorton, Mark; Buschek, Harald; Calise, Anthony J.

    1996-01-01

    Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.

  15. Sizing of complex structure by the integration of several different optimal design algorithms

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1974-01-01

    Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.

  16. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
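
    The Hockney-style model referred to above charges each transfer a fixed startup latency plus a per-word cost; the function and parameter names below are ours:

```python
def transfer_time(n_words, t_startup, t_word):
    """Hockney-style linear cost model: a fixed startup latency plus a
    per-word transfer term."""
    return t_startup + n_words * t_word

def effective_bandwidth(n_words, t_startup, t_word):
    """Achieved words/second; approaches 1/t_word as messages grow, which is
    why algorithms that aggregate small messages into large ones scale
    better on latency-dominated networks."""
    return n_words / transfer_time(n_words, t_startup, t_word)
```

    The half-performance message length (the n at which half the asymptotic bandwidth is reached) is t_startup / t_word in this model, a convenient single number for comparing networks.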

  17. A firefly algorithm for solving competitive location-design problem: a case study

    NASA Astrophysics Data System (ADS)

    Sadjadi, Seyed Jafar; Ashtiani, Milad Gorji; Ramezanian, Reza; Makui, Ahmad

    2016-07-01

    This paper aims to determine the optimal number of new facilities, as well as their optimal locations and design levels, under a budget constraint in a competitive environment, using a novel hybrid continuous and discrete firefly algorithm. A real-world application of locating new chain stores in the city of Tehran, Iran, is used and the results are analyzed. In addition, several examples have been solved to evaluate the efficiency of the proposed model and algorithm. The results demonstrate that the proposed method provides good-quality solutions for the test problems.
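
    The continuous part of a firefly algorithm moves each firefly toward brighter ones with an attractiveness that decays with distance. Below is a sketch of the standard move, treating lower cost as brighter; the paper's hybrid discrete component for design levels is not shown, and the parameter values are the usual textbook defaults rather than the authors' settings:

```python
import math
import random

def firefly_move(xi, xj, fi, fj, beta0=1.0, gamma=1.0, alpha=0.1):
    """Move firefly i (cost fi) toward firefly j (cost fj) if j is brighter;
    otherwise take only a small random walk."""
    if fj >= fi:  # j is not brighter: random walk only
        return [x + alpha * (random.random() - 0.5) for x in xi]
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

    A full optimizer loops this pairwise move over the whole population each generation, usually shrinking alpha over time to shift from exploration to refinement.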

  18. New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.

    PubMed

    Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng

    2016-05-16

    Binary phase filters have been used to achieve an optical needle with small lateral size. Designing a binary phase filter remains a scientific challenge in this field. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters and the recombination and mutation operations that originated from the genetic algorithm. In benchmark tests, the HGPSO algorithm achieved global optimization and fast convergence. In an easy-to-perform optimizing procedure, the iteration number of HGPSO is decreased to about a quarter of that of the original particle swarm optimization process. A multi-zone binary phase filter is designed by using the HGPSO. A long depth of focus and high resolution are achieved simultaneously, where the depth of focus and focal spot transverse size are 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filters with multiple parameters. PMID:27409895

  19. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time. PMID:22254462
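
    The sorted k-mer lists mentioned above can be sketched in a few lines for a single node; the actual implementation distributes these structures across BG/P compute nodes (via MPI), which is not shown, and the function names here are ours:

```python
def sorted_kmer_list(seq, k):
    """Build the sorted list of (k-mer, position) pairs for one sequence,
    the core data structure kept per node in the sketch."""
    kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
    kmers.sort()
    return kmers

def shared_kmers(seq_a, seq_b, k):
    """Anchor finding: k-mers of seq_a that also occur in seq_b, the kind of
    seeds progressiveMauve-style aligners extend into alignments."""
    set_b = {km for km, _ in sorted_kmer_list(seq_b, k)}
    return [(km, pos) for km, pos in sorted_kmer_list(seq_a, k)
            if km in set_b]
```

    Sorting the lists is what makes node-to-node merging of k-mer sets cheap, since two sorted lists can be intersected in a single linear pass.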

  1. Study on Ply Orientation Optimum Design for Composite Material Structure Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Ma, Ai-Jun

    2016-05-01

    To find the optimum ply orientation design for a composite material structure, we propose a method based on a genetic algorithm and apply it to a composite frame case. First, we describe the structure, including the solid model and the mechanical properties of the material; we then create the finite element model of the composite frame and set a static load step to obtain the displacement of the node of interest. Next, we formulate the optimization mathematical model and use a genetic algorithm to find the global optimal solution of the optimization problem, finally obtaining the best layer angles for the composite material case. The ply orientation optimum design performed well: the objective function dropped by 16.6%. This case might provide a reference for the ply orientation optimum design of similar composite structures.

  2. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  3. Use of the particle swarm optimization algorithm for second order design of levelling networks

    NASA Astrophysics Data System (ADS)

    Yetkin, Mevlut; Inal, Cevat; Yigit, Cemal Ozer

    2009-08-01

    The weight problem in geodetic networks can be dealt with as an optimization procedure. This classic problem of geodetic network optimization is also known as second-order design. The basic principles of geodetic network optimization are reviewed. Then the particle swarm optimization (PSO) algorithm is applied to a geodetic levelling network in order to solve the second-order design problem. PSO, which is an iterative-stochastic search algorithm in swarm intelligence, emulates the collective behaviour of bird flocking, fish schooling or bee swarming, to converge probabilistically to the global optimum. Furthermore, it is a powerful method because it is easy to implement and computationally efficient. Second-order design of a geodetic levelling network using PSO yields a practically realizable solution. It is also suitable for non-linear matrix functions that are very often encountered in geodetic network optimization. The fundamentals of the method and a numeric example are given.

  4. International multidimensional authenticity specification (IMAS) algorithm for detection of commercial pomegranate juice adulteration.

    PubMed

    Zhang, Yanjun; Krueger, Dana; Durst, Robert; Lee, Rupo; Wang, David; Seeram, Navindra; Heber, David

    2009-03-25

    The pomegranate fruit (Punica granatum) has become an international high-value crop for the production of commercial pomegranate juice (PJ). The perceived consumer value of PJ is due in large part to its potential health benefits based on a significant body of medical research conducted with authentic PJ. To establish criteria for authenticating PJ, a new International Multidimensional Authenticity Specifications (IMAS) algorithm was developed through consideration of existing databases and comprehensive chemical characterization of 45 commercial juice samples from 23 different manufacturers in the United States. In addition to analysis of commercial juice samples obtained in the United States, data from other analyses of pomegranate juice and fruits including samples from Iran, Turkey, Azerbaijan, Syria, India, and China were considered in developing this protocol. There is universal agreement that the presence of a highly constant group of six anthocyanins together with punicalagins characterizes polyphenols in PJ. At a total sugar concentration of 16 degrees Brix, PJ contains characteristic sugars including mannitol at >0.3 g/100 mL. Ratios of glucose to mannitol of 4-15 and of glucose to fructose of 0.8-1.0 are also characteristic of PJ. In addition, no sucrose should be present because of isomerase activity during commercial processing. A stable isotope ratio, measured by mass spectrometry, of > -25 per thousand assures that no corn or cane sugar has been added to PJ. Sorbitol was present at <0.025 g/100 mL; maltose and tartaric acid were not detected. The presence of the amino acid proline at >25 mg/L is indicative of added grape products. Malic acid at >0.1 g/100 mL indicates adulteration with apple, pear, grape, cherry, plum, or aronia juice. Other adulteration methods include the addition of highly concentrated aronia, blueberry, or blackberry juices or natural grape pigments to poor-quality juices to imitate the color of pomegranate juice, which results in
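
    The threshold criteria quoted in the abstract lend themselves to a simple rule-based check. The field names and units below are our assumptions (the real IMAS protocol is multidimensional and includes polyphenol and isotope criteria not modelled here); the numeric thresholds are the ones stated above:

```python
def imas_checks(sample):
    """Apply a subset of the stated IMAS thresholds to a sample given as a
    dict of measurements; returns the list of failed criteria."""
    failures = []
    if sample["mannitol_g_per_100ml"] <= 0.3:
        failures.append("mannitol should exceed 0.3 g/100 mL at 16 Brix")
    if not 4 <= sample["glucose"] / sample["mannitol_g_per_100ml"] <= 15:
        failures.append("glucose/mannitol ratio outside 4-15")
    if not 0.8 <= sample["glucose"] / sample["fructose"] <= 1.0:
        failures.append("glucose/fructose ratio outside 0.8-1.0")
    if sample["sucrose"] > 0.0:
        failures.append("sucrose present (should be absent)")
    if sample["proline_mg_per_l"] > 25:
        failures.append("proline > 25 mg/L suggests added grape products")
    if sample["malic_acid_g_per_100ml"] > 0.1:
        failures.append("malic acid > 0.1 g/100 mL suggests other fruit juice")
    return failures
```

    An empty return list means the sample passes this subset of criteria, not that the juice is certified authentic.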

  6. Measurement of Spray Drift with a Specifically Designed Lidar System.

    PubMed

    Gregorio, Eduard; Torrent, Xavier; Planas de Martí, Santiago; Solanelles, Francesc; Sanz, Ricardo; Rocadenbosch, Francesc; Masip, Joan; Ribes-Dasi, Manel; Rosell-Polo, Joan R

    2016-01-01

    Field measurements of spray drift are usually carried out by passive collectors and tracers. However, these methods are labour- and time-intensive and only provide point- and time-integrated measurements. Unlike these methods, the light detection and ranging (lidar) technique allows real-time measurements, obtaining information with temporal and spatial resolution. Recently, the authors have developed the first eye-safe lidar system specifically designed for spray drift monitoring. This prototype is based on a 1534 nm erbium-doped glass laser and an 80 mm diameter telescope, has scanning capability, and is easily transportable. This paper presents the results of the first experimental campaign carried out with this instrument. High coefficients of determination (R² > 0.85) were observed by comparing lidar measurements of the spray drift with those obtained by horizontal collectors. Furthermore, the lidar system allowed an assessment of the drift reduction potential (DRP) when comparing low-drift nozzles with standard ones, resulting in a DRP of 57% (preliminary result) for the tested nozzles. The lidar system was also used for monitoring the evolution of the spray flux over the canopy and to generate 2-D images of these plumes. The developed instrument is an advantageous alternative to passive collectors and opens the possibility of new methods for field measurement of spray drift. PMID:27070613

  7. Measurement of Spray Drift with a Specifically Designed Lidar System

    PubMed Central

    Gregorio, Eduard; Torrent, Xavier; Planas de Martí, Santiago; Solanelles, Francesc; Sanz, Ricardo; Rocadenbosch, Francesc; Masip, Joan; Ribes-Dasi, Manel; Rosell-Polo, Joan R.

    2016-01-01

    Field measurements of spray drift are usually carried out by passive collectors and tracers. However, these methods are labour- and time-intensive and only provide point- and time-integrated measurements. Unlike these methods, the light detection and ranging (lidar) technique allows real-time measurements, obtaining information with temporal and spatial resolution. Recently, the authors have developed the first eye-safe lidar system specifically designed for spray drift monitoring. This prototype is based on a 1534 nm erbium-doped glass laser and an 80 mm diameter telescope, has scanning capability, and is easily transportable. This paper presents the results of the first experimental campaign carried out with this instrument. High coefficients of determination (R2 > 0.85) were observed by comparing lidar measurements of the spray drift with those obtained by horizontal collectors. Furthermore, the lidar system allowed an assessment of the drift reduction potential (DRP) when comparing low-drift nozzles with standard ones, resulting in a DRP of 57% (preliminary result) for the tested nozzles. The lidar system was also used for monitoring the evolution of the spray flux over the canopy and to generate 2-D images of these plumes. The developed instrument is an advantageous alternative to passive collectors and opens the possibility of new methods for field measurement of spray drift. PMID:27070613

  8. Measurement of Spray Drift with a Specifically Designed Lidar System.

    PubMed

    Gregorio, Eduard; Torrent, Xavier; Planas de Martí, Santiago; Solanelles, Francesc; Sanz, Ricardo; Rocadenbosch, Francesc; Masip, Joan; Ribes-Dasi, Manel; Rosell-Polo, Joan R

    2016-04-08

    Field measurements of spray drift are usually carried out by passive collectors and tracers. However, these methods are labour- and time-intensive and only provide point- and time-integrated measurements. Unlike these methods, the light detection and ranging (lidar) technique allows real-time measurements, obtaining information with temporal and spatial resolution. Recently, the authors have developed the first eye-safe lidar system specifically designed for spray drift monitoring. This prototype is based on a 1534 nm erbium-doped glass laser and an 80 mm diameter telescope, has scanning capability, and is easily transportable. This paper presents the results of the first experimental campaign carried out with this instrument. High coefficients of determination (R² > 0.85) were observed by comparing lidar measurements of the spray drift with those obtained by horizontal collectors. Furthermore, the lidar system allowed an assessment of the drift reduction potential (DRP) when comparing low-drift nozzles with standard ones, resulting in a DRP of 57% (preliminary result) for the tested nozzles. The lidar system was also used for monitoring the evolution of the spray flux over the canopy and to generate 2-D images of these plumes. The developed instrument is an advantageous alternative to passive collectors and opens the possibility of new methods for field measurement of spray drift.
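    The drift reduction potential (DRP) figure quoted above follows the usual definition: the percentage reduction in drift deposition of the candidate (low-drift) nozzle relative to the reference (standard) nozzle. A minimal sketch with hypothetical drift values (the 57% in the abstract is the paper's own result; the numbers below are illustrative only):

```python
# Drift reduction potential (DRP): percentage reduction in drift deposition
# of a candidate (low-drift) nozzle relative to a reference (standard) nozzle.
def drift_reduction_potential(drift_standard: float, drift_low: float) -> float:
    """Return DRP in percent; positive values mean less drift."""
    return (1.0 - drift_low / drift_standard) * 100.0

# Hypothetical integrated drift values (arbitrary units, not from the paper):
print(drift_reduction_potential(100.0, 43.0))  # approximately 57%
```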

  9. Space Station Cathode Design, Performance, and Operating Specifications

    NASA Technical Reports Server (NTRS)

    Patterson, Michael J.; Verhey, Timothy R.; Soulas, George; Zakany, James

    1998-01-01

    A plasma contactor system was baselined for the International Space Station (ISS) to eliminate/mitigate damaging interactions with the space environment. The system represents a dual-use technology which is a direct outgrowth of the NASA electric propulsion program and, in particular, the technology development efforts on ion thruster systems. The plasma contactor includes a hollow cathode assembly (HCA), a power electronics unit, and a xenon gas feed system. Under a pre-flight development program, these subsystems were taken to the level of maturity appropriate for transfer to U.S. industry for final development. NASA's Lewis Research Center was subsequently requested by ISS to manufacture and deliver the engineering model, qualification model, and flight HCA units. To date, multiple units have been built. One cathode has demonstrated approximately 28,000 hours lifetime, two development unit HCAs have demonstrated over 10,000 hours lifetime, and one development unit HCA has demonstrated more than 32,000 ignitions. All 8 flight HCAs have been manufactured, acceptance tested, and are ready for delivery to the flight contractor. This paper discusses the requirements, mechanical design, performance, operating specifications, and schedule for the plasma contactor flight HCAs.

  10. SU-FF-T-668: A Simple Algorithm for Range Modulation Wheel Design in Proton Therapy

    SciTech Connect

    Nie, X; Nazaryan, Vahagn; Gueye, Paul; Keppel, Cynthia

    2009-06-01

    Purpose: To develop a simple algorithm for designing the range modulation wheel needed to generate a very smooth spread-out Bragg peak (SOBP) for proton therapy. Method and Materials: A simple algorithm has been developed to generate the weight factors of the pristine Bragg peaks that compose a smooth SOBP in proton therapy. We used a modified analytical Bragg peak function, based on Monte Carlo simulations with the Geant4 toolkit, as the pristine Bragg peak input to our algorithm. A simple MATLAB(R) quadratic program was introduced to optimize the cost function in our algorithm. Results: We found that the existing analytical Bragg peak function cannot be used directly as the pristine Bragg peak depth-dose input for optimizing the weight factors, since this model does not take into account the scattering introduced by the range shifters used to modify the proton beam energies. We performed Geant4 simulations for a proton energy of 63.4 MeV with a 1.08 cm SOBP for the set of pristine Bragg peaks composing this SOBP, and modified the existing analytical Bragg peak functions for their peak heights, ranges R{sub 0}, and Gaussian energy spreads {sigma}{sub E}. We found that 19 pristine Bragg peaks are sufficient to achieve an SOBP flatness of 1.5%, the best flatness reported in the literature. Conclusion: This work develops a simple algorithm to generate the weight factors used to design a range modulation wheel producing a smooth SOBP in proton radiation therapy. We have found that a moderate number of pristine Bragg peaks is sufficient to generate an SOBP with flatness below 2%. Using this simple algorithm, it is potentially possible to build a database, stored with the treatment plan, to produce a clinically acceptable SOBP.
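    The weight-factor optimization described above can be sketched as a constrained least-squares fit: given depth-dose curves of the pristine peaks, find non-negative weights whose weighted sum is flat over the modulation width. The sketch below models pristine peaks as simple Gaussians in depth, a crude stand-in for the paper's modified analytical peak functions and quadratic-programming optimizer; all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

z = np.linspace(0.0, 4.0, 400)            # depth grid, cm
ranges = np.linspace(1.9, 3.0, 19)        # ranges R0 of 19 model pristine peaks, cm
sigma = 0.1                               # width of each model peak, cm

# Columns of A are the (model) pristine Bragg peak depth-dose curves.
A = np.exp(-0.5 * ((z[:, None] - ranges[None, :]) / sigma) ** 2)

# Find non-negative weights so the weighted sum is ~1 over the plateau.
plateau = (z >= 2.0) & (z <= 2.9)
w, _ = nnls(A[plateau], np.ones(plateau.sum()))

sobp = A @ w
d = sobp[plateau]
flatness = (d.max() - d.min()) / (d.max() + d.min())
print(f"flatness over plateau: {flatness:.4%}")
```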

  11. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  12. Validation and application of modeling algorithms for the design of molecularly imprinted polymers.

    PubMed

    Liu, Bing; Ou, Lulu; Zhang, Fuyuan; Zhang, Zhijun; Li, Hongying; Zhu, Mengyu; Wang, Shuo

    2014-12-01

    In this study, four different semiempirical algorithms, modified neglect of diatomic overlap, a reparameterization of Austin Model 1, complete neglect of differential overlap, and typed neglect of differential overlap, were applied to the energy optimization of the template, monomer, and template-monomer complexes of imprinted polymers. For phosmet-, estrone-, and metolcarb-imprinted polymers, the binding energies of template-monomer complexes were calculated and the docking configurations were assessed at different molar ratios of template to monomer. It was found that two of the algorithms were not suitable for calculating the binding energy in the template-monomer complex system. For the other two algorithms, the optimum molar ratios of template to monomer obtained were consistent with the experimental results. These two algorithms were therefore selected and applied to the preparation of enrofloxacin-imprinted polymers. Meanwhile, using different molar ratios of template and monomer, we prepared imprinted and nonimprinted polymers and evaluated their adsorption of the template. It was verified that the experimental results were in good agreement with the modeling results. As a result, semiempirical algorithms show certain feasibility for designing the preparation of imprinted polymers.

  13. Iterative Fourier transform algorithm: different approaches to diffractive optical element design

    NASA Astrophysics Data System (ADS)

    Skeren, Marek; Richter, Ivan; Fiala, Pavel

    2002-10-01

    This contribution focuses on the study and comparison of different approaches to designing phase-only diffractive optical elements (PDOEs) for possible applications in laser beam shaping. In particular, new results and approaches concerning the iterative Fourier transform algorithm (IFTA) are analyzed, implemented, and compared, namely the various IFTA approaches for phase-only diffractive optical elements with quantized phase levels (either binary or multilevel structures). First, the general scheme of the IFTA iterative approach with partial quantization is briefly presented and discussed. Then, a classification of the general IFTA scheme with respect to quantization-constraint strategies is given. Based on this classification, three practically interesting approaches are chosen, further analyzed, and compared with each other. The performance of these algorithms is compared in detail in terms of the development of the signal-to-noise ratio with the number of iterations for various diffusive-type input objects. The performance is also documented by the development of the complex spectra for typical computer reconstruction results. The advantages and drawbacks of all approaches are discussed, and a brief guide to choosing a particular approach for typical design tasks is given. Finally, two ways of eliminating the amplitude within the design procedure are considered, namely direct elimination and partial elimination of the amplitude of the complex hologram function.
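    The basic IFTA loop with binary phase quantization and direct amplitude elimination can be sketched as follows; the grid size, target window, and iteration count are arbitrary choices for the demonstration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
target_amp = np.zeros((N, N))
target_amp[20:44, 20:44] = 1.0            # desired far-field signal window

phase = rng.uniform(0.0, 2.0 * np.pi, (N, N))
for _ in range(100):
    far = np.fft.fft2(np.exp(1j * phase))
    # Fourier-plane constraint: keep the phase, impose the target amplitude
    # (direct amplitude elimination).
    far = target_amp * np.exp(1j * np.angle(far))
    back = np.fft.ifft2(far)
    # Element-plane constraint: unit amplitude, phase quantized to {0, pi}.
    phase = np.where(np.abs(np.angle(back)) > np.pi / 2, np.pi, 0.0)

spectrum = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
efficiency = (spectrum * target_amp).sum() / spectrum.sum()
print(f"diffraction efficiency into signal window: {efficiency:.2%}")
```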

  14. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-to insertion target. If a failure occurs at any point during ascent, a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used: either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight; the SM effectors are used from that point through Main Engine Cutoff (MECO). Two distinct sets of Guidance and Control (G&C) algorithms are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  15. Digital IIR Filters Design Using Differential Evolution Algorithm with a Controllable Probabilistic Population Size

    PubMed Central

    Zhu, Wu; Fang, Jian-an; Tang, Yang; Zhang, Wenbing; Du, Wei

    2012-01-01

    Design of a digital infinite-impulse-response (IIR) filter is the process of synthesizing and implementing a recursive filter network so that a set of prescribed excitations results in a set of desired responses. However, the error surface of IIR filters is usually non-linear and multi-modal. In order to reliably find the global minimum, an improved differential evolution (DE) algorithm is proposed for digital IIR filter design in this paper. The suggested algorithm is a DE variant with a controllable probabilistic population size (CPDE). It considers convergence speed and computational cost simultaneously by nonperiodically adding or removing individuals according to fitness diversity. In addition, we discuss some important aspects of IIR filter design, such as the cost function value, the influence of (noise) perturbations, the convergence rate and success percentage, and parameter measurement. Simulation results show that the presented algorithm is viable and competitive: compared in numerical experiments with six existing state-of-the-art algorithm-based digital IIR filter design methods, CPDE is relatively more promising and competitive. PMID:22808191
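    For readers unfamiliar with DE itself, a plain DE/rand/1/bin sketch is shown below (it does not include the paper's controllable probabilistic population size), minimizing the multi-modal Rastrigin function as a stand-in for the non-linear, multi-modal error surface of an IIR design problem:

```python
import numpy as np

def rastrigin(x):
    """Multi-modal benchmark; global minimum 0 at the origin."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(7)
dim, npop, F, CR = 3, 30, 0.5, 0.9
pop = rng.uniform(-5.12, 5.12, (npop, dim))
fit = np.array([rastrigin(p) for p in pop])

for _ in range(300):
    for i in range(npop):
        # DE/rand/1: mutate using three distinct individuals other than i.
        a, b, c = pop[rng.choice([j for j in range(npop) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        # Binomial crossover with at least one gene from the mutant.
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        ftrial = rastrigin(trial)
        if ftrial <= fit[i]:                  # greedy one-to-one selection
            pop[i], fit[i] = trial, ftrial

print(f"best Rastrigin value found: {fit.min():.6f}")
```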

  16. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meets ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared with those of well-accepted evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to that obtained by GA and PSO. The simulation and statistical results affirm that the proposed CSA-based approach outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486

  17. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meets ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared with those of well-accepted evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to that obtained by GA and PSO. The simulation and statistical results affirm that the proposed CSA-based approach outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively.

  18. SU-E-T-305: Study of the Eclipse Electron Monte Carlo Algorithm for Patient Specific MU Calculations

    SciTech Connect

    Wang, X; Qi, S; Agazaryan, N; DeMarco, J

    2014-06-01

    Purpose: To evaluate the Eclipse electron Monte Carlo (eMC) algorithm based on patient-specific monitor unit (MU) calculations, and to propose a new factor which quantitatively predicts the discrepancy in MUs between the eMC algorithm and hand calculations. Methods: Electron treatments were planned for 61 patients in Eclipse (Version 10.0) using the eMC algorithm for Varian TrueBeam linear accelerators. For each patient, the same treatment beam angle was kept for a point dose calculation at dmax performed under the reference condition, which used an open beam with a 15×15 cm2 cone and 100 SSD. A patient-specific correction factor (PCF) was obtained by taking the ratio between this point dose and the calibration dose, which is 1 cGy per MU delivered at dmax. The hand calculation results were corrected by the PCFs and compared with the MUs from the treatment plans. Results: The MUs from the treatment plans were on average (7.1±6.1)% higher than the hand calculations. The average MU difference between the corrected hand calculations and the eMC treatment plans was (0.07±3.48)%. A correlation coefficient of 0.8 was found between (1-PCF) and the percentage difference between the treatment plan and hand calculations. Most outliers were treatment plans with small beam openings (< 4 cm) and low-energy beams (6 and 9 MeV). Conclusion: For CT-based patient treatment plans, the eMC algorithm tends to generate a larger MU than hand calculations. Caution should be taken for eMC patient plans with small field sizes and low-energy beams. We hypothesize that the PCF reflects the influence of patient surface curvature and tissue inhomogeneity on the patient-specific percent depth dose (PDD) curve and MU calculation in the eMC algorithm.
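    The PCF correction described above reduces to simple arithmetic; the sketch below uses hypothetical dose values, and other hand-calculation factors (PDD, output factors, etc.) are omitted for brevity:

```python
# Hypothetical illustration of the patient-specific correction factor (PCF):
# the plan's point dose per MU under the reference condition is divided by
# the calibration dose (1 cGy/MU at dmax), and the hand-calculated MU is
# corrected by dividing by the PCF. All numbers are made up.
calibration_dose = 1.00        # cGy per MU at dmax, reference condition
point_dose_per_mu = 0.94       # cGy per MU from the plan (hypothetical)
pcf = point_dose_per_mu / calibration_dose

prescription = 200.0           # cGy per fraction (hypothetical)
mu_hand = prescription / calibration_dose      # simplified hand calculation
mu_corrected = mu_hand / pcf                   # PCF-corrected hand MU
print(f"PCF = {pcf:.3f}, hand MU = {mu_hand:.1f}, corrected MU = {mu_corrected:.1f}")
```

With a PCF below 1, the corrected hand MU comes out higher than the raw hand calculation, consistent with the abstract's observation that plan MUs exceed hand calculations on average.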

  19. Thermal design of spiral heat exchangers and heat pipes through global best algorithm

    NASA Astrophysics Data System (ADS)

    Turgut, Oğuz Emrah; Çoban, Mustafa Turhan

    2016-07-01

    This study deals with global best algorithm based thermal design of spiral heat exchangers and heat pipes. Spiral heat exchangers are devices which are highly efficient in extremely dirty and fouling process duties. The spiral geometry inherent in the design maintains high heat transfer coefficients while avoiding the hazardous effects of fouling and uneven fluid distribution in the channels. Heat pipes are widely used in industry. Thanks to the two-phase cycle on which their operation is based, they can transfer large amounts of heat with a negligible temperature gradient. In this work, a new stochastic optimization method, the global best algorithm (GBA), is applied to the multi-objective optimization of spiral heat exchangers as well as the single-objective optimization of heat pipes. The global best algorithm is easy to implement, derivative-free, and can be reliably applied to any optimization problem. Case studies taken from the literature are solved by the proposed algorithm, and the results reported in the literature are compared with those acquired by GBA. The comparisons reveal that GBA attains better results than the literature studies in terms of solution accuracy and efficiency.

  20. Interactive evolutionary computation with minimum fitness evaluation requirement and offline algorithm design.

    PubMed

    Ishibuchi, Hisao; Sudo, Takahiko; Nojima, Yusuke

    2016-01-01

    In interactive evolutionary computation (IEC), each solution is evaluated by a human user. Usually the total number of examined solutions is very small. In some applications such as hearing aid design and music composition, only a single solution can be evaluated at a time by a human user. Moreover, accurate and precise numerical evaluation is difficult. Based on these considerations, we formulated an IEC model with the minimum requirement for fitness evaluation ability of human users under the following assumptions: They can evaluate only a single solution at a time, they can memorize only a single previous solution they have just evaluated, their evaluation result on the current solution is whether it is better than the previous one or not, and the best solution among the evaluated ones should be identified after a pre-specified number of evaluations. In this paper, we first explain our IEC model in detail. Next we propose a ([Formula: see text])ES-style algorithm for our IEC model. Then we propose an offline meta-level approach to automated algorithm design for our IEC model. The main feature of our approach is the use of a different mechanism (e.g., mutation, crossover, random initialization) to generate each solution to be evaluated. Through computational experiments on test problems, our approach is compared with the ([Formula: see text])ES-style algorithm where a solution generation mechanism is pre-specified and fixed throughout the execution of the algorithm. PMID:27026888
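    The evaluation model described above can be sketched as a (1+1)ES-style loop in which the "user" is simulated by a comparison oracle that only answers better/worse against the single memorized previous solution; the objective function and all parameters below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

def user_prefers(candidate, memorized):
    # Stand-in for the human evaluation: a binary better/worse answer only.
    return np.sum(candidate**2) < np.sum(memorized**2)

x = np.full(2, 3.0)                         # current (memorized) solution
sigma = 0.3                                 # fixed mutation strength
for _ in range(100):                        # pre-specified evaluation budget
    y = x + sigma * rng.standard_normal(2)  # present one new solution at a time
    if user_prefers(y, x):                  # the user memorizes the better one,
        x = y                               # so the best evaluated solution
                                            # is identified at the end
print(f"final solution: {x}, value: {np.sum(x**2):.4f}")
```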

  1. Thickness determination in textile material design: dynamic modeling and numerical algorithms

    NASA Astrophysics Data System (ADS)

    Xu, Dinghua; Ge, Meibao

    2012-03-01

    Textile material design is of paramount importance in the study of functional clothing design. It is therefore important to determine the dynamic heat and moisture transfer characteristics of the human body-clothing-environment system, which directly determine the heat-moisture comfort level of the human body. Based on a model of dynamic heat and moisture transfer with condensation in porous fabric at low temperature, this paper presents a new inverse problem of textile thickness determination (IPTTD). Adopting the idea of the least-squares method, we formulate the IPTTD as a function minimization problem. By means of the finite-difference method, the quasi-solution method, and direct search methods for one-dimensional minimization problems, we construct iterative algorithms for the approximate solution of the IPTTD. Numerical simulation results validate the formulation of the IPTTD and demonstrate the effectiveness of the proposed numerical algorithms.
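    A typical direct search method for the one-dimensional minimization step mentioned above is golden-section search; the sketch below minimizes a hypothetical misfit function standing in for the least-squares functional of the IPTTD:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Derivative-free 1-D minimization of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                             # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                       # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Hypothetical least-squares misfit with its minimum at a thickness of 2.0 mm:
misfit = lambda L: (L - 2.0) ** 2 + 0.5
L_opt = golden_section_min(misfit, 0.0, 5.0)
print(f"estimated thickness: {L_opt:.6f} mm")
```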

  2. Development of an algorithm to provide awareness in choosing study designs for inclusion in systematic reviews of healthcare interventions: a method study

    PubMed Central

    Peinemann, Frank; Kleijnen, Jos

    2015-01-01

    Objectives To develop an algorithm that aims to provide guidance and awareness for choosing multiple study designs in systematic reviews of healthcare interventions. Design Method study: (1) To summarise the literature base on the topic. (2) To apply the integration of various study types in systematic reviews. (3) To devise decision points and outline a pragmatic decision tree. (4) To check the plausibility of the algorithm by backtracking its pathways in four systematic reviews. Results (1) The results of our systematic review of the published literature have already been published. (2) We recaptured the experience from our four previously conducted systematic reviews that required the integration of various study types. (3) We chose length of follow-up (long, short), frequency of events (rare, frequent) and type of outcome (death, disease, discomfort, disability, dissatisfaction) as decision points, and aligned the study design labels according to the Cochrane Handbook. We also considered practical or ethical concerns and the problem of unavailable high-quality evidence. While applying the algorithm, disease-specific circumstances and the aims of interventions should be considered. (4) We confirmed the plausibility of the pathways of the algorithm. Conclusions We propose that the algorithm can help bring the seminal features of a systematic review with multiple study designs to the attention of anyone who is planning to conduct a systematic review. It aims to increase awareness, and we think that it may reduce the time burden on review authors and may contribute to the production of a higher quality review. PMID:26289450

  3. A mission-oriented orbit design method of remote sensing satellite for region monitoring mission based on evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Zhang, Jing; Yao, Huang

    2015-12-01

    Remote sensing satellites play an increasingly prominent role in environmental monitoring and disaster rescue. Taking advantage of nearly identical illumination conditions over the same location and of global coverage, most of these satellites are operated in sun-synchronous orbits. However, this inevitably brings some problems; the most significant is that the temporal resolution of a sun-synchronous satellite cannot satisfy the demands of specific region-monitoring missions. To overcome these disadvantages, two methods are exploited: the first is to build a satellite constellation containing multiple sun-synchronous satellites, as the CHARTER mechanism has done; the second is to design a non-predetermined orbit based on the concrete mission demand. An effective method for remote sensing satellite orbit design based on a multi-objective evolutionary algorithm is presented in this paper. The orbit design problem is converted into a multi-objective optimization problem, and a fast and elitist multi-objective genetic algorithm is utilized to solve it. First, the mission demand is transformed into multiple objective functions, and the six orbital elements of the satellite are taken as genes in the design space; then a simulated evolution process is performed. An optimal solution can be obtained after a specified number of generations via the evolution operations (selection, crossover, and mutation). To examine the validity of the proposed method, a case study is introduced: orbit design of an optical satellite for regional disaster monitoring, where the mission demands include minimizing the average revisit time interval among other objectives. The simulation results show that the solution obtained by our method meets the users' demands. We conclude that the method presented in this paper is efficient for remote sensing orbit design.

  4. Development and benefit analysis of a sector design algorithm for terminal dynamic airspace configuration

    NASA Astrophysics Data System (ADS)

    Sciandra, Vincent

    performance of the algorithm-generated sectors for a variety of configurations and scenarios, comparing these results to those of the current sectors. The effect of dynamic airspace configuration will then be tested by observing the effect of the update rate on the algorithm-generated sector results. Finally, the algorithm will be used with simulated data, whose evaluation will show the ability of the sector design algorithm to meet the objectives of the NextGen system. Upon validation, the algorithm may be successfully incorporated into a larger Terminal Flow Algorithm, developed by our partners at Mosaic ATM, as the final step in the TDAC process.

  5. Design And Implementation Of A Multi-Sensor Fusion Algorithm On A Hypercube Computer Architecture

    NASA Astrophysics Data System (ADS)

    Glover, Charles W.

    1990-03-01

    was obtained. This paper will also discuss the design of a completely parallel MSI algorithm.

  6. Miniature lens design and optimization with liquid lens element via genetic algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Yi-Chin; Tsai, Chen-Mu

    2008-07-01

    This paper proposes a design and optimization method via a genetic algorithm (GA) applied to a newly developed optical element, the liquid lens, used as a fast-focus group. The design takes advantage of quick focusing, which works simultaneously with modern CMOS sensors to significantly improve image quality. Such improvement is important, especially for medical imaging technologies such as laparoscopy. However, optical design with a liquid lens element has not yet achieved success; one major reason is that the lack of anomalous-dispersion glasses, with the Abbe numbers needed to correct aberrations, limits its availability. From the point of view of aberration theory, most aberrations, particularly the axial chromatic and lateral color aberration of an optical lens, depend strongly on the selection of optical glass. Therefore, in the present research, some optical layouts with a liquid lens are first discussed; next, genetic algorithms are used to replace the traditional DLS (damped least squares) method to search for the best solution using a liquid lens and to find the best glass sets combining anomalous-dispersion glasses with the materials inside the liquid lens. During the optimization work, 'geometric optics' theory and a 'multiple dynamic crossover and random gene mutation' technique are employed. Through implementation of the algorithms proposed in this paper, satisfactory elimination of axial and lateral color aberration can be achieved.

  7. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
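    The combinatorial selection step can be sketched with a small GA. Because the maximal-information criterion (sum of squared sensitivities) is separable across rows, the exact optimum is simply the top-k rows, which provides a built-in check on the GA; the sensitivity matrix below is random, not derived from a groundwater model:

```python
import numpy as np

rng = np.random.default_rng(5)
n_wells, k = 30, 5
J = rng.normal(size=(n_wells, 8))            # mock sensitivities, 8 parameters
row_score = (J**2).sum(axis=1)

def fitness(sel):                            # sel: array of k well indices
    return row_score[sel].sum()

def repair(sel):                             # enforce exactly k distinct wells
    sel = np.unique(sel)
    while sel.size < k:
        sel = np.unique(np.append(sel, rng.integers(n_wells)))
    return sel[:k]

pop = [rng.choice(n_wells, k, replace=False) for _ in range(40)]
for _ in range(100):
    scores = np.array([fitness(s) for s in pop])
    parents = [pop[i] for i in np.argsort(scores)[-20:]]   # truncation selection
    children = []
    for _ in range(40):
        p1, p2 = parents[rng.integers(20)], parents[rng.integers(20)]
        # Crossover: draw k genes from the pooled parents, then repair.
        child = repair(np.concatenate([p1, p2])[rng.permutation(2 * k)[:k]])
        if rng.random() < 0.2:                # mutation: replace one well
            child[rng.integers(k)] = rng.integers(n_wells)
            child = repair(child)
        children.append(child)
    children[0] = max(pop, key=fitness)       # elitism
    pop = children

best = max(pop, key=fitness)
exact = np.sort(row_score)[-k:].sum()         # separable criterion: top-k rows
print(f"GA/exact objective ratio: {fitness(best) / exact:.4f}")
```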

  8. Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini

    2015-03-01

    Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using a hybrid algorithm combining multi-atlas registration, active contours, and knowledge-based region separation. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta and aortic root are located by multi-atlas registration followed by active-contour refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications in these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001; volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30; volume score: p = 0.33). The total processing time was <60 sec on a standard computer. Our results show that fast, accurate and automated quantification of CAC scores from non-contrast CT is feasible.
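    Per-slice Agatston scoring as described above (thresholding at 130 HU, connected component labeling, lesion-wise weighting by peak attenuation) can be sketched on a synthetic slice; the pixel spacing and lesion values are illustrative:

```python
import numpy as np
from scipy import ndimage

def agatston_slice(hu, pixel_area_mm2):
    """Agatston contribution of one axial slice (synthetic sketch)."""
    mask = hu >= 130                          # calcium threshold, HU
    labels, n = ndimage.label(mask)           # connected component labeling
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < 1.0:                        # ignore sub-millimetre specks
            continue
        peak = hu[lesion].max()               # weight by peak attenuation
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += area * weight
    return score

slice_hu = np.zeros((64, 64))
slice_hu[10:14, 10:14] = 250                  # 4 mm2 lesion, weight 2 -> 8
slice_hu[30:32, 30:32] = 450                  # 1 mm2 lesion, weight 4 -> 4
print(agatston_slice(slice_hu, pixel_area_mm2=0.25))   # -> 12.0
```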

  9. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.
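    One of the performance bounds such graph analysis yields, a schedule-length bound that holds regardless of processor count, is the longest path through the dataflow DAG. A minimal sketch (task names and times are illustrative, not from the tool):

```python
def critical_path(tasks, deps, time):
    # Longest path through the dataflow DAG: a lower bound on the
    # schedule length no matter how many processors are available.
    # `tasks` must be in topological order; deps[t] lists t's predecessors.
    finish = {}
    for t in tasks:
        finish[t] = time[t] + max((finish[d] for d in deps.get(t, [])), default=0)
    return max(finish.values())
```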

  10. Designing LED Array for Uniform Illumination Based on Local Search Algorithm

    NASA Astrophysics Data System (ADS)

    Lei, P.; Wang, Q.; Zou, H.

    2014-03-01

    We propose a numerical optimization method based on a local search algorithm to design an LED array for a highly uniform illumination distribution. First, an initial LED array is randomly generated and the corresponding value of the objective function is calculated. Then, the value of the objective function is iteratively improved by applying local changes to the LED array until it cannot be improved further. This method can automatically design an array of LEDs with different luminous intensity values and distributions. Computer simulations show that a near-optimal LED array with a highly uniform illumination distribution on the target plane is obtained by this method.
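    The local-search loop described above can be sketched as follows. This is a simplified illustration assuming unit-intensity Lambertian emitters; the height `H`, Lambertian order `M`, and the particular non-uniformity objective are assumptions, not values from the paper. Each iteration perturbs one LED position and keeps the move only if the objective improves.

```python
import random

H, M = 1.0, 1  # assumed LED height above the target plane and Lambertian order

def illuminance(leds, px, py):
    # Unit-intensity Lambertian emitter: E = H^(M+1) / (H^2 + r^2)^((M+3)/2)
    return sum(H ** (M + 1) / (H * H + (px - lx) ** 2 + (py - ly) ** 2) ** ((M + 3) / 2)
               for lx, ly in leds)

def nonuniformity(leds, targets):
    # Relative spread of illuminance over the target grid (0 = perfectly uniform).
    vals = [illuminance(leds, px, py) for px, py in targets]
    return (max(vals) - min(vals)) / max(vals)

def local_search(leds, targets, step=0.05, iters=300):
    best = nonuniformity(leds, targets)
    for _ in range(iters):
        i = random.randrange(len(leds))
        dx, dy = random.choice([(-step, 0), (step, 0), (0, -step), (0, step)])
        cand = leds[:i] + [(leds[i][0] + dx, leds[i][1] + dy)] + leds[i + 1:]
        v = nonuniformity(cand, targets)
        if v < best:          # accept only improving local changes
            leds, best = cand, v
    return leds, best
```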

  11. Design specifications for manufacturability of MCM-C multichip modules

    SciTech Connect

    Blazek, R.; Desch, J.; Kautz, D.; Morgenstern, H.

    1996-10-01

    A comprehensive guide for ceramic-based multichip modules (MCMS) has been developed by AlliedSignal Federal Manufacturing & Technologies (FM&T) to provide manufacturability information for its customers about how MCM designs can be affected by existing process and equipment capabilities. This guide extends beyond a listing of design rules by providing information about design layout, low- temperature cofired ceramic (LTCC) substrate fabrication, MCM assembly and electrical testing Electrical mechanical packaging, environmental, and producibility issues are reviewed. Examples of three MCM designs are shown in the form of packaging cross-sectional views, LTCC substrate layer allocations, and overall MCM photographs. The guide has proven to be an effective tool for enhancing communications between MCM designers and manufacturers and producing a microcircuit that meets design requirements within the limitations of process capabilities.

  12. Smart energy management and low-power design of sensor and actuator nodes on algorithmic level for self-powered sensorial materials and robotics

    NASA Astrophysics Data System (ADS)

    Bosse, Stefan; Behrmann, Thomas

    2011-06-01

    We propose and demonstrate a design methodology for embedded systems satisfying low power requirements suitable for self-powered sensor and actuator nodes. This design methodology focuses on 1. smart energy management at runtime and 2. application-specific System-On-Chip (SoC) design at design time, contributing to low-power systems on both the algorithmic and the technology level. Smart energy management is performed spatially at runtime by a behaviour-based or state-action-driven selection from a set of different (implemented) algorithms classified by their computational power demand, and temporally by varying data processing rates. It can be shown that the power/energy consumption of an application-specific SoC design depends strongly on computational complexity. Signal and control processing is modelled at an abstract level using signal flow diagrams. These signal flow graphs are mapped to Petri nets to enable direct high-level synthesis of digital SoC circuits using a multi-process architecture with the Communicating Sequential Processes model on the execution level. Power analysis using simulation techniques at gate level provides input for the algorithmic selection at runtime, leading to a closed-loop design flow. Additionally, the signal-flow approach enables power management by varying the signal flow and data processing rates depending on the actual energy consumption, estimated energy reserve, and required Quality-of-Service.
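    The runtime selection step can be sketched as choosing, from the set of implemented algorithms, the highest-quality variant whose power demand fits the current budget. The `power` and `quality` fields are hypothetical labels for the paper's classification, and the greedy rule is an assumed policy, not the paper's exact behaviour model:

```python
def select_algorithm(algorithms, power_budget_mw):
    # Behaviour-based selection: among algorithm variants that fit the
    # current power budget, pick the one with the best processing quality.
    feasible = [a for a in algorithms if a["power"] <= power_budget_mw]
    return max(feasible, key=lambda a: a["quality"]) if feasible else None
```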

  13. Optimization design of satellite separation systems based on Multi-Island Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Xingzhi; Chen, Xiaoqian; Zhao, Yong; Yao, Wen

    2014-03-01

    The separation systems are crucial for the launch of satellites. With respect to the existing design issues of satellite separation systems, an optimization design approach based on Multi-Island Genetic Algorithm is proposed, and a hierarchical optimization of system mass and separation angular velocity is designed. Multi-Island Genetic Algorithm is studied for the problem and the optimization parameters are discussed. Dynamic analysis of ADAMS used to validate the designs is integrated with iSIGHT. Then the optimization method is employed for a typical problem using the helical compression spring mechanism, and the corresponding objective functions are derived. It turns out that the mass of compression spring catapult is decreased by 30.7% after optimization and the angular velocity can be minimized considering spring stiffness errors. Moreover, ground tests and on-orbit flight indicate that the error of separation speed is controlled within 1% and the angular velocity is reduced by nearly 90%, which proves the design result and the optimization approach.

  14. Designing an Algorithm to Preserve Privacy for Medical Record Linkage With Error-Prone Data

    PubMed Central

    Pal, Doyel; Chen, Tingting; Khethavath, Praveen

    2014-01-01

    Background Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In record linkage, protecting the patients’ privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking error-prone data while preserving data privacy at the same time is very difficult. Existing privacy preserving solutions for this problem are restricted to textual data. Objective To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. Methods To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices of cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, which allows error-prone data to be correctly matched if the distance between the two records is below a threshold. Results We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers and meets the regulations of the Health Insurance Portability and Accountability Act. It does not require a third party, and it is secure in that neither entity can learn the records in the other’s database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software works well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm.
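    Stripped of the cryptographic machinery, the matching rule of the Error-Tolerant Linking Algorithm reduces to a distance threshold on numerical attribute vectors. This plaintext sketch shows only that rule; the paper's contribution is performing it privately, which is not reproduced here:

```python
def link(record_a, record_b, threshold):
    # Records are numeric attribute vectors; declare a match when the
    # Euclidean distance between them is within the tolerance threshold.
    # (In the actual system this comparison happens under encryption.)
    d2 = sum((a - b) ** 2 for a, b in zip(record_a, record_b))
    return d2 <= threshold ** 2
```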

  15. Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs

    NASA Astrophysics Data System (ADS)

    Nikolić, Zoran; Nguyen, Ha Thai; Frantz, Gene

    2007-12-01

    Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.
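    The floating-to-fixed conversion the paper analyzes can be made concrete with Q15 arithmetic, the common 16-bit fixed-point format on DSPs. The saturation and rounding conventions below are one typical choice, not the paper's specific implementation:

```python
Q = 15  # Q15: 1 sign bit, 15 fractional bits, values in [-1, 1)

def to_q15(x):
    # Quantize a float to Q15 with rounding and saturation.
    return max(-32768, min(32767, int(round(x * (1 << Q)))))

def q15_mul(a, b):
    # 16x16 -> 32-bit multiply, then shift back down with rounding,
    # as a fixed-point DSP would do in a single MAC + shift.
    return (a * b + (1 << (Q - 1))) >> Q

a, b = to_q15(0.5), to_q15(0.25)
product = q15_mul(a, b)            # Q15 representation of ~0.125
approx = product / (1 << Q)        # back to float for inspection
```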

  16. Transportation network with fluctuating input/output designed by the bio-inspired Physarum algorithm.

    PubMed

    Watanabe, Shin; Takamatsu, Atsuko

    2014-01-01

    In this paper, we propose designing transportation network topology and traffic distribution under fluctuating conditions using a bio-inspired algorithm. The algorithm is inspired by the adaptive behavior observed in an amoeba-like organism, plasmodial slime mold, more formally known as plasmodium of Physarum polycephalum. This organism forms a transportation network to distribute its protoplasm, the fluidic contents of its cell, throughout its large cell body. In this process, the diameter of the transportation tubes adapts to the flux of the protoplasm. The Physarum algorithm, which mimics this adaptive behavior, has been widely applied to complex problems, such as maze solving and designing the topology of railroad grids, under static conditions. However, in most situations, environmental conditions fluctuate; for example, in power grids, the consumption of electric power shows daily, weekly, and annual periodicity depending on the lifestyles or the business needs of the individual consumers. This paper studies the design of network topology and traffic distribution with oscillatory input and output traffic flows. The network topology proposed by the Physarum algorithm is controlled by a parameter of the adaptation process of the tubes. We observe various rich topologies such as complete mesh, partial mesh, Y-shaped, and V-shaped networks depending on this adaptation parameter and evaluate them on the basis of three performance functions: loss, cost, and vulnerability. Our results indicate that consideration of the oscillatory conditions and the phase-lags in the multiple outputs of the network is important: The building and/or maintenance cost of the network can be reduced by introducing the oscillating condition, and when the phase-lag among the outputs is large, the transportation loss can also be reduced. We use stability analysis to reveal how the system exhibits various topologies depending on the parameter. PMID:24586616
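    The adaptation at the core of Physarum-style algorithms, tube conductivity growing with flux and decaying otherwise, is commonly written (e.g., in the Tero model) as dD/dt = |Q|^γ − μD. A discrete sketch of that update, with assumed parameter values and without the flow solver that would supply the fluxes Q:

```python
def physarum_step(D, Q, dt=0.01, gamma=1.0, mu=1.0):
    # One Euler step of tube adaptation: conductivity D[e] of each edge e
    # grows with the magnitude of the protoplasm flux Q[e] through it and
    # decays at rate mu. High-flux tubes thicken; unused tubes vanish.
    return {e: D[e] + dt * (abs(Q[e]) ** gamma - mu * D[e]) for e in D}
```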

  17. Design and simulation of imaging algorithm for Fresnel telescopy imaging system

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-yu; Liu, Li-ren; Yan, Ai-min; Sun, Jian-feng; Dai, En-wen; Li, Bing

    2011-06-01

    Fresnel telescopy (short for Fresnel telescopy full-aperture synthesized imaging ladar) is a new high-resolution active laser imaging technique. The technique is a variant of Fourier telescopy and optical scanning holography that uses Fresnel zone plates to scan the target. Compared with synthetic aperture imaging ladar (SAIL), Fresnel telescopy avoids the problems of time and space synchronization, which decreases the technical difficulty. In the one-dimensional (1D) scanning mode for a moving target, after time-to-space transformation, the spatial distribution of the sampling data is non-uniform because of the relative motion between the target and the scanning beam. However, since the subsequent matched-filtering imaging algorithm uses the fast Fourier transform (FFT), the data distribution must be regular and uniform. We use resampling interpolation to transform the data into a two-dimensional (2D) uniform distribution, and the accuracy of the resampling interpolation mainly determines the reconstruction quality. Imaging algorithms with different resampling interpolation schemes are analyzed, and computer simulations are given. We obtain good reconstructions of the target, which shows that the designed imaging algorithm for the Fresnel telescopy imaging system is effective. This work has substantial practical value for high-resolution Fresnel telescopy laser imaging ladar systems.

  18. Design and Evaluation of a Dynamic Programming Flight Routing Algorithm Using the Convective Weather Avoidance Model

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Grabbe, Shon; Mukherjee, Avijit

    2010-01-01

    The optimization of traffic flows in congested airspace with varying convective weather is a challenging problem. One approach is to generate shortest routes between origins and destinations while meeting airspace capacity constraints in the presence of uncertainties, such as weather and airspace demand. This study focuses on the development of an optimal flight path search algorithm that optimizes national airspace system throughput and efficiency in the presence of uncertainties. The algorithm is based on dynamic programming and utilizes the predicted probability that an aircraft will deviate around convective weather. It is shown that the running time of the algorithm increases linearly with the total number of links between all stages. The optimal routes minimize a combination of fuel cost and expected cost of route deviation due to convective weather. They are considered as alternatives to the set of coded departure routes which are predefined by the FAA to reroute pre-departure flights around weather or air traffic constraints. A formula, which calculates the predicted probability of deviation from a given flight path, is also derived. The predicted probability of deviation is calculated for all path candidates, and routes with the best probability are selected as optimal. The predicted probability of deviation serves as a computable measure of reliability in pre-departure rerouting. The algorithm can also be extended to automatically adjust its design parameters to satisfy the desired level of reliability.
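    A stage-wise dynamic program of the kind described runs in time linear in the total number of links, as the abstract notes. In this sketch, `cost(u, v)` is assumed to already combine fuel cost and expected weather-deviation cost for the link; the node names and costs in the test are illustrative:

```python
def shortest_route(stages, cost):
    # stages[0] holds the origin, stages[-1] the destination; cost(u, v)
    # is the combined fuel + expected-deviation cost of link u -> v.
    best = {stages[0][0]: (0.0, None)}  # node -> (cumulative cost, predecessor)
    for prev, cur in zip(stages, stages[1:]):
        for v in cur:
            # One pass over incoming links per node: total work is linear
            # in the number of links between all stages.
            c, u = min((best[u][0] + cost(u, v), u) for u in prev)
            best[v] = (c, u)
    # Backtrack the optimal route from the destination.
    path, node = [], stages[-1][0]
    while node is not None:
        path.append(node)
        node = best[node][1]
    return path[::-1], best[stages[-1][0]][0]
```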

  19. A requirements specification for a software design support system

    NASA Technical Reports Server (NTRS)

    Noonan, Robert E.

    1988-01-01

    Most existing software design systems (SDSS) support the use of only a single design methodology. A good SDSS should support a wide variety of design methods and languages, including structured design, object-oriented design, and finite state machines. It might seem that a multiparadigm SDSS would be expensive in both time and money to construct. However, it is proposed that instead an extensible SDSS that directly implements only minimal database and graphical facilities be constructed. In particular, it should not directly implement tools to facilitate language definition and analysis. It is believed that such a system could be rapidly developed and put into limited production use, with the experience gained used to refine and evolve the system over time.

  20. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, improving product quality while reducing development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the work transformation matrix (WTM) model are discussed, and the tearing approach as well as an inner iteration method are used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. First, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Second, a hybrid iteration model combining these two techniques is set up. Third, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's reasonability and effectiveness. PMID:25431584
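    The classic WTM iteration that the paper builds on can be sketched directly: the work created by each design iteration is the work transformation matrix applied to the previous work vector, and the accumulated total converges when the matrix's spectral radius is below 1. The 2-task matrix in the test is illustrative:

```python
def iterate_work(A, u0, tol=1e-9, max_iter=1000):
    # WTM model: u_{t+1} = A u_t is the rework generated by iteration t;
    # the total work sum_t u_t converges when the spectral radius of the
    # work transformation matrix A is below 1.
    n = len(u0)
    u, total = list(u0), list(u0)
    for _ in range(max_iter):
        u = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
        total = [t + w for t, w in zip(total, u)]
        if max(u) < tol:
            break
    return total
```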

  1. 78 FR 28258 - mPower\\TM\\ Design-Specific Review Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ... COMMISSION mPower\\TM\\ Design-Specific Review Standard AGENCY: Nuclear Regulatory Commission. ACTION: Design-Specific Review Standard (DSRS) for the mPower\\TM\\ Design; request for comment. SUMMARY: The U.S. Nuclear... the mPower\\TM\\ design (mPower\\TM\\ DSRS). The purpose of the mPower\\TM\\ DSRS is to more fully...

  2. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    DOE PAGES

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.; Castaing, Jeremy

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.

  3. A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks

    PubMed Central

    Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza

    2015-01-01

    Local access networks (LAN) are commonly used as communication infrastructures which meet the demand of a set of users in the local environment. Usually these networks consist of several LAN segments connected by bridges. The topological LAN design bi-level problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a minimum response time network with minimum connection cost. Therefore, the decision of optimally assigning users to clusters will be made by the leader, and the follower will make the decision of connecting all the clusters while forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a Local Access Network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower’s problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach. PMID:26102502
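    In a Stackelberg-Genetic scheme like the one described, every leader decision (an assignment of users to clusters) must be evaluated by solving the follower's problem: connecting all clusters with a spanning tree. A minimal sketch of that inner step, using Kruskal's algorithm with union-find (the edge weights in the test are illustrative bridge costs, not instance data from the paper):

```python
def mst_cost(edges, n):
    # Follower's inner problem: minimum spanning tree over the n clusters.
    # edges: iterable of (weight, u, v) with cluster indices 0..n-1.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):          # Kruskal: cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep edge only if it joins components
            parent[ru] = rv
            total += w
    return total
```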

  4. A High Precision Position Sensor Design and Its Signal Processing Algorithm for a Maglev Train

    PubMed Central

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is derived and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run. PMID:22778582
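    The structure of a discrete-time tracking differentiator, one state tracking the input signal and a second state tracking its derivative, can be illustrated with a simplified *linear* second-order variant. This is not the paper's nonlinear optimal-control TD; the gain `r` and step `h` are assumed values chosen only to show the tracking behaviour:

```python
def td_step(x1, x2, v, h=0.01, r=50.0):
    # Simplified linear tracking differentiator: x1 tracks the input v,
    # x2 tracks its derivative. Larger r gives faster tracking at the
    # cost of less noise filtering.
    a = -r * r * (x1 - v) - 2.0 * r * x2   # critically damped acceleration
    return x1 + h * x2, x2 + h * a
```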

  5. A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Uttara

    2012-10-01

    This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to find the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering challenges and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against genetic algorithm, simulated annealing, and (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.
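    The essence of the configuration problem, splitting a fixed number of cells into series and parallel so that power delivered at the load voltage is maximized, can be sketched with an assumed linear polarization model. The open-circuit voltage and cell resistance are illustrative placeholders, not the model from Journal of Power Sources 131:

```python
def stack_power(ns, np_, v_load, v_oc=1.0, r_cell=0.5):
    # Assumed linear polarization: v_cell = v_oc - r_cell * i_cell.
    v_cell = v_load / ns               # series cells share the load voltage
    i_cell = (v_oc - v_cell) / r_cell  # current each series string carries
    return v_load * i_cell * np_ if i_cell > 0 else 0.0

def best_config(n_total, v_load):
    # Brute-force over exact series x parallel factorizations of the cell
    # count (the paper's heuristic searches a far larger mixed space).
    configs = [(ns, n_total // ns) for ns in range(1, n_total + 1)
               if n_total % ns == 0]
    return max(configs, key=lambda c: stack_power(c[0], c[1], v_load))
```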

  6. A high precision position sensor design and its signal processing algorithm for a maglev train.

    PubMed

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is derived and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run.

  7. A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks.

    PubMed

    Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza

    2015-01-01

    Local access networks (LAN) are commonly used as communication infrastructures which meet the demand of a set of users in the local environment. Usually these networks consist of several LAN segments connected by bridges. The topological LAN design bi-level problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a minimum response time network with minimum connection cost. Therefore, the decision of optimally assigning users to clusters will be made by the leader, and the follower will make the decision of connecting all the clusters while forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a Local Access Network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower's problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach.

  8. NASA software specification and evaluation system design, part 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The research to develop methods for reducing the effort expended in software development and verification is reported. The development of a formal software requirements methodology, a formal specifications language, a programming language, a language preprocessor, and code analysis tools is discussed.

  9. Specific volume coupling and convergence properties in hybrid particle/finite volume algorithms for turbulent reactive flows

    NASA Astrophysics Data System (ADS)

    Popov, Pavel P.; Wang, Haifeng; Pope, Stephen B.

    2015-08-01

    We investigate the coupling between the two components of a Large Eddy Simulation/Probability Density Function (LES/PDF) algorithm for the simulation of turbulent reacting flows. In such an algorithm, the Large Eddy Simulation (LES) component provides a solution to the hydrodynamic equations, whereas the Lagrangian Monte Carlo Probability Density Function (PDF) component solves for the PDF of chemical compositions. Special attention is paid to the transfer of specific volume information from the PDF to the LES code: the specific volume field contains probabilistic noise due to the nature of the Monte Carlo PDF solution, and thus the use of the specific volume field in the LES pressure solver needs careful treatment. Using a test flow based on the Sandia/Sydney Bluff Body Flame, we determine the optimal strategy for specific volume feedback. Then, the overall second-order convergence of the entire LES/PDF procedure is verified using a simple vortex ring test case, with special attention being given to bias errors due to the number of particles per LES Finite Volume (FV) cell.

  10. Enhanced detection criteria in implantable cardioverter defibrillators: sensitivity and specificity of the stability algorithm at different heart rates.

    PubMed

    Kettering, K; Dörnberger, V; Lang, R; Vonthein, R; Suchalla, R; Bosch, R F; Mewis, C; Eigenberger, B; Kühlkamp, V

    2001-09-01

    The lack of specificity in the detection of ventricular tachyarrhythmias remains a major clinical problem in ICD therapy. The stability criterion has been shown to be useful in discriminating ventricular tachyarrhythmias characterized by a small variation in cycle lengths from AF with rapid ventricular response presenting a higher degree of variability of RR intervals. But RR variability decreases with increasing heart rate during AF. Therefore, the aim of the study was to determine whether the sensitivity and specificity of the STABILITY algorithm for spontaneous tachyarrhythmias are related to ventricular rate. Forty-two patients who had received an ICD (CPI Ventak Mini I, II, III or Ventak AV) were enrolled in the study. Two hundred ninety-eight episodes of AF with rapid ventricular response and 817 episodes of ventricular tachyarrhythmias were analyzed. Sensitivity and specificity in the detection of ventricular tachyarrhythmias were calculated at different heart rates. When a stability value of 30 ms was programmed, the result was a sensitivity of 82.7% and a specificity of 91.4% in the detection of slow ventricular tachyarrhythmias (heart rate < 150 beats/min). When faster ventricular tachyarrhythmias with rates between 150 and 169 beats/min (170-189 beats/min) were analyzed, a stability value of 30 ms provided a sensitivity of 94.5% (94.7%) and a specificity of 76.5% (54.0%). For arrhythmia episodes > or = 190 beats/min, the same stability value resulted in a sensitivity of 78.2% and a specificity of 41.0%. Even when other stability values were taken into consideration, no acceptable sensitivity/specificity values could be obtained in this subgroup. RR variability decreases with increasing heart rate during AF while RR variability remains almost constant at different cycle lengths during ventricular tachyarrhythmias. Thus, acceptable performance of the STABILITY algorithm appears to be limited to ventricular rate zones < 170 beats/min.
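    The logic of a stability criterion can be sketched as an RR-interval variability check: a rhythm with nearly constant cycle lengths is classified as ventricular tachycardia, while one with large RR spread suggests conducted AF. This is a deliberate simplification of the device's windowed interval comparison; only the 30 ms setting comes from the study above:

```python
def is_stable(rr_intervals_ms, stability_ms=30):
    # Simplified stability criterion: the rhythm is "stable" (VT-like)
    # when the spread of RR intervals stays within the programmed window;
    # a larger spread points to AF with rapid ventricular response.
    return max(rr_intervals_ms) - min(rr_intervals_ms) <= stability_ms
```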

  11. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  12. Binary particle swarm optimization algorithm assisted to design of plasmonic nanospheres sensor

    NASA Astrophysics Data System (ADS)

    Kaboli, Milad; Akhlaghi, Majid; Shahmirzaee, Hossein

    2016-04-01

    In this study, a coherent perfect absorption (CPA)-type sensor based on plasmonic nanoparticles is proposed. It consists of an array of plasmonic nanospheres on top of a quartz substrate. Refractive index changes above the sensor surface, caused by the appearance of a gas or the adsorption of biomolecules, can be detected by measuring the resulting spectral shifts of the absorption coefficient. Since the CPA efficiency depends strongly on the number and locations of the plasmonic nanoparticles, a binary particle swarm optimization (BPSO) algorithm is used to design an optimized array of nanospheres. The optimized structure should maximize the absorption coefficient at a single frequency. In the BPSO algorithm, the swarm is represented by a matrix with binary entries that controls the nanospheres in the array: a '1' denotes the presence of a nanosphere at a site and a '0' its absence. The sensor can be used for sensing both gases and low-refractive-index materials in an aqueous environment.
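
    The binary PSO update can be sketched as below. The fitness function here is a toy surrogate (agreement with a hypothetical target layout); in the paper the fitness would be the absorption coefficient computed by an electromagnetic solver.

```python
import math, random
random.seed(1)

# Minimal binary PSO (Kennedy-Eberhart style) sketch. TARGET and the fitness
# are invented surrogates for the full-wave absorption computation.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # hypothetical optimal layout
N = len(TARGET)

def fitness(bits):                  # surrogate: agreement with target layout
    return sum(b == t for b, t in zip(bits, TARGET))

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def bpso(n_particles=15, iters=60, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(N)] for _ in range(n_particles)]
    vel = [[0.0] * N for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # binary update: velocity maps to a bit probability
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = bpso()
print(fitness(best), "of", N)
```

    Each matrix entry ('1' = nanosphere present, '0' = absent) is updated stochastically through the sigmoid of its velocity, which is the standard binary variant of PSO.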

  13. Reprogramming homing endonuclease specificity through computational design and directed evolution.

    PubMed

    Thyme, Summer B; Boissel, Sandrine J S; Arshiya Quadri, S; Nolan, Tony; Baker, Dean A; Park, Rachel U; Kusak, Lara; Ashworth, Justin; Baker, David

    2014-02-01

    Homing endonucleases (HEs) can be used to induce targeted genome modification to reduce the fitness of pathogen vectors such as the malaria-transmitting Anopheles gambiae and to correct deleterious mutations in genetic diseases. We describe the creation of an extensive set of HE variants with novel DNA cleavage specificities using an integrated experimental and computational approach. Using computational modeling and an improved selection strategy, which optimizes specificity in addition to activity, we engineered an endonuclease to cleave in a gene associated with Anopheles sterility and another to cleave near a mutation that causes pyruvate kinase deficiency. In the course of this work we observed unanticipated context-dependence between bases which will need to be mechanistically understood for reprogramming of specificity to succeed more generally. PMID:24270794

  14. Formal design specification of a Processor Interface Unit

    NASA Technical Reports Server (NTRS)

    Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.

    1992-01-01

    This report describes work to formally specify the requirements and design of a processor interface unit (PIU), a single-chip subsystem providing memory interface, bus interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space that require extremely high levels of mission reliability, extended maintenance-free operation, or both. The need for high-quality design assurance in such applications is undisputed, given the disastrous consequences that even a single design flaw can produce. Thus, the further development and application of formal methods to fault-tolerant systems is of critical importance as these systems see increasing use in modern society.

  15. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    PubMed

    Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart

    2016-01-01

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population.
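
    The locus-averaged Shannon entropy (LASE) idea can be sketched with a much-simplified stand-in for the MOLO procedure: within each of k uniform windows along a chromosome, keep the single most informative SNP, where informativeness is the entropy of its allele frequency. The positions and minor-allele frequencies below are hypothetical.

```python
import math

# Simplified sketch of entropy-guided SNP selection (not the full MOLO
# algorithm, which jointly optimizes map length, entropy, and uniformity).
def shannon_entropy(p):
    """Entropy (bits) of a biallelic SNP with minor-allele frequency p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_snps(positions, mafs, chrom_length, k):
    """Keep at most one SNP per uniform window, maximizing entropy."""
    chosen = []
    width = chrom_length / k
    for w in range(k):
        lo, hi = w * width, (w + 1) * width
        in_window = [i for i, x in enumerate(positions) if lo <= x < hi]
        if in_window:
            chosen.append(max(in_window, key=lambda i: shannon_entropy(mafs[i])))
    return chosen

positions = [5, 12, 18, 33, 41, 47, 62, 71, 88, 95]      # hypothetical Mb
mafs      = [0.01, 0.45, 0.30, 0.50, 0.05, 0.25, 0.40, 0.10, 0.49, 0.02]
picked = select_snps(positions, mafs, chrom_length=100, k=4)
print(picked)  # indices of the retained SNPs, one per occupied window
```

    Entropy peaks at a minor-allele frequency of 0.5, so the windowed maximum favors common, informative variants while the windows themselves keep the chip approximately evenly spaced.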

  17. Double images encryption method with resistance against the specific attack based on an asymmetric algorithm.

    PubMed

    Wang, Xiaogang; Zhao, Daomu

    2012-05-21

    A double-image encryption technique based on an asymmetric algorithm is proposed. In this method, the encryption process differs from the decryption process, and the encryption keys differ from the decryption keys. In the nonlinear encryption process, the images are encoded into an amplitude ciphertext, and two phase-only masks (POMs) generated by phase truncation are kept as keys for decryption. By using the classical double random phase encoding (DRPE) system, the primary images can be collected by an intensity detector located at the output plane. The three random POMs applied in the asymmetric encryption can safely serve as public keys. Simulation results are presented to demonstrate the validity and security of the proposed protocol.
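
    The classical DRPE system referred to at the decryption stage is linear and easy to sketch with FFTs. The sketch below shows only that linear encode/decode round trip for a single stand-in image; the paper's nonlinear phase-truncation encryption step is omitted.

```python
import numpy as np
rng = np.random.default_rng(42)

# Sketch of classical double random phase encoding (DRPE): mask the image at
# the input plane, Fourier transform, mask at the Fourier plane, transform
# back. Decryption applies the conjugate masks in reverse order.
n = 8
img = rng.random((n, n))                      # stand-in primary image
m1 = np.exp(2j * np.pi * rng.random((n, n)))  # input-plane random phase mask
m2 = np.exp(2j * np.pi * rng.random((n, n)))  # Fourier-plane random phase mask

cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)           # encode
decoded = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1)
print(np.allclose(decoded, img))  # True: the primary image is recovered
```

    Because both masks are unit-modulus, multiplying by their conjugates exactly undoes them; this linearity is precisely why plain DRPE is vulnerable to the specific (known-plaintext) attack the asymmetric scheme is designed to resist.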

  18. Specifications for a COM Catalog Designed for Government Documents.

    ERIC Educational Resources Information Center

    Copeland, Nora S.; And Others

    Prepared in MARC format in accordance with the Ohio College Library Center (OCLC) standards, these specifications were developed at Colorado State University to catalog a group of government publications not listed in the Monthly Catalog of United States Publications. The resulting microfiche catalog produced through the OCLC Cataloging Subsystem…

  19. Languages for Specific Purposes. Program Design and Evaluation.

    ERIC Educational Resources Information Center

    Mackay, Ronald, Ed.; Palmer, Joe Darwin, Ed.

    This collection of research on curriculum and program development in languages for special purposes (LSP) contains the following papers: (1) "LSP Curriculum Development--From Policy to Practice," by Ronald Mackay and Maryse Bosquet; (2) "The Problem of Needs Assessment in English for Specific Purposes: Some Theoretical and Practical…

  20. As-built design specification for segment map (Sgmap) program

    NASA Technical Reports Server (NTRS)

    Tompkins, M. A. (Principal Investigator)

    1981-01-01

    The segment map program (SGMAP), which is part of the CLASFYT package, is described in detail. This program is designed to output symbolic maps or numerical dumps from LANDSAT cluster/classification files or aircraft ground truth/processed ground truth files which are in 'universal' format.

  1. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... in an explosive atmosphere must be approved by an independent laboratory as components that... cause formation of static electricity. (e) A monitor must be designed to operate in each plane that... warning signal and a signal that can be used to actuate valves in a vessel's fixed piping system, when—...

  2. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... in an explosive atmosphere must be approved by an independent laboratory as components that... cause formation of static electricity. (e) A monitor must be designed to operate in each plane that... warning signal and a signal that can be used to actuate valves in a vessel's fixed piping system, when—...

  3. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... in an explosive atmosphere must be approved by an independent laboratory as components that... cause formation of static electricity. (e) A monitor must be designed to operate in each plane that... warning signal and a signal that can be used to actuate valves in a vessel's fixed piping system, when—...

  4. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... in an explosive atmosphere must be approved by an independent laboratory as components that... cause formation of static electricity. (e) A monitor must be designed to operate in each plane that... warning signal and a signal that can be used to actuate valves in a vessel's fixed piping system, when—...

  5. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... in an explosive atmosphere must be approved by an independent laboratory as components that... cause formation of static electricity. (e) A monitor must be designed to operate in each plane that... warning signal and a signal that can be used to actuate valves in a vessel's fixed piping system, when—...

  6. Algorithms for sum-of-squares-based stability analysis and control design of uncertain nonlinear systems

    NASA Astrophysics Data System (ADS)

    Ataei-Esfahani, Armin

    In this dissertation, we present algorithmic procedures for sum-of-squares based stability analysis and control design for uncertain nonlinear systems. In particular, we consider the case of robust aircraft control design for a hypersonic aircraft model subject to parametric uncertainties in its aerodynamic coefficients. In recent years, the Sum-of-Squares (SOS) method has attracted increasing interest as a new approach for stability analysis and controller design of nonlinear dynamic systems. Through the SOS method, one can cast a stability analysis or control design problem as a convex optimization problem, which can be solved efficiently using Semidefinite Programming (SDP) solvers. For nominal systems, the SOS method provides a reliable and fast approach for stability analysis and control design of low-order systems defined over relatively low-degree polynomials. However, the SOS method is not well suited for control problems involving uncertain systems, especially those with a relatively high number of uncertainties or a non-affine uncertainty structure. To avoid the increased complexity of SOS problems for uncertain systems, we present an algorithm that transforms an SOS problem with uncertainties into an LMI problem with uncertainties. A new Probabilistic Ellipsoid Algorithm (PEA) is given to solve the robust LMI problem, which can guarantee the feasibility of a given solution candidate with an a priori fixed probability of violation and a fixed confidence level. We also introduce two approaches to approximate the robust region of attraction (RROA) for uncertain nonlinear systems with non-affine dependence on uncertainties. The first approach is based on a combination of the PEA and the SOS method and searches for a common Lyapunov function, while the second approach is based on the generalized Polynomial Chaos (gPC) expansion theorem combined with the SOS method and searches
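
    The sampling idea behind a probabilistic robustness test can be sketched as follows: draw random uncertainty samples and check a Lyapunov LMI for each, estimating the violation probability of a fixed candidate. (This is only the feasibility-checking half; the actual PEA also updates the candidate ellipsoid. The system matrices below are invented for illustration.)

```python
import numpy as np
rng = np.random.default_rng(0)

# Sketch: estimate the probability that a candidate Lyapunov matrix P
# violates the LMI A(d)'P + P A(d) < 0 over sampled uncertainties d.
def violation_rate(P, A_nominal, n_samples=2000, radius=0.3):
    violations = 0
    n = A_nominal.shape[0]
    for _ in range(n_samples):
        delta = rng.uniform(-radius, radius, size=(n, n))
        A = A_nominal + delta
        M = A.T @ P + P @ A                       # must be negative definite
        if np.max(np.linalg.eigvalsh((M + M.T) / 2)) >= 0:
            violations += 1
    return violations / n_samples

A = np.array([[-2.0, 1.0], [0.0, -3.0]])          # stable nominal dynamics
P = np.eye(2)                                     # candidate Lyapunov matrix
print(violation_rate(P, A))                       # fraction of violated samples
```

    With enough samples, standard scenario bounds relate the observed violation rate to the true violation probability at a chosen confidence level, which is the guarantee the PEA provides a priori.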

  7. Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the storage tank to the external tank is one of the most important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components: the storage tank filled with cryogenic fluid at very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing excess fuel. Parameters of particular use for design purposes include predictions of the pre-chill time, loading time, amount of fuel lost, and maximum pressure rise. The mathematical modeling is quite complex because the process is unsteady, phase change occurs as some of the fuel changes from liquid to gas, and there is conjugate heat transfer both within the pipe walls and between the solid and fluid regions. The simulation is therefore tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of numerical modeling towards the design of such a system. The students first become familiar with the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.

  8. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO was developed by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in each iteration. Every cat has its own position composed of M dimensions, a velocity for each dimension, a fitness value representing the accommodation of the cat to the fitness function, and a flag identifying whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats, and CSO keeps the best solution found until the end of the iterations. The results of the proposed CSO-based approach have been compared with those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The results confirm the superiority of CSO for solving FIR filter design problems: the performance of the CSO-designed FIR filters proves superior to that of filters obtained with RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters.
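
    Whatever the optimizer (CSO, RGA, PSO or DE), the core of such a design problem is the fitness function: the deviation of the filter's magnitude response from the ideal brick-wall characteristic. The sketch below shows one plausible fitness for a linear-phase low-pass design, with coefficient symmetry enforced so the phase stays exactly linear; the cutoff, grid size, and error norm are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

# Sketch of a fitness function for linear-phase FIR low-pass design: squared
# error between |H(e^jw)| and an ideal brick-wall response on a grid.
def fir_fitness(half_coeffs, cutoff=0.25, n_grid=256):
    h = np.concatenate([half_coeffs, half_coeffs[::-1]])   # even-symmetric taps
    w = np.linspace(0, np.pi, n_grid)
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)
    ideal = np.where(w <= cutoff * np.pi, 1.0, 0.0)
    return float(np.sum((H - ideal) ** 2))

# Sanity check of the fitness ranking a swarm optimizer relies on: a crude
# truncated-sinc half-filter (a hypothetical "cat position") should score
# better than a flat rectangular averager of the same length.
M = 8
n = np.arange(M) - (2 * M - 1) / 2.0
sinc_half = 0.25 * np.sinc(0.25 * n)
print(fir_fitness(sinc_half) < fir_fitness(np.ones(M) / (2 * M)))  # True
```

    An optimizer such as CSO would treat `half_coeffs` as a cat's position vector and minimize `fir_fitness`, with the half-length encoding halving the search dimension.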

  9. Adaptive filter design based on the LMS algorithm for delay elimination in TCR/FC compensators.

    PubMed

    Hooshmand, Rahmat Allah; Torabian Esfahani, Mahdi

    2011-04-01

    Thyristor controlled reactor with fixed capacitor (TCR/FC) compensators have the capability of compensating reactive power and improving power quality phenomena. Delay in the response of such compensators degrades their performance. In this paper, a new method based on adaptive filters (AF) is proposed in order to eliminate this delay and speed up the response of the TCR compensator. The algorithm designed for the adaptive filters is based on the least mean square (LMS) algorithm. In this design, band-pass LC filters are used instead of fixed capacitors. To evaluate the filter, a TCR/FC compensator was used for the nonlinear, time-varying loads of electric arc furnaces (EAFs). These loads cause power quality phenomena in the supplying system, such as voltage fluctuation and flicker, odd and even harmonics, and unbalance in voltage and current. The design was implemented in a realistic system model of a steel complex. The simulation results show that applying the proposed control in the TCR/FC compensator efficiently eliminated the delay in the response and improved the performance of the compensator in the power system.
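
    The LMS update at the heart of such an adaptive filter is compact enough to sketch. Below it is shown in a toy system-identification setting (recovering a hypothetical 4-tap plant from clean input/output data) rather than the paper's compensator loop; the update rule is the same.

```python
import numpy as np
rng = np.random.default_rng(3)

# Minimal LMS sketch: adapt FIR weights so the filter output tracks a
# desired signal d[n] from input x[n].
def lms(x, d, n_taps=4, mu=0.05):
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1: n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ u                     # instantaneous error
        w = w + mu * e * u                   # LMS weight update
    return w

unknown = np.array([0.5, -0.3, 0.2, 0.1])    # hypothetical plant to identify
x = rng.standard_normal(5000)
d = np.convolve(x, unknown)[: len(x)]        # desired = plant output
w = lms(x, d)
print(np.round(w, 2))                        # approximately recovers `unknown`
```

    The step size `mu` trades convergence speed against steady-state weight noise; in the compensator application it would govern how quickly the filter tracks the rapidly varying EAF load.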

  11. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well established and based on Bird's 1994 algorithms written in Fortran 77, and it has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  12. The design of ROM-type holographic memory with iterative Fourier transform algorithm

    NASA Astrophysics Data System (ADS)

    Akamatsu, Hideki; Yamada, Kai; Unno, Noriyuki; Yoshida, Shuhei; Taniguchi, Jun; Yamamoto, Manabu

    2013-03-01

    Research and development of holographic data storage (HDS) is advancing as one of the next-generation high-speed, mass storage systems. Recently, with the development of write-once systems using photopolymer media, large-capacity ROM-type HDS that can replace conventional optical discs has become important. In this study, we develop a ROM-type HDS using a diffractive optical element (DOE) and verify the effectiveness of our approach. To design the DOE, an iterative Fourier transform algorithm was adopted, and the DOE was fabricated with electron beam (EB) cutting and nanoimprint lithography. We optimize the phase distribution of the hologram with the iterative Fourier transform algorithm known as the Gerchberg-Saxton (GS) algorithm, combined with the angular spectrum method. In the fabrication process, the phase distribution of the hologram is realized as a relief (concavity and convexity) structure by EB cutting and transferred by nanoimprint lithography. The mold is formed with multiple-stage relief in order to obtain high diffraction efficiency and signal-to-noise ratio (SNR). A fabricated prototype DOE is evaluated experimentally.
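
    The GS iteration itself is short: alternate between the hologram plane (unit amplitude, free phase) and the Fourier plane (target amplitude, free phase). The sketch below uses a plain FFT as the propagator instead of the angular spectrum method, and the target pattern and grid size are invented for illustration.

```python
import numpy as np
rng = np.random.default_rng(7)

# Gerchberg-Saxton sketch for a phase-only hologram (DOE) design.
def gerchberg_saxton(target_amp, iters=200):
    phase = 2 * np.pi * rng.random(target_amp.shape)   # random initial phase
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))          # propagate to far field
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                       # propagate back
        phase = np.angle(near)                         # keep phase only
    return phase

n = 16
target = np.zeros((n, n)); target[4:12, 4:12] = 1.0    # desired intensity spot
target *= n * n / np.linalg.norm(target)               # match total energy
phase = gerchberg_saxton(target)
achieved = np.abs(np.fft.fft2(np.exp(1j * phase)))
err = np.linalg.norm(achieved - target) / np.linalg.norm(target)
print(f"relative amplitude error: {err:.2f}")
```

    In fabrication, the retrieved phase map would then be quantized to the multiple-stage relief depths realized by EB cutting, which is where the number of levels trades off against diffraction efficiency and SNR.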

  13. NASIS data base management system: IBM 360 TSS implementation. Volume 4: Program design specifications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design specifications for the programs and modules within the NASA Aerospace Safety Information System (NASIS) are presented. The purpose of the design specifications is to standardize the preparation of the specifications and to guide the program design. Each major functional module within the system is a separate entity for documentation purposes. The design specifications contain a description of, and specifications for, all detail processing which occurs in the module. Sub-modules, reference tables, and data sets which are common to several modules are documented separately.

  14. NASIS data base management system - IBM 360/370 OS MVT implementation. 4: Program design specifications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design specifications for the programs and modules within the NASA Aerospace Safety Information System (NASIS) are presented. The purpose of the design specifications is to standardize the preparation of the specifications and to guide the program design. Each major functional module within the system is a separate entity for documentation purposes. The design specifications contain a description of, and specifications for, all detail processing which occurs in the module. Sub-modules, reference tables, and data sets which are common to several modules are documented separately.

  15. Septa design for a prostate specific PET camera

    SciTech Connect

    Qi, Jinyi; Huber, Jennifer S.; Huesman, Ronald H.; Moses, William W.; Derenzo, Stephen E.; Budinger, Thomas F.

    2003-11-15

    The recent development of new prostate tracers has motivated us to build a low cost PET camera optimized to image the prostate. Coincidence imaging of positron emitters is achieved using a pair of external curved detector banks. The bottom bank is fixed below the patient bed, and the top bank moves upward for patient access and downward for maximum sensitivity. In this paper, we study the design of septa for the prostate camera using Monte Carlo simulations. The system performance is measured by the detectability of a prostate lesion. We have studied 17 septa configurations. The results show that the design of septa has a large impact on the lesion detection at a given activity concentration. Significant differences are also observed between the lesion detectability and the conventional noise equivalent count (NEC) performance, indicating that the NEC is not appropriate for the detection task.

  16. A review of design issues specific to hypersonic flight vehicles

    NASA Astrophysics Data System (ADS)

    Sziroczak, D.; Smith, H.

    2016-07-01

    This paper provides an overview of the current technical issues and challenges associated with the design of hypersonic vehicles. Two distinct classes of vehicles are reviewed: hypersonic transports and space launchers; their common features and differences are examined. After a brief historical overview, the paper takes a multi-disciplinary approach to these vehicles and discusses various design aspects and technical challenges. Operational issues are explored, including mission profiles, current and predicted markets, environmental effects and human factors. Technological issues are also reviewed, focusing on the three major challenge areas associated with these vehicles: aerothermodynamics, propulsion, and structures. Matters of reliability and maintainability are presented as well. The paper also reviews the certification and flight testing of these vehicles from a global perspective. Finally, the current stakeholders in the field of hypersonic flight are presented, with a summary of active programs and promising concepts.

  17. Multi-Stage Hybrid Rocket Conceptual Design for Micro-Satellites Launch using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yosuke; Kitagawa, Koki; Nakamiya, Masaki; Kanazaki, Masahiro; Shimada, Toru

    The multi-objective genetic algorithm (MOGA) is applied to the multi-disciplinary conceptual design problem for a three-stage launch vehicle (LV) with a hybrid rocket engine (HRE). MOGA is an optimization tool for multi-objective problems. The parallel coordinate plot (PCP), a data mining method, is employed for design knowledge discovery in post-processing of the MOGA results. A rocket that can deliver observation micro-satellites to sun-synchronous orbit (SSO) is designed. It consists of an oxidizer tank containing liquid oxidizer, a combustion chamber containing solid fuel, a pressurizing tank and a nozzle. The objectives considered in this study are to minimize the total mass of the rocket and to maximize the ratio of payload mass to total mass. To calculate the thrust and the engine size, the regression rate is estimated based on an empirical model for a paraffin (FT-0070) propellant. Several non-dominated solutions are obtained using MOGA, and design knowledge is discovered for the present hybrid rocket design problem using a PCP analysis. As a result, substantial knowledge on the design of an LV with an HRE is obtained for use in space transportation.
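
    The Pareto filter at the heart of any MOGA can be sketched directly from the paper's two objectives (minimize total mass, maximize payload ratio): a design survives only if no other design is at least as good in both objectives and strictly better in one. The candidate values below are invented for illustration.

```python
# Pareto-dominance sketch for the two objectives of the LV design problem.
def dominates(a, b):
    """a = (total_mass, payload_ratio); lower mass and higher ratio are better."""
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    strictly_better = a[0] < b[0] or a[1] > b[1]
    return no_worse and strictly_better

def pareto_front(designs):
    """Keep the non-dominated designs (the MOGA's surviving front)."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

designs = [(120.0, 0.020), (110.0, 0.015), (130.0, 0.025),
           (115.0, 0.010), (125.0, 0.024)]
print(pareto_front(designs))
```

    The non-dominated set returned here is what a PCP analysis would then visualize, one axis per design variable and objective, to mine design knowledge from the front.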

  18. Robust nonlinear dynamic inversion flight control design using structured singular value synthesis based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ying, Sibin; Ai, Jianliang; Luo, Changhang; Wang, Peng

    2006-11-01

    Non-linear Dynamic Inversion (NDI) is a control law design technique based on feedback linearization that achieves desired dynamic response characteristics. NDI requires an ideal, precise model; in practice there are always errors due to modeling inaccuracies or actuator faults, so a control law designed by NDI alone has limited robustness. Combining NDI with the structured singular value (μ) synthesis method can improve the system's robustness notably. However, a controller designed by μ synthesis is of high order, and its order must be reduced for computation. This paper presents a new method for robust flight control design that uses structured singular value μ synthesis based on a genetic algorithm. A controller designed by this method has a clearly lower order than one produced by the normal μ synthesis procedure, and is therefore easier to apply. The presented method is applied to robust controller design for a supermaneuverable fighter. The simulation results show that the dynamic inversion control law achieves a high level of performance in post-stall maneuver conditions, and that the whole control system has excellent robustness and disturbance rejection.

  19. 14 CFR 91.705 - Operations within airspace designated as Minimum Navigation Performance Specification Airspace.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Minimum Navigation Performance Specification Airspace. 91.705 Section 91.705 Aeronautics and Space FEDERAL... Operations within airspace designated as Minimum Navigation Performance Specification Airspace. (a) Except as... airspace designated as Minimum Navigation Performance Specifications airspace unless— (1) The aircraft...

  20. Effect of Selection of Design Parameters on the Optimization of a Horizontal Axis Wind Turbine via Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Alpman, Emre

    2014-06-01

    The effect of selecting the twist angle and chord length distributions on wind turbine blade design was investigated by performing aerodynamic optimization of a two-bladed, stall-regulated horizontal axis wind turbine. Twist angle and chord length distributions were defined using Bezier curves with 3, 5, 7 and 9 control points uniformly distributed along the span. Optimizations performed using a micro-genetic algorithm with populations of 5, 10, 15 and 20 individuals showed that the number of control points clearly affected the outcome of the process; however, the effects differed with population size. The results also showed the superiority of the micro-genetic algorithm over a standard genetic algorithm for the selected population sizes. Optimizations were also performed using a macroevolutionary algorithm, and the resulting best blade design was compared with that yielded by the micro-genetic algorithm.
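
    The Bezier parametrization of the blade can be sketched with de Casteljau's evaluation algorithm: the optimizer tunes a handful of control points, and the full spanwise twist distribution follows from the curve. The 3-point control polygon below is hypothetical.

```python
# Sketch of a Bezier-parametrized twist distribution, evaluated with de
# Casteljau's algorithm. The control-point values are hypothetical; in the
# paper the genetic algorithm would tune them.
def de_casteljau(control, t):
    pts = list(control)
    while len(pts) > 1:
        pts = [(1 - t) * p + t * q for p, q in zip(pts, pts[1:])]
    return pts[0]

def twist_distribution(control, n_stations=5):
    """Twist angle (deg) at uniformly spaced spanwise stations."""
    return [de_casteljau(control, i / (n_stations - 1)) for i in range(n_stations)]

control = [20.0, 8.0, 3.0]          # 3 control points, root to tip (deg)
print(twist_distribution(control))  # [20.0, 14.4375, 9.75, 5.9375, 3.0]
```

    Adding control points (5, 7, 9) enlarges the search space and the achievable shapes, which is exactly the design-parameter trade-off the study quantifies against population size.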

  1. 78 FR 52804 - mPower\\TM\\ Design-Specific Review Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-26

    ... revision of the Standard Review Plan. C. Re-Opening of Comment Period On May 14, 2013 (78 FR 28258), the... COMMISSION mPower\\TM\\ Design-Specific Review Standard AGENCY: Nuclear Regulatory Commission. ACTION: Design-Specific Review Standard (DSRS) for the mPower\\TM\\ Design; re-opening of comment period. SUMMARY: On May...

  2. 48 CFR 2052.227-70 - Drawings, designs, specifications, and other data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Drawings, designs... Clauses 2052.227-70 Drawings, designs, specifications, and other data. As prescribed at 2027.305-70, the..., designs, specifications, and other data will be developed and the NRC must retain full rights to...

  3. Design of an automated algorithm for labeling cardiac blood pool in gated SPECT images of radiolabeled red blood cells

    SciTech Connect

    Hebert, T.J. |; Moore, W.H.; Dhekne, R.D.; Ford, P.V.; Wendt, J.A.; Murphy, P.H.; Ting, Y.

    1996-08-01

    The design of an automated computer algorithm for labeling the cardiac blood pool within gated 3-D reconstructions of radiolabeled red blood cells is investigated. Due to patient functional abnormalities, limited resolution, and noise, certain spatial and temporal features of the cardiac blood pool that one would anticipate finding in every study are absent in certain frames or certain patients. Labeling the cardiac blood pool therefore requires an algorithm that relies only upon features present in all patients. The authors investigate the design of a fully automated region-growing algorithm for this purpose.
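    A minimal sketch of the region-growing idea on a 3-D volume, assuming an intensity threshold and 6-connectivity; the paper's actual growth criteria for the cardiac blood pool are more involved:

    ```python
    from collections import deque

    def region_grow(volume, seed, threshold):
        """Grow a region from `seed`, adding 6-connected voxels whose
        intensity is at or above `threshold`."""
        nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
        region, queue = set(), deque([seed])
        while queue:
            z, y, x = queue.popleft()
            if (z, y, x) in region or not (0 <= z < nz and 0 <= y < ny and 0 <= x < nx):
                continue
            if volume[z][y][x] < threshold:
                continue
            region.add((z, y, x))
            for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                queue.append((z + dz, y + dy, x + dx))
        return region
    ```

    An automated version must also choose the seed and threshold from features present in every study, which is the hard part the abstract emphasizes.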

  4. Restraint system design and evaluation for military specific applications

    NASA Astrophysics Data System (ADS)

    Karwaczynski, Sebastian

    This research focuses on designing an optimal restraint system for use in military vehicle applications. The designed restraint system must accommodate a wide range of DHMs and ATDs with and without PPE such as helmets, boots, and body armor. The evaluation of the restraint systems was conducted in a simulated vehicle environment, which was used to downselect the ideal restraint system for this program. In December 2011, the OCP TECD program was formulated to increase occupant protection. To do this, 3D computer models were created to accommodate the entire Soldier population in the Army. These models included the full PPE and were later used for space-claim activities and for designing new seats and restraints to accommodate them. Additionally, guidelines were created to increase protection levels while providing optimal comfort to the Soldier. The current and emerging threats at the time of the program's inception were evaluated. Throughout the program, various restraint-downselection activities were conducted, including Soldier evaluations of various restraint system configurations. The Soldiers were given an opportunity to evaluate each system in a representative seat, which allowed them to position themselves in a manner consistent with mission requirements. Systems ranged from fully automated to manual-adjustment types. Each system was evaluated and analyzed against the others. It was discovered that restraint systems using retractors allowed automatic webbing stowage and easier, more repeatable donning and doffing of the restraint. It was also found that when an aid was introduced to help the Soldier don the restraint, such a system was more likely to be used. Restraints were evaluated in drop tower experiments in addition to actual blast tests. An evaluation with this amount of detail had not been attempted

  5. Advanced Free Flight Planner and Dispatcher's Workstation: Preliminary Design Specification

    NASA Technical Reports Server (NTRS)

    Wilson, J.; Wright, C.; Couluris, G. J.

    1997-01-01

    The National Aeronautics and Space Administration (NASA) has implemented the Advanced Air Transportation Technology (AATT) program to investigate future improvements to the national and international air traffic management systems. This research, as part of the AATT program, developed preliminary design requirements for an advanced Airline Operations Control (AOC) dispatcher's workstation, with emphasis on flight planning. This design will support the implementation of an experimental workstation in NASA laboratories that would emulate AOC dispatch operations. The work developed an airline flight plan data base and specified requirements for: a computer tool for generation and evaluation of free flight, user preferred trajectories (UPT); the kernel of an advanced flight planning system to be incorporated into the UPT-generation tool; and an AOC workstation to house the UPT-generation tool and to provide a real-time testing environment. A prototype for the advanced flight plan optimization kernel was developed and demonstrated. The flight planner uses dynamic programming to search a four-dimensional wind and temperature grid to identify the optimal route, altitude and speed for successive segments of a flight. An iterative process is employed in which a series of trajectories is successively refined until the UPT is identified. The flight planner is designed to function in the current operational environment as well as in free flight. The free flight environment would enable greater flexibility in UPT selection based on alleviation of current procedural constraints. The prototype also takes advantage of advanced computer processing capabilities to implement more powerful optimization routines than would be possible with older computer systems.
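    The dynamic-programming search can be sketched as follows, here reduced to choosing an altitude index per flight segment. The cost function, grid size, and free choice of starting altitude are illustrative assumptions, not the planner's actual model:

    ```python
    def optimal_profile(seg_cost, n_alts, n_segs):
        """Dynamic programming over flight segments.
        seg_cost(seg, alt_from, alt_to) -> cost of flying segment `seg`
        while moving from altitude index alt_from to alt_to.
        Returns (total cost, altitude profile)."""
        # Best (cost, path) for ending each stage at a given altitude;
        # here any starting altitude is allowed at zero cost.
        best = {a: (0.0, [a]) for a in range(n_alts)}
        for seg in range(n_segs):
            nxt = {}
            for a_to in range(n_alts):
                cands = [(c + seg_cost(seg, a_from, a_to), path + [a_to])
                         for a_from, (c, path) in best.items()]
                nxt[a_to] = min(cands, key=lambda p: p[0])
            best = nxt
        return min(best.values(), key=lambda p: p[0])
    ```

    In the real planner the cost per segment would come from winds, temperature, and speed choices on the 4-D grid; the recursion itself is unchanged.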

  6. Home Care Nursing via Computer Networks: Justification and Design Specifications

    PubMed Central

    Brennan, Patricia Flatley

    1988-01-01

    High-tech home care includes the use of information technologies, such as computer networks, to provide direct care to patients in the home. This paper presents the justification and design of a project using a free, public access computer network to deliver home care nursing. The intervention attempts to reduce isolation and improve problem solving among home care patients and their informal caregivers. Three modules comprise the intervention: a decision module, a communications module, and an information data base. This paper describes the experimental evaluation of the project, and discusses issues in the delivery of nursing care via computers.

  7. Selecting training and test images for optimized anomaly detection algorithms in hyperspectral imagery through robust parameter design

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2011-06-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques have been applied to some of these algorithms in an attempt to choose robust settings capable of operating consistently across a large variety of image scenes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels, and the number of clusters. This paper describes a method for selecting hyperspectral image training and test subsets that yield consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but they still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. Several mathematical models representing the value of a training and test set, based on measures such as the D-optimal score and various distance norms, are tested in a simulation experiment.
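    One of the selection criteria above, maximizing the average distance between image noise characteristics, can be sketched by exhaustive subset enumeration (feasible only for small image pools). The feature vectors here are hypothetical:

    ```python
    from itertools import combinations
    from math import dist

    def best_subset(features, k):
        """Pick the k-image subset whose noise-characteristic vectors
        (e.g. Fisher score, target-pixel ratio, cluster count) have the
        largest mean pairwise Euclidean distance."""
        def mean_dist(idx):
            pairs = list(combinations(idx, 2))
            return sum(dist(features[i], features[j]) for i, j in pairs) / len(pairs)
        return max(combinations(range(len(features)), k), key=mean_dist)
    ```

    For realistic pool sizes a greedy or exchange heuristic would replace the exhaustive enumeration, but the scoring function is the same.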

  8. Optimal design of viscous damper connectors for adjacent structures using genetic algorithm and Nelder-Mead algorithm

    NASA Astrophysics Data System (ADS)

    Bigdeli, Kasra; Hare, Warren; Tesfamariam, Solomon

    2012-04-01

    Passive dampers can be used to connect two adjacent structures in order to mitigate earthquake-induced pounding damage. Theoretical and experimental studies have confirmed the efficiency and applicability of various connecting devices, such as viscous dampers and MR dampers. However, few papers have employed optimization methods to find the optimal mechanical properties of the dampers, and in most papers the dampers are assumed to be uniform. In this study, we optimize the damping coefficients of viscous dampers in the general case of non-uniform damping coefficients. Since the derivatives of the objective function with respect to the damping coefficients are not known, a heuristic search method, the genetic algorithm, is employed to optimize the damping coefficients. Each structure is modeled as a multi-degree-of-freedom dynamic system consisting of lumped masses, linear springs, and dampers. In order to examine the dynamic behavior of the structures, simulations are carried out in the frequency domain. A pseudo-excitation based on the Kanai-Tajimi spectrum is used as the ground acceleration. The optimization results show that relaxing the uniform damping coefficient assumption generates significant improvement in coupling effectiveness. To investigate the efficiency of the genetic algorithm, its solution quality and solution time are compared with those of the Nelder-Mead algorithm.
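    A minimal frequency-domain model of the kind described: two single-degree-of-freedom structures coupled by one viscous damper, excited by a unit harmonic force on the first structure. The parameter values are illustrative, and structural damping within each building is neglected:

    ```python
    def response(m1, k1, m2, k2, c, w):
        """Steady-state displacement amplitudes |X1|, |X2| at frequency w for
        two undamped SDOF structures coupled by a viscous damper c, with a
        unit harmonic force on structure 1. From the coupled equations
        (-w^2 m + k + i w c) X - i w c X_other = F in the frequency domain."""
        a11 = -w**2 * m1 + k1 + 1j * w * c
        a22 = -w**2 * m2 + k2 + 1j * w * c
        a12 = -1j * w * c
        det = a11 * a22 - a12 * a12
        x1 = a22 / det           # unit force on structure 1, none on structure 2
        x2 = -a12 / det
        return abs(x1), abs(x2)
    ```

    Near the first structure's resonance, increasing the damper coefficient c reduces |X1|; an optimizer sweeps c (or a vector of non-uniform coefficients, in the multi-storey case) against a response-based objective built from such transfer functions.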

  9. Application of a Modified Garbage Code Algorithm to Estimate Cause-Specific Mortality and Years of Life Lost in Korea

    PubMed Central

    2016-01-01

    Years of life lost (YLLs) are estimated from mortality and cause of death (CoD); therefore, it is necessary to calculate CoD accurately to estimate the burden of disease. The garbage code algorithm was developed by the Global Burden of Disease (GBD) Study to redistribute inaccurate CoD and enhance the validity of CoD estimation. This study aimed to estimate cause-specific mortality rates and YLLs in Korea by applying a modified garbage code algorithm. CoD data for 2010–2012 were used to calculate the number of deaths. The garbage code algorithm was then applied to calculate target causes (i.e., valid CoD) and adjusted CoD using the garbage code redistribution. The results showed that garbage code deaths accounted for approximately 25% of all CoD during 2010–2012. In 2012, lung cancer contributed the most to cause-specific death according to Statistics Korea. However, when CoD was adjusted using the garbage code redistribution, ischemic heart disease was the most common CoD. Furthermore, before garbage code redistribution, self-harm contributed the most YLLs, followed by lung cancer and liver cancer; after redistribution, self-harm remained the leading cause of YLLs, but it was followed by ischemic heart disease and lung cancer. Our results show that garbage code deaths accounted for a substantial portion of mortality and YLLs. These results may enhance our knowledge of the burden of disease and help prioritize interventions by changing the relative importance of the causes of the burden of disease. PMID:27775249
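    The redistribution step can be sketched as a proportional reallocation of garbage-coded deaths onto valid target causes. The counts and weights below are hypothetical, but they reproduce the qualitative effect reported above: the leading cause can change after redistribution:

    ```python
    def redistribute(deaths, garbage_codes, weights):
        """Move deaths from garbage codes to target causes, proportionally to
        weights[garbage_code][target_cause] (weights for each garbage code sum to 1)."""
        adjusted = {c: float(n) for c, n in deaths.items() if c not in garbage_codes}
        for g in garbage_codes:
            total = deaths.get(g, 0)
            for target, w in weights[g].items():
                adjusted[target] = adjusted.get(target, 0.0) + total * w
        return adjusted

    deaths = {"ill-defined": 100, "IHD": 300, "lung cancer": 320}
    weights = {"ill-defined": {"IHD": 0.7, "lung cancer": 0.3}}
    adjusted = redistribute(deaths, {"ill-defined"}, weights)
    # Before: lung cancer (320) leads IHD (300).
    # After:  IHD (370) leads lung cancer (350) -- the ranking flips.
    ```

    Total deaths are conserved; only their attribution changes, which is why redistribution can reorder the leading causes of death and of YLLs.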

  10. A candidate-set-free algorithm for generating D-optimal split-plot designs

    PubMed Central

    Jones, Bradley; Goos, Peter

    2007-01-01

    We introduce a new method for generating optimal split-plot designs. These designs are optimal in the sense that they are efficient for estimating the fixed effects of the statistical model that is appropriate given the split-plot design structure. One advantage of the method is that it does not require the prior specification of a candidate set. This makes the production of split-plot designs computationally feasible in situations where the candidate set is too large to be tractable. The method allows for flexible choice of the sample size and supports inclusion of both continuous and categorical factors. The model can be any linear regression model and may include arbitrary polynomial terms in the continuous factors and interaction terms of any order. We demonstrate the usefulness of this flexibility with a 100-run polypropylene experiment involving 11 factors where we found a design that is substantially more efficient than designs that are produced by using other approaches. PMID:21197132
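    A sketch of the D-optimality criterion that drives such algorithms: a design matrix X is scored by |X'X|, the determinant of its information matrix, and candidate designs are compared on that score. The toy designs below are ordinary two-factor examples, not the paper's split-plot constructions:

    ```python
    def det(m):
        """Determinant by Gaussian elimination with partial pivoting."""
        m = [row[:] for row in m]
        n, d = len(m), 1.0
        for i in range(n):
            p = max(range(i, n), key=lambda r: abs(m[r][i]))
            if abs(m[p][i]) < 1e-12:
                return 0.0
            if p != i:
                m[i], m[p] = m[p], m[i]
                d = -d
            d *= m[i][i]
            for r in range(i + 1, n):
                f = m[r][i] / m[i][i]
                m[r] = [a - f * b for a, b in zip(m[r], m[i])]
        return d

    def d_score(X):
        """D-optimality score |X'X| of a design matrix X (rows = runs)."""
        k = len(X[0])
        XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
        return det(XtX)

    # A balanced 4-run factorial design (columns: intercept, x1, x2)
    # beats a lopsided alternative with a duplicated run.
    good = [[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]]
    poor = [[1, -1, -1], [1, -1, -1], [1, 1, -1], [1, 1, 1]]
    ```

    A coordinate-exchange search, as used by candidate-set-free algorithms, repeatedly perturbs one coordinate of one run and keeps the change whenever this score improves; for split-plot designs the information matrix is additionally adjusted for the whole-plot error structure.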

  11. Patient non-specific algorithm for seizures detection in scalp EEG.

    PubMed

    Orosco, Lorena; Correa, Agustina Garcés; Diez, Pablo; Laciar, Eric

    2016-04-01

    Epilepsy is a brain disorder that affects about 1% of the world's population. Seizure detection is an important component of both the diagnosis of epilepsy and seizure control. In this work, a patient non-specific strategy for seizure detection based on the Stationary Wavelet Transform of EEG signals is developed. A new set of features is proposed based on an averaging process. The seizure detection consists of finding the EEG segments with seizures and their onset and offset points. The proposed offline method was tested on scalp EEG records of 24-48 h duration from 18 epileptic patients. The method reached mean values of 99.9% specificity, 87.5% sensitivity, and a false positive rate of 0.9 per hour.
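    A minimal sketch of the ingredients named above: a level-1 undecimated (stationary) Haar decomposition and an averaged band-energy feature computed over windows of the detail band. The circular boundary handling, window length, and Haar wavelet choice are simplifying assumptions, not the paper's configuration:

    ```python
    from math import sqrt

    def haar_swt(x):
        """Level-1 undecimated Haar transform: approximation and detail
        sequences with the same length as x (circular boundary)."""
        n = len(x)
        approx = [(x[i] + x[(i + 1) % n]) / sqrt(2) for i in range(n)]
        detail = [(x[i] - x[(i + 1) % n]) / sqrt(2) for i in range(n)]
        return approx, detail

    def mean_band_energy(x, win):
        """Average detail-band energy over consecutive windows of length `win`;
        seizure segments tend to show elevated energy in selected bands."""
        _, d = haar_swt(x)
        wins = [d[i:i + win] for i in range(0, len(d) - win + 1, win)]
        return [sum(v * v for v in w) / win for w in wins]
    ```

    A detector would threshold such averaged features (across several decomposition levels and channels) to mark candidate seizure segments and their onset/offset points.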

  12. Application of genetic algorithms to the optimization design of electron optical system

    NASA Astrophysics Data System (ADS)

    Gu, Changxin; Wu, M. Q.; Shan, Liying; Lin, G.

    2001-12-01

    Optimization design methods such as the Simplex method and the Powell method can determine the final optimum structure and electric parameters of an electron optical system from given electron optical properties, but they may become trapped in local optima during the search process. The Genetic Algorithm (GA) is a direct search optimization method based on the principles of natural selection and survival of the fittest in natural evolution. Through an iterative process of reproduction, crossover, and mutation, a GA can search for the global optimum. We applied GAs to optimize an electron emission system and an extended field lens (EFL), respectively. The optimal structures and corresponding electrical parameters, found by minimizing an objective function (crossover radius for the electron emission system and spherical aberration coefficient for the EFL), are presented in this paper. As a direct, adaptive search technique, the GA has significant advantages in the optimization design of electron optical systems.
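    The reproduction-crossover-mutation loop described above can be sketched generically; here the objective is a stand-in sphere function, since a real electron-optics merit function (crossover radius or spherical aberration coefficient) would require a field solver:

    ```python
    import random

    def ga_minimize(f, dim, pop_size=20, gens=60, pm=0.1, seed=1):
        """Elitist GA: truncation selection, one-point crossover, Gaussian mutation."""
        rng = random.Random(seed)
        pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=f)
            parents = pop[:pop_size // 2]          # selection: keep the fittest half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, dim) if dim > 1 else 0
                child = a[:cut] + b[cut:]          # one-point crossover
                if rng.random() < pm:              # mutation
                    child[rng.randrange(dim)] += rng.gauss(0, 0.5)
                children.append(child)
            pop = parents + children
        return min(pop, key=f)

    best = ga_minimize(lambda v: sum(x * x for x in v), dim=3)
    ```

    Because the population explores many regions at once, the GA is less prone than Simplex- or Powell-style local searches to stalling in a local optimum, at the price of many more objective evaluations.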

  13. Application of Genetic Algorithm to the Design Optimization of Complex Energy Saving Glass Coating Structure

    NASA Astrophysics Data System (ADS)

    Johar, F. M.; Azmin, F. A.; Shibghatullah, A. S.; Suaidi, M. K.; Ahmad, B. H.; Abd Aziz, M. Z. A.; Salleh, S. N.; Shukor, M. Md

    2014-04-01

    Attenuation of GSM, GPS, and personal communication signals leads to poor communication inside buildings that use regular shapes of energy saving glass coating, so transmission is very low. A new type of band-pass frequency selective surface (FSS) for energy saving glass applications is presented in this paper for one unit cell. A numerical periodic Method of Moments approach from a previous study is applied to determine the new optimum design of the one-unit-cell energy saving glass coating structure. An optimization technique based on the Genetic Algorithm (GA) is used to obtain improvements in return loss and transmitted signal. The unit cell of the FSS is designed and simulated using the CST Microwave Studio software at the industrial, scientific, and medical (ISM) bands. A unique, irregular shape of energy saving glass coating structure is obtained with lower return loss and an improved transmission coefficient.

  14. Optimal Design of Wind-PV-Diesel-Battery System using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Suryoatmojo, Heri; Hiyama, Takashi; Elbaset, Adel A.; Ashari, Mochamad

    The application of diesel generators to supply the load demand on isolated islands in Indonesia has spread widely. With increases in oil prices and concerns about global warming, the integration of diesel generators with renewable energy systems has become an attractive energy source for supplying the load demand. This paper performs an optimal design of an integrated Wind-PV-Diesel-Battery system for an isolated island, with CO2 emission evaluation, by using a genetic algorithm. The proposed system has been designed for hybrid power generation in East Nusa Tenggara, Indonesia (latitude 09.30S, longitude 122.0E). Simulation results show that the proposed system is able to minimize the total annual cost of the system under study and reduce the CO2 emissions generated by diesel generators.

  15. Fuzzy rule base design using tabu search algorithm for nonlinear system modeling.

    PubMed

    Bagis, Aytekin

    2008-01-01

    This paper presents an approach to fuzzy rule base design using the tabu search algorithm (TSA) for nonlinear system modeling. The TSA is used to evolve the structure and parameters of the fuzzy rule base. The use of the TSA, in conjunction with a systematic neighbourhood structure for determining the fuzzy rule base parameters, leads to a significant improvement in the performance of the model. To demonstrate the effectiveness of the presented method, several numerical examples from the literature are examined. The results obtained with the identified fuzzy rule bases are compared with those of other modeling approaches in the literature. The simulation results indicate that the TSA-based method provides a very effective modeling procedure for fuzzy rule base design in the modeling of nonlinear or complex systems. PMID:17945233
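    A sketch of the TSA mechanics described: greedy moves within a neighbourhood, with a fixed-tenure tabu list preventing immediate revisits. The integer search space and quadratic objective are stand-ins for the fuzzy-rule-base parameters and model error:

    ```python
    from collections import deque

    def tabu_search(f, start, neighbours, iters=100, tenure=7):
        """Minimize f by moving to the best non-tabu neighbour each iteration."""
        current = best = start
        tabu = deque(maxlen=tenure)      # recently visited solutions
        for _ in range(iters):
            cands = [n for n in neighbours(current) if n not in tabu]
            if not cands:
                break
            current = min(cands, key=f)  # accept best neighbour, even if worse
            tabu.append(current)
            if f(current) < f(best):
                best = current
        return best

    # Minimize a 1-D quadratic over the integers with +/-1 neighbourhood moves.
    f = lambda x: (x - 7) ** 2
    best = tabu_search(f, start=0, neighbours=lambda x: (x - 1, x + 1))
    ```

    Accepting the best non-tabu move even when it worsens the objective is what lets the TSA climb out of local minima that trap pure descent methods.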

  16. Application of multiple imputation using the two-fold fully conditional specification algorithm in longitudinal clinical data

    PubMed Central

    Welch, Catherine; Bartlett, Jonathan; Petersen, Irene

    2014-01-01

    Electronic health records of longitudinal clinical data are a valuable resource for health care research. One obstacle to using databases of health records in epidemiological analyses is that general practitioners mainly record data if they are clinically relevant. We can use existing methods to handle missing data, such as multiple imputation (MI), if we treat the unavailability of measurements as a missing-data problem. Most software implementations of MI do not take account of the longitudinal and dynamic structure of the data and are difficult to implement in large databases with millions of individuals and long follow-up. Nevalainen, Kenward, and Virtanen (2009, Statistics in Medicine 28: 3657–3669) proposed the two-fold fully conditional specification algorithm to impute missing data in longitudinal data. It imputes missing values at a given time point, conditional on information at the same time point and immediately adjacent time points. In this article, we describe a new command, twofold, that implements the two-fold fully conditional specification algorithm. It is extended to accommodate MI of longitudinal clinical records in large databases. PMID:25420071
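    The two-fold idea, imputing a missing value conditional on the immediately adjacent time points, can be caricatured with a simple averaging rule. This is a deliberate simplification: the actual algorithm fits proper imputation models at each time point and draws multiple imputations, rather than averaging neighbours:

    ```python
    def impute_two_fold(series, passes=2):
        """series: list of floats with None for missing values.
        Each pass sweeps the time points and fills a missing value from the
        observed (or previously filled) adjacent time points."""
        x = list(series)
        for _ in range(passes):
            for t, v in enumerate(x):
                if v is None:
                    window = [x[s] for s in (t - 1, t + 1)
                              if 0 <= s < len(x) and x[s] is not None]
                    if window:
                        x[t] = sum(window) / len(window)
        return x

    filled = impute_two_fold([1.0, None, 3.0, None, None, 6.0])
    ```

    Restricting each imputation to adjacent time points is what keeps the method tractable on databases with millions of individuals and long follow-up.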

  17. System design and image processing algorithms for frequency domain optical coherence tomography in the coronary arteries

    NASA Astrophysics Data System (ADS)

    Adler, Desmond C.; Xu, Chenyang; Petersen, Christopher; Schmitt, Joseph M.

    2010-02-01

    We report on the design of a frequency domain optical coherence tomography (FD-OCT) system, fiber optic imaging catheter, and image processing algorithms for in vivo clinical use in the human coronary arteries. This technology represents the third generation of commercially-available OCT system developed at LightLab Imaging Inc. over the last ten years, enabling three-dimensional (3D) intravascular imaging at unprecedented speeds and resolutions for a commercial system. The FD-OCT engine is designed around an exclusively licensed micro-cavity swept laser that was co-developed with AXSUN Technologies Ltd. The laser's unique combination of high sweep rates, broad tuning ranges, and narrow linewidth enable imaging at 50,000 axial lines/s with an axial resolution of < 16 μm in tissue. The disposable 2.7 French (0.9 mm) imaging catheter provides a spot size of < 30 μm at a working distance of 2 mm. The catheter is rotated at 100 Hz and pulled back 50 mm at 20 mm/s to conduct a high-density spiral scan in 2.5 s. Image processing algorithms have been developed to provide clinically important measurements of vessel lumen dimensions, stent malapposition, and neointimal thickness. This system has been used in over 2000 procedures since August 2007 at over 40 clinical sites, providing cardiologists with an advanced tool for 3D assessment of the coronary arteries.

  18. Designing genetic algorithm for efficient calculation of value encoding in time-lapse gravity inversion

    NASA Astrophysics Data System (ADS)

    Wahyudi, Eko Januari

    2013-09-01

    As soft computing techniques find increasing application in the oil and gas industry, the Genetic Algorithm (GA) has also contributed to geophysical inverse problems, improving both results and computational efficiency. In this paper, I show the progress of my work on inverse modeling of time-lapse gravity data using value encoding with an alphabet formulation. The alphabet formulation is designed to characterize positive density changes (+Δρ) and negative density changes (-Δρ) with respect to a reference value (0 gr/cc). The inversion, which uses discrete model parameters, is computed with a GA as the optimization algorithm. The main challenge in working with a GA is the long computational time, so the GA design steps in this paper are described through evaluations of GA operator performance. The performance of several combinations of GA operators (selection, crossover, mutation, and replacement) was tested on a synthetic model of a single-layer reservoir. Analysis of a sufficient number of samples shows that the combination SUS-MPCO-QSA/G-ND gives the most promising results. A quantitative solution with greater confidence in characterizing sharp boundaries of density-change zones was obtained by averaging over a sufficient number of model samples.

  19. Optimal design of groundwater remediation systems using a multi-objective fast harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Qiankun; Wu, Jianfeng; Sun, Xiaomin; Yang, Yun; Wu, Jichun

    2012-12-01

    A new multi-objective optimization methodology is developed, whereby a multi-objective fast harmony search (MOFHS) is coupled with a groundwater flow and transport model to search for optimal designs of groundwater remediation systems under general hydrogeological conditions. The MOFHS incorporates the niche technique into the previously improved fast harmony search and is enhanced by a Pareto solution set filter and an elite individual preservation strategy to guarantee uniformity and integrity of the Pareto front of multi-objective optimization problems. An operation library of individual fitness is also introduced to improve calculation speed. Moreover, the MOFHS is coupled with the commonly used flow and transport codes MODFLOW and MT3DMS to search for optimal designs of pump-and-treat systems, aiming at minimizing both the remediation cost and the mass remaining in the aquifer. Compared with three existing multi-objective optimization methods, namely the improved niched Pareto genetic algorithm (INPGA), the non-dominated sorting genetic algorithm II (NSGAII), and the multi-objective harmony search (MOHS), the proposed methodology demonstrates its applicability and efficiency through a two-dimensional hypothetical test problem and a three-dimensional field problem in Indiana (USA).
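    The core harmony-search update (memory consideration, pitch adjustment, random selection) can be sketched on a stand-in objective; the paper's MOFHS adds niching, a Pareto filter, elite preservation, and a coupled MODFLOW/MT3DMS simulation on top of this basic loop:

    ```python
    import random

    def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                       iters=500, seed=3):
        """Basic single-objective harmony search minimizing f over [lo, hi]^dim."""
        rng = random.Random(seed)
        memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if rng.random() < hmcr:                    # memory consideration
                    v = rng.choice(memory)[d]
                    if rng.random() < par:                 # pitch adjustment
                        v += rng.uniform(-bw, bw)
                else:                                      # random selection
                    v = rng.uniform(lo, hi)
                new.append(min(hi, max(lo, v)))
            worst = max(range(hms), key=lambda i: f(memory[i]))
            if f(new) < f(memory[worst]):                  # replace worst harmony
                memory[worst] = new
        return min(memory, key=f)

    best = harmony_search(lambda v: sum(x * x for x in v), dim=2, lo=-10, hi=10)
    ```

    In the remediation setting, each decision vector would encode pumping rates and well locations, and f would be replaced by the simulated cost and residual-mass objectives, handled via Pareto dominance rather than a single scalar.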

  20. A fast loop-closure algorithm to accelerate residue matching in computational enzyme design.

    PubMed

    Xue, Jing; Huang, Xiaoqiang; Lin, Min; Zhu, Yushan

    2016-02-01

    Constructing an active site on an inert scaffold is still a challenge in chemical biology. Herein, we describe the incorporation of a Newton-direction-based fast loop-closure algorithm for catalytic residue matching into our enzyme design program ProdaMatch. This was developed to determine the sites and geometries of the catalytic residues as well as the position of the transition state with high accuracy in order to satisfy the geometric constraints on the interactions between catalytic residues and the transition state. Loop-closure results for 64,827 initial loops derived from 21 loops in the test set showed that 99.51% of the initial loops closed to within 0.05 Å in fewer than 400 iteration steps, while the large majority of the initial loops closed within 100 iteration steps. The revised version of ProdaMatch containing the novel loop-closure algorithm identified all native matches for ten scaffolds in the native active-site recapitulation test. Its high speed and accuracy when matching catalytic residues with a scaffold make this version of ProdaMatch potentially useful for scaffold selection through the incorporation of more complex theoretical enzyme models which may yield higher initial activities in de novo enzyme design.

  1. A smoothing monotonic convergent optimal control algorithm for nuclear magnetic resonance pulse sequence design

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Salomon, Julien; Turinici, Gabriel; Nielsen, Niels Chr.

    2010-02-01

    The past decade has seen increasing interest in using optimal-control-based methods within coherently controllable quantum systems. The versatility of such methods has been demonstrated with particular elegance within nuclear magnetic resonance (NMR), where the natural separation between coherent and dissipative spin dynamics enables coherent quantum control over long periods of time, shaping the experiment to near-ideal adaptation to the spin system and external manipulations. This has led to new design principles as well as powerful new experimental methods within magnetic resonance imaging and liquid-state and solid-state NMR spectroscopy. For this development to continue and expand, it is crucially important to constantly improve the underlying numerical algorithms so that they provide solutions optimally compatible with implementation on current instrumentation while remaining numerically stable and offering fast, monotonic convergence toward the target. Addressing these aims, we here present a smoothing, monotonically convergent algorithm for pulse sequence design in magnetic resonance which, with improved optimization stability, leads to smooth pulse sequences that are easier to implement experimentally and potentially to understand within the analytical framework of modern NMR spectroscopy.

  2. A smoothing monotonic convergent optimal control algorithm for nuclear magnetic resonance pulse sequence design.

    PubMed

    Maximov, Ivan I; Salomon, Julien; Turinici, Gabriel; Nielsen, Niels Chr

    2010-02-28

    The past decade has seen increasing interest in using optimal-control-based methods within coherently controllable quantum systems. The versatility of such methods has been demonstrated with particular elegance within nuclear magnetic resonance (NMR), where the natural separation between coherent and dissipative spin dynamics enables coherent quantum control over long periods of time, shaping the experiment to near-ideal adaptation to the spin system and external manipulations. This has led to new design principles as well as powerful new experimental methods within magnetic resonance imaging and liquid-state and solid-state NMR spectroscopy. For this development to continue and expand, it is crucially important to constantly improve the underlying numerical algorithms so that they provide solutions optimally compatible with implementation on current instrumentation while remaining numerically stable and offering fast, monotonic convergence toward the target. Addressing these aims, we here present a smoothing, monotonically convergent algorithm for pulse sequence design in magnetic resonance which, with improved optimization stability, leads to smooth pulse sequences that are easier to implement experimentally and potentially to understand within the analytical framework of modern NMR spectroscopy. PMID:20192290

  3. An optimization algorithm for designing robust and simple antireflection films for organic photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Kubota, S.; Kanomata, K.; Momiyama, K.; Suzuki, T.; Hirose, F.

    2013-10-01

    We propose an optimization algorithm for designing multilayer antireflection (AR) structures for organic photovoltaic cells that are robust against variations in layer thicknesses. Given a set of available materials, the proposed method searches for the material and thickness of each AR layer that maximize the short-circuit current density (Jsc). The algorithm obtains a set of solutions, including optimal and quasi-optimal ones, at the same time, so that clear comparisons can be made between them. In addition, the effects of deviations in the thicknesses of the AR layers are examined for the (quasi-)optimal solutions obtained. The expected decrease in AR performance is estimated by calculating the changes in Jsc when the thicknesses of all AR layers are varied independently. We show that some quasi-optimal solutions may have simpler layer configurations and can be more robust against deviations in film thickness than the optimal solution. This indicates the importance of actively searching valuable, non-optimal solutions for the practical design of AR films. We also discuss the optical conditions that lead to light absorption in the back metal contact, and the effects of changing the active layer thickness.
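    The robustness evaluation described, estimating the expected drop in performance under independent thickness deviations, can be sketched by Monte Carlo sampling. The merit function below is a stand-in for the actual optical Jsc calculation, and the relative deviation range is an assumption:

    ```python
    import random

    def expected_drop(merit, thicknesses, rel_dev=0.05, samples=200, seed=7):
        """Expected decrease of `merit` when every layer thickness is varied
        independently by up to +/- rel_dev (relative)."""
        rng = random.Random(seed)
        nominal = merit(thicknesses)
        drops = []
        for _ in range(samples):
            perturbed = [t * (1 + rng.uniform(-rel_dev, rel_dev)) for t in thicknesses]
            drops.append(max(0.0, nominal - merit(perturbed)))
        return sum(drops) / samples

    # Stand-in merit: peaks when each layer hits a hypothetical target thickness.
    targets = [100.0, 55.0]        # nm, hypothetical layer thicknesses
    merit = lambda ts: -sum((t - g) ** 2 for t, g in zip(ts, targets))
    robustness_penalty = expected_drop(merit, targets)
    ```

    Comparing this penalty across the optimal and quasi-optimal solutions is what reveals that a slightly sub-optimal but flatter design can be preferable in manufacture.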

  4. GPQuest: A Spectral Library Matching Algorithm for Site-Specific Assignment of Tandem Mass Spectra to Intact N-glycopeptides.

    PubMed

    Toghi Eshghi, Shadi; Shah, Punit; Yang, Weiming; Li, Xingde; Zhang, Hui

    2015-01-01

    Glycoprotein changes occur in not only protein abundance but also the occupancy of each glycosylation site by different glycoforms during biological or pathological processes. Recent advances in mass spectrometry instrumentation and techniques have facilitated analysis of intact glycopeptides in complex biological samples by allowing the users to generate spectra of intact glycopeptides with glycans attached to each specific glycosylation site. However, assigning these spectra, leading to identification of the glycopeptides, is challenging. Here, we report an algorithm, named GPQuest, for site-specific identification of intact glycopeptides using higher-energy collisional dissociation (HCD) fragmentation of complex samples. In this algorithm, a spectral library of glycosite-containing peptides in the sample was built by analyzing the isolated glycosite-containing peptides using HCD LC-MS/MS. Spectra of intact glycopeptides were selected by using glycan oxonium ions as signature ions for glycopeptide spectra. These oxonium-ion-containing spectra were then compared with the spectral library generated from glycosite-containing peptides, resulting in assignment of each intact glycopeptide MS/MS spectrum to a specific glycosite-containing peptide. The glycan occupying each glycosite was determined by matching the mass difference between the precursor ion of intact glycopeptide and the glycosite-containing peptide to a glycan database. Using GPQuest, we analyzed LC-MS/MS spectra of protein extracts from prostate tumor LNCaP cells. Without enrichment of glycopeptides from global tryptic peptides and at a false discovery rate of 1%, 1008 glycan-containing MS/MS spectra were assigned to 769 unique intact N-linked glycopeptides, representing 344 N-linked glycosites with 57 different N-glycans. Spectral library matching using GPQuest assigns the HCD LC-MS/MS generated spectra of intact glycopeptides in an automated and high-throughput manner. Additionally, spectral library
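    The two filtering steps described can be sketched as follows; the HexNAc oxonium ion at m/z 204.0867 is a standard glycopeptide signature, while the tolerance and the glycan-database entry are illustrative assumptions:

    ```python
    OXONIUM_MZ = 204.0867   # HexNAc oxonium ion, a signature of glycopeptide spectra
    TOL = 0.02              # Da, assumed matching tolerance

    def has_oxonium(peaks):
        """peaks: list of (m/z, intensity); True if an oxonium ion is present."""
        return any(abs(mz - OXONIUM_MZ) <= TOL for mz, _ in peaks)

    def assign_glycan(precursor_mass, peptide_mass, glycan_db):
        """Match the precursor/peptide mass difference against glycan masses
        to identify which glycan occupies the glycosite."""
        delta = precursor_mass - peptide_mass
        for name, mass in glycan_db.items():
            if abs(delta - mass) <= TOL:
                return name
        return None

    glycans = {"HexNAc2Hex5": 1216.4229}   # illustrative database entry
    hit = assign_glycan(2500.0, 1283.5771, glycans)
    ```

    In the full workflow, spectra passing the oxonium filter are first matched to the glycosite-containing-peptide spectral library, and only then is the residual precursor-mass difference resolved against the glycan database as above.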

  5. Rational Design of CXCR4 Specific Antibodies with Elongated CDRs

    PubMed Central

    2015-01-01

    The bovine antibody (BLV1H12) which has an ultralong heavy chain complementarity determining region 3 (CDRH3) provides a novel scaffold for antibody engineering. By substituting the extended CDRH3 of BLV1H12 with modified CXCR4 binding peptides that adopt a β-hairpin conformation, we generated antibodies specifically targeting the ligand binding pocket of CXCR4 receptor. These engineered antibodies selectively bind to CXCR4 expressing cells with binding affinities in the low nanomolar range. In addition, they inhibit SDF-1-dependent signal transduction and cell migration in a transwell assay. Finally, we also demonstrate that a similar strategy can be applied to other CDRs and show that a CDRH2-peptide fusion binds CXCR4 with a Kd of 0.9 nM. This work illustrates the versatility of scaffold-based antibody engineering and could greatly expand the antibody functional repertoire in the future. PMID:25041362

  6. Multi-criteria optimal pole assignment robust controller design for uncertainty systems using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Sarjaš, Andrej; Chowdhury, Amor; Svečko, Rajko

    2016-09-01

    This paper presents the synthesis of an optimal robust controller design using the polynomial pole placement technique and a multi-criteria optimisation procedure via an evolutionary computation algorithm, differential evolution. The main idea of the design is to provide a reliable fixed-order robust controller structure and efficient closed-loop performance with a preselected nominal characteristic polynomial. The multi-criteria objective functions have quasi-convex properties that significantly improve convergence and the regularity of the optimal/sub-optimal solution. The fundamental aim of the proposed design is to optimise those quasi-convex functions with fixed closed-loop characteristic polynomials, the properties of which are unrelated and hard to present within formal algebraic frameworks. The objective functions are derived from different closed-loop criteria, such as robustness with the H∞ metric, time-performance indexes, controller structure, stability properties, etc. Finally, the design results from the example verify the efficiency of the controller design and also indicate broader possibilities for different optimisation criteria and control structures.
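
    As a sketch of the optimisation machinery named above, here is a minimal differential evolution loop (DE/rand/1/bin) applied to a stand-in quadratic objective; the paper's actual quasi-convex controller criteria are not reproduced, and all parameter values are illustrative.

```python
import random

# Minimal DE/rand/1/bin differential evolution sketch. The objective here is a
# simple quadratic stand-in, not the paper's H-infinity / time-domain criteria.

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=100, seed=1):
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct individuals other than the target
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # mutation (DE/rand/1) combined with binomial crossover
            trial = [
                a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                for d in range(dim)
            ]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                              [(-5, 5), (-5, 5)])
print([round(v, 2) for v in best])  # converges near [1.0, -2.0]
```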

  7. Heat pipe design handbook, part 2. [digital computer code specifications

    NASA Technical Reports Server (NTRS)

    Skrabek, E. A.

    1972-01-01

    The utilization of a digital computer code for heat pipe analysis and design (HPAD) is described which calculates the steady state hydrodynamic heat transport capability of a heat pipe with a particular wick configuration, the working fluid being a function of wick cross-sectional area. Heat load, orientation, operating temperature, and heat pipe geometry are specified. Both one 'g' and zero 'g' environments are considered, and, at the user's option, the code will also perform a weight analysis and will calculate heat pipe temperature drops. The central porous slab, circumferential porous wick, arterial wick, annular wick, and axial rectangular grooves are the wick configurations which HPAD has the capability of analyzing. For Vol. 1, see N74-22569.

  8. A real-coded genetic algorithm applied to optimum design of a low solidity vaned diffuser for diffuser pump

    NASA Astrophysics Data System (ADS)

    Li, Jun; Tsukamoto, Hiroshi

    2001-10-01

    A numerical procedure for the hydrodynamic redesign of a conventional vaned diffuser into a low solidity vaned diffuser by means of a real-coded genetic algorithm with Boltzmann, Tournament and Roulette Wheel selection is presented. In the first part, an investigation of the relative efficiency of the different real-coded genetic algorithms is carried out on a typical mathematical test function. The real-coded genetic algorithm with Boltzmann selection shows the best optimization performance compared to Tournament and Roulette Wheel selection. In the second part, an approach to redesigning the vaned diffuser profile is introduced. The goal of the optimum design is to find the low solidity vaned diffuser with the highest static pressure recovery coefficient. The result of the low solidity vaned diffuser optimum design confirms that the efficiency and optimization performance of the real-coded Boltzmann-selection genetic algorithm outperform those of the other selection methods. A comparison between the designed low solidity vaned diffuser and the original vaned diffuser shows that the diffuser pump with the redesigned low solidity vaned diffuser has higher static pressure recovery and improved overall hydrodynamic performance. In addition, the smaller outlet diameter of the designed vaned diffuser leads to a more compact diffuser pump compared to the original. The obtained results also demonstrate that the real-coded Boltzmann-selection genetic algorithm is a promising optimization algorithm for centrifugal pump design.
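
    The Boltzmann selection scheme that the study found superior can be sketched as follows: selection probability is proportional to exp(fitness / T), with the temperature T controlling selection pressure. The fitness values and temperature below are illustrative, not the study's.

```python
import math
import random

# Boltzmann selection sketch: individuals are drawn with probability
# proportional to exp(fitness / T). Lower T sharpens selection pressure.

def boltzmann_select(population, fitness, T, rng=random):
    weights = [math.exp(f / T) for f in fitness]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for individual, w in zip(population, weights):
        acc += w
        if r <= acc:
            return individual
    return population[-1]

random.seed(0)
pop = ["A", "B", "C"]
fit = [1.0, 2.0, 4.0]
picks = [boltzmann_select(pop, fit, T=0.5) for _ in range(1000)]
print(picks.count("C") > picks.count("A"))  # the fittest individual dominates
```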

  9. The Table of Specifications: A Tool for Instructional Design and Development.

    ERIC Educational Resources Information Center

    Dills, Charles R.

    1998-01-01

    Tables of specifications provide graphic representations of objectives and segments of instruction or test questions. Examines tables of specifications (use, significance of empty cells, traceability) and their application in instructional design, highlighting constructivism and microworlds, structural communication, computer mediated…

  10. A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Chae, Han Gil

    Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. 
The Pareto dominance that is used in SPEA, however, is not enough to account for the
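
    The Pareto dominance relation that SPEA builds on can be stated compactly (for minimisation): x dominates y iff x is no worse in every objective and strictly better in at least one. The example points are invented.

```python
# Pareto dominance and front extraction for a minimisation problem, as used by
# dominance-based MOEAs such as SPEA. Example points are illustrative only.

def dominates(x, y):
    """True iff x is no worse in all objectives and strictly better in one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # -> [(1, 5), (2, 3), (4, 1)]
```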

  11. The TOMS V9 Algorithm for OMPS Nadir Mapper Total Ozone: An Enhanced Design That Ensures Data Continuity

    NASA Astrophysics Data System (ADS)

    Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.

    2015-12-01

    The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and re-processing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 lie mainly in the information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced both by improvements in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and by limitations in the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is that it also apply to these discrete-band spectrometers, which date back to 1978. To achieve continuity for all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high-accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite the intended design to use limited wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content; for example, SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and will mention potential improvements which aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.

  12. Liquid Engine Design: Effect of Chamber Dimensions on Specific Impulse

    NASA Technical Reports Server (NTRS)

    Hoggard, Lindsay; Leahy, Joe

    2009-01-01

    Which assumption of combustion chemistry - frozen or equilibrium - should be used in the prediction of liquid rocket engine performance calculations? Can a correlation be developed for this? A literature search using the LaSSe tool, an online repository of old rocket data and reports, was completed. Test results of NTO/Aerozine-50 and LOX/LH2 subscale and full-scale injector and combustion chamber tests were found and studied for this task. The NASA code Chemical Equilibrium with Applications (CEA) was used to predict engine performance using both chemistry assumptions, defined here: frozen (composition remains frozen during expansion through the nozzle) and equilibrium (instantaneous chemical equilibrium during nozzle expansion). Chamber parameters were varied to understand which dimensions drive chamber C* and Isp. Contraction Ratio is the ratio of the chamber area to the nozzle throat area. L is the length of the chamber. Characteristic chamber length, L*, is the length that the chamber would be if it were a straight tube and had no converging nozzle. Goal: Develop a qualitative and quantitative correlation for performance parameters - Specific Impulse (Isp) and Characteristic Velocity (C*) - as a function of one or more chamber dimensions - Contraction Ratio (CR), Chamber Length (L) and/or Characteristic Chamber Length (L*). Determine if chamber dimensions can be correlated to frozen or equilibrium chemistry.

  13. Psychosocial Risks Generated By Assets Specific Design Software

    NASA Astrophysics Data System (ADS)

    Remus, Furtună; Angela, Domnariu; Petru, Lazăr

    2015-07-01

    The human activity concerning an occupation results from the interaction between psycho-biological, socio-cultural and organizational-occupational factors. Technological development, automation and computerization, found in all branches of activity, the speed at which things develop, and their growing complexity require fewer physical aptitudes and more cognitive qualifications. The person included in the work process is bound in most cases to come in line with the organizational-occupational situations that are specific to the demands of the job. The role of the programmer is essential in the process of executing commissioned software, and truly brilliant ideas can only come from well-rested minds concentrated on their tasks. The actual requirements of these jobs, besides a high number of benefits and opportunities, also create a series of psycho-social risks, which can increase the level of stress during work activity, especially for those who work under pressure.

  14. Specification and preliminary design of an array processor

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Graham, M. L.

    1975-01-01

    The design of a computer suited to the class of problems typified by the general circulation of the atmosphere was investigated. A fundamental goal was that the resulting machine should have roughly 100 times the computing capability of an IBM 360/95 computer. A second requirement was that the machine should be programmable in a higher level language similar to FORTRAN. Moreover, the new machine would have to be compatible with the IBM 360/95 since the IBM machine would continue to be used for pre- and post-processing. A third constraint was that the cost of the new machine was to be significantly less than that of other extant machines of similar computing capability, such as the ILLIAC IV and CDC STAR. A final constraint was that it should be feasible to fabricate a complete system and put it in operation by early 1978. Although these objectives were generally met, considerable work remains to be done on the routing system.

  15. Overall plant design specification Modular High Temperature Gas-cooled Reactor. Revision 9

    SciTech Connect

    1990-05-01

    Revision 9 of the "Overall Plant Design Specification Modular High Temperature Gas-Cooled Reactor," DOE-HTGR-86004 (OPDS), has been completed and is hereby distributed for use by the HTGR Program team members. This revision of the OPDS reflects those changes in the MHTGR design requirements and configuration resulting from approved Design Change Proposals DCP BNI-003 and DCP BNI-004, involving the Nuclear Island Cooling and Spent Fuel Cooling Systems respectively.

  16. Optimization of Spherical Roller Bearing Design Using Artificial Bee Colony Algorithm and Grid Search Method

    NASA Astrophysics Data System (ADS)

    Tiwari, Rajiv; Waghole, Vikas

    2015-07-01

    Bearing standards impose restrictions on the internal geometry of spherical roller bearings. Geometrical and strength constraint conditions have been formulated for the optimization of bearing design. Long fatigue life is one of the most important criteria in the optimum design of a bearing. The life is directly proportional to the dynamic capacity; hence, the objective function has been chosen as the maximization of dynamic capacity. The effects of speed and of static loads acting on the bearing are also taken into account. Design variables for the bearing include five geometrical parameters: the roller diameter, the roller length, the bearing pitch diameter, the number of rollers, and the contact angle. A few design constraint parameters are also included in the optimization, the bounds of which are obtained by initial runs of the optimization. The optimization program is run for different values of these design constraint parameters, and a range of the parameters is obtained for which the objective function has a higher value. The artificial bee colony algorithm (ABCA) has been used to solve the constrained optimization problem, and the optimum design is compared with the one obtained from the grid search method (GSM), both operating independently. The ABCA and the GSM have finally been combined to reach the global optimum point. A constraint violation study has also been carried out to give priority to the constraints with the greatest possibility of violation. Optimized bearing designs show better performance parameters than those specified in bearing catalogs. A sensitivity analysis of bearing parameters has also been carried out to see the effect of manufacturing tolerance on the objective function.
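
    The grid search half of the hybrid can be sketched generically: evaluate the objective exhaustively over a coarse grid of design variables and keep the best point. The objective below is a smooth stand-in, not the dynamic-capacity formula, and the variable names are illustrative.

```python
import itertools

# Generic grid search sketch: exhaustively evaluate an objective over the
# Cartesian product of coarse grids for each design variable. The objective is
# a stand-in quadratic, not the bearing dynamic-capacity formula.

def grid_search(f, grids):
    best_x, best_val = None, float("-inf")
    for x in itertools.product(*grids):
        v = f(x)
        if v > best_val:
            best_x, best_val = x, v
    return best_x, best_val

# maximise a stand-in objective over two hypothetical design variables,
# e.g. (roller diameter, roller length) in mm
f = lambda x: -(x[0] - 12.0) ** 2 - (x[1] - 20.0) ** 2
grids = [[10, 11, 12, 13, 14], [18, 19, 20, 21, 22]]
print(grid_search(f, grids))  # best point is (12, 20)
```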

  17. Algorithms in Learning, Teaching, and Instructional Design. Studies in Systematic Instruction and Training Technical Report 51201.

    ERIC Educational Resources Information Center

    Gerlach, Vernon S.; And Others

    An algorithm is defined here as an unambiguous procedure which will always produce the correct result when applied to any problem of a given class of problems. This paper gives an extended discussion of the definition of an algorithm. It also explores in detail the elements of an algorithm, the representation of algorithms in standard prose, flow…

  18. Structure Design of the 3-D Braided Composite Based on a Hybrid Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ke

    Three-dimensional braided composite has the better designable characteristic. Whereas wide application of hollow-rectangular-section three-dimensional braided composite in engineering, optimization design of the three-dimensional braided composite made by 4-step method were introduced. Firstly, the stiffness and damping characteristic analysis of the composite is presented. Then, the mathematical models for structure design of the three-dimensional braided composite were established. The objective functions are based on the specific damping capacity and stiffness of the composite. The design variables are the braiding parameters of the composites and sectional geometrical size of the composite. The optimization problem is solved by using ant colony optimization (ACO), contenting the determinate restriction. The results of numeral examples show that the better damping and stiffness characteristic could be obtained. The method proposed here is useful for the structure design of the kind of member and its engineering application.

  19. GADIS: Algorithm for designing sequences to achieve target secondary structure profiles of intrinsically disordered proteins.

    PubMed

    Harmon, Tyler S; Crabtree, Michael D; Shammas, Sarah L; Posey, Ammon E; Clarke, Jane; Pappu, Rohit V

    2016-09-01

    Many intrinsically disordered proteins (IDPs) participate in coupled folding and binding reactions and form alpha helical structures in their bound complexes. Alanine, glycine, or proline scanning mutagenesis approaches are often used to dissect the contributions of intrinsic helicities to coupled folding and binding. These experiments can yield confounding results because the mutagenesis strategy changes the amino acid compositions of IDPs. Therefore, an important next step in mutagenesis-based approaches to mechanistic studies of coupled folding and binding is the design of sequences that satisfy three major constraints. These are (i) achieving a target intrinsic alpha helicity profile; (ii) fixing the positions of residues corresponding to the binding interface; and (iii) maintaining the native amino acid composition. Here, we report the development of a Genetic Algorithm for Design of Intrinsic secondary Structure (GADIS) for designing sequences that satisfy the specified constraints. We describe the algorithm and present results to demonstrate the applicability of GADIS by designing sequence variants of the intrinsically disordered PUMA system that undergoes coupled folding and binding to Mcl-1. Our sequence designs span a range of intrinsic helicity profiles. The predicted variations in sequence-encoded mean helicities are tested against experimental measurements. PMID:27503953
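
    Constraints (ii) and (iii) above suggest a composition-preserving move: swap two non-interface residues, so the amino acid composition never changes and interface positions stay fixed. The sketch below illustrates that idea only; the sequence and interface indices are invented, and GADIS itself may use different operators.

```python
import random

# Composition-preserving swap mutation: exchange two residues at positions
# outside the fixed (binding-interface) set. Composition is unchanged by
# construction, and fixed positions are never touched.

def swap_mutate(seq, fixed_positions, rng=random):
    free = [i for i in range(len(seq)) if i not in fixed_positions]
    i, j = rng.sample(free, 2)
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return "".join(s)

random.seed(3)
seq = "MAQLRKEGDA"          # invented example sequence
fixed = {0, 4, 7}            # hypothetical binding-interface residues
mutant = swap_mutate(seq, fixed)

print(sorted(mutant) == sorted(seq))            # composition preserved: True
print(all(mutant[i] == seq[i] for i in fixed))  # interface intact: True
```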

  20. Multidisciplinary design optimization of vehicle instrument panel based on multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Wu, Guangqiang

    2013-03-01

    Typical multidisciplinary design optimization (MDO) has gradually been proposed to balance performances of lightweight, noise, vibration and harshness (NVH) and safety for the instrument panel (IP) structure in automotive development. Nevertheless, the plastic constitutive relation of Polypropylene (PP) under different strain rates has not been taken into consideration in current reliability-based and collaborative IP MDO design. In this paper, based on tensile tests under different strain rates, the constitutive relation of Polypropylene material is studied. Impact simulation tests for the head and knee bolster are carried out to meet the regulations FMVSS 201 and FMVSS 208, respectively. NVH analysis is performed to obtain mainly the natural frequencies and corresponding mode shapes, while crashworthiness analysis is employed to examine the crash behavior of the IP structure. With the consideration of lightweight, NVH, and head and knee bolster impact performance, design of experiment (DOE), response surface modeling (RSM), and collaborative optimization (CO) are applied to realize the deterministic and reliability-based optimizations, respectively. Furthermore, based on a multi-objective genetic algorithm (MOGA), the optimal Pareto sets are computed to solve the multi-objective optimization (MOO) problem. The proposed research ensures the smoothness of the Pareto set, enhances the ability of engineers to make a comprehensive decision about multiple objectives and choose the optimal design, and improves the quality and efficiency of MDO.

  1. Earth Observatory Satellite system definition study. Report no. 5: System design and specifications. Part 1: Observatory system element specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The performance, design, and quality assurance requirements for the Earth Observatory Satellite (EOS) Observatory and Ground System program elements required to perform the Land Resources Management (LRM) A-type mission are presented. The requirements for the Observatory element with the exception of the instruments specifications are contained in the first part.

  2. Software design specification. Part 2: Orbital Flight Test (OFT) detailed design specification. Volume 3: Applications. Book 2: System management

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The functions performed by the systems management (SM) application software are described along with the design employed to accomplish these functions. The operational sequences (OPS) control segments and the cyclic processes they control are defined. The SM specialist function control (SPEC) segments and the display controlled 'on-demand' processes that are invoked by either an OPS or SPEC control segment as a direct result of an item entry to a display are included. Each processing element in the SM application is described including an input/output table and a structured control flow diagram. The flow through the module and other information pertinent to that process and its interfaces to other processes are included.

  3. Fuzzy logic control algorithms for MagneShock semiactive vehicle shock absorbers: design and experimental evaluations

    NASA Astrophysics Data System (ADS)

    Craft, Michael J.; Buckner, Gregory D.; Anderson, Richard D.

    2003-07-01

    Automotive ride quality and handling performance remain challenging design tradeoffs for modern, passive automobile suspension systems. Despite extensive published research outlining the benefits of active vehicle suspensions in addressing this tradeoff, the cost and complexity of these systems frequently prohibit commercial adoption. Semi-active suspensions can provide performance benefits over passive suspensions without the cost and complexity associated with fully active systems. This paper outlines the development and experimental evaluation of a fuzzy logic control algorithm for a commercial semi-active suspension component, Carrera's MagneShock™ shock absorber. The MagneShock™ utilizes an electromagnet to change the viscosity of magnetorheological (MR) fluid, which changes the damping characteristics of the shock. Damping for each shock is controlled by manipulating the coil current using real-time algorithms. The performance capabilities of fuzzy logic control (FLC) algorithms are demonstrated through experimental evaluations on a passenger vehicle. Results show reductions of 25% or more in sprung mass absorbed power (U.S. Army 6 Watt Absorbed Power Criterion) as compared to typical passive shock absorbers over urban terrains in both simulation and experimentation. Average sprung-mass RMS accelerations were also reduced by as much as 9%, but usually with an increase in total suspension travel over the passive systems. Additionally, a negligible decrease in RMS tire normal force was documented through computer simulations. Although the FLC absorbed power was comparable to that of the fixed-current MagneShock™, the FLC reduced average RMS sprung-mass accelerations relative to the fixed-current MagneShock™ by 2-9%. Possible means for improvement of this system include reducing the suspension spring stiffness and increasing the dynamic damping range of the MagneShock™.
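
    A minimal fuzzy inference step of the kind described, mapping a sprung-mass velocity to a coil current via triangular membership functions and weighted-average defuzzification, might look like the sketch below. The membership breakpoints, rule set, and output currents are invented for illustration, not Carrera's calibration.

```python
# Toy fuzzy controller sketch: fuzzify velocity with triangular membership
# functions, fire three rules (slow/medium/fast -> low/mid/high current), and
# defuzzify with a weighted average. All numbers are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_current(velocity):
    # rule activations: slow -> low current, medium -> mid, fast -> high
    rules = [
        (tri(velocity, -0.1, 0.0, 0.4), 0.5),  # "slow": 0.5 A
        (tri(velocity, 0.2, 0.6, 1.0), 1.5),   # "medium": 1.5 A
        (tri(velocity, 0.8, 1.2, 2.0), 3.0),   # "fast": 3.0 A
    ]
    num = sum(mu * amps for mu, amps in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0

print(round(fuzzy_current(0.3), 3))  # -> 1.0 (halfway between slow and medium)
```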

  4. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic algorithm based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods that prove to provide major weight savings by addressing the structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  5. SP-Designer: a user-friendly program for designing species-specific primer pairs from DNA sequence alignments.

    PubMed

    Villard, Pierre; Malausa, Thibaut

    2013-07-01

    SP-Designer is an open-source program providing a user-friendly tool for the design of specific PCR primer pairs from a DNA sequence alignment containing sequences from various taxa. SP-Designer selects PCR primer pairs for the amplification of DNA from a target species on the basis of several criteria: (i) primer specificity, as assessed by interspecific sequence polymorphism in the annealing regions, (ii) the biochemical characteristics of the primers and (iii) the intended PCR conditions. SP-Designer generates tables, detailing the primer pair and PCR characteristics, and a FASTA file locating the primer sequences in the original sequence alignment. SP-Designer is Windows-compatible and freely available from http://www2.sophia.inra.fr/urih/sophia_mart/sp_designer/info_sp_designer.php.
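
    Criterion (ii), the biochemical screen, can be illustrated with a GC-content check and a Wallace-rule melting-temperature estimate. The thresholds below are common rules of thumb, not SP-Designer's documented defaults.

```python
# Illustrative primer screen: GC content plus the Wallace-rule Tm estimate
# (Tm ≈ 2(A+T) + 4(G+C), reasonable only for short oligos). The acceptance
# windows are generic rules of thumb, not SP-Designer's actual settings.

def gc_content(primer):
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer):
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

def acceptable(primer, gc_range=(0.40, 0.60), tm_range=(50, 65)):
    return (gc_range[0] <= gc_content(primer) <= gc_range[1]
            and tm_range[0] <= wallace_tm(primer) <= tm_range[1])

primer = "ATGCGTACCTGAGCTTAGC"  # invented 19-mer
print(round(gc_content(primer), 3), wallace_tm(primer), acceptable(primer))
```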

  6. 40 CFR 55.15 - Specific designation of corresponding onshore areas.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    40 CFR 55.15, Specific designation of corresponding onshore areas. Title 40, Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); AIR PROGRAMS (CONTINUED); OUTER CONTINENTAL SHELF AIR REGULATIONS (2011-07-01 edition).

  7. 78 FR 33863 - Relationship Between General Design Criteria and Technical Specification Operability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    NUCLEAR REGULATORY COMMISSION. Relationship Between General Design Criteria and Technical Specification Operability. AGENCY: ... The NRC published this RIS in the Federal Register (77 FR 45282) on July 31, 2012. The agency received comments from two ... "Relationship Between General Design Criteria and Technical Specification Operability." This RIS clarifies ...

  8. GSP: A web-based platform for designing genome-specific primers in polyploids

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The sequences among subgenomes in a polyploid species have high similarity. This makes it difficult to design genome-specific primers for sequence analysis. We present a web-based platform named GSP for designing genome-specific primers to distinguish subgenome sequences in the polyploid genome backgr...

  9. Optimum design of vortex generator elements using Kriging surrogate modelling and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Neelakantan, Rithwik; Balu, Raman; Saji, Abhinav

    Vortex Generators (VGs) are small angled plates located in a spanwise fashion aft of the leading edge of an aircraft wing. They control airflow over the upper surface of the wing by creating vortices which energise the boundary layer. The parameters considered for the optimisation study of the VGs are their height, orientation angle and location along the chord in a low subsonic flow over a NACA0012 airfoil. The objective function to be maximised is the L/D ratio of the airfoil. The design data are generated using the commercially available ANSYS FLUENT software and are modelled using a Kriging-based interpolator. This surrogate model is used along with a Genetic Algorithm software to arrive at the optimum shape of the VGs. The results of this study will be confirmed with actual wind tunnel tests on scaled models.

  10. Design of two-dimensional photonic crystals with large absolute band gaps using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Linfang; Ye, Zhuo; He, Sailing

    2003-07-01

    A two-stage genetic algorithm (GA) with a floating mutation probability is developed to design a two-dimensional (2D) photonic crystal of a square lattice with the maximal absolute band gap. The unit cell is divided equally into many square pixels, and each filling pattern of pixels with two dielectric materials corresponds to a chromosome consisting of binary digits 0 and 1. As a numerical example, the two-stage GA gives a 2D GaAs structure with a relative width of the absolute band gap of about 19%. After further optimization, a new 2D GaAs photonic crystal is found with an absolute band gap much larger than those reported before.
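
    The pixel-grid chromosome encoding described above can be sketched as follows; the fitness evaluation (a photonic band-structure solver) is omitted, and the grid size and mutation scheme are illustrative.

```python
import random

# Chromosome encoding sketch for the 2D photonic crystal GA: the unit cell is
# an N x N grid of pixels, each 0 or 1 for one of two dielectrics, flattened
# into a binary list. The real fitness (absolute band-gap width from a
# band-structure solver) is omitted here.

N = 8  # pixels per side of the unit cell (illustrative)

def random_chromosome(rng=random):
    return [rng.randint(0, 1) for _ in range(N * N)]

def to_unit_cell(chrom):
    """Reshape the flat chromosome back into the N x N pixel grid."""
    return [chrom[r * N:(r + 1) * N] for r in range(N)]

def mutate(chrom, p, rng=random):
    """Flip each pixel with probability p (a 'floating' mutation rate)."""
    return [bit ^ 1 if rng.random() < p else bit for bit in chrom]

random.seed(0)
c = random_chromosome()
cell = to_unit_cell(c)
print(len(cell), len(cell[0]))  # -> 8 8
```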

  11. Designing Daily Patrol Routes for Policing Based on ANT Colony Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, H.; Cheng, T.; Wise, S.

    2015-07-01

    In this paper, we address the problem of planning police patrol routes to regularly cover street segments of high crime density (hotspots) with limited police forces. A good patrolling strategy is required to minimise the average time lag between two consecutive visits to hotspots, as well as to coordinate multiple patrollers and impart unpredictability in patrol routes. Previous studies have designed different strategies for routing police patrols, but these strategies have difficulty generalising to real patrolling and meeting various requirements. In this research we develop a new police patrolling strategy based on a Bayesian method and the ant colony algorithm. In this strategy, a virtual marker (pheromone) is laid to mark the visiting history of each crime hotspot, and patrollers continuously decide which hotspot to patrol next based on pheromone level and other variables. Simulation results using real data testify to the effective, scalable, unpredictable and extensible nature of this strategy.
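
    The pheromone mechanism can be illustrated with a deterministic toy: each hotspot's marker decays while it goes unvisited (so neglected hotspots look increasingly urgent), a visit refreshes it, and the patroller heads for the most neglected hotspot. The decay rate and hotspot names are invented, and the paper's Bayesian component is not modelled.

```python
# Toy single-patroller pheromone scheme: decay all markers each time step,
# then visit (and refresh) the hotspot with the lowest marker. Decay rate and
# hotspot names are illustrative assumptions.

def step(pheromone, decay=0.9):
    """One time step: decay all markers, then visit the most neglected spot."""
    for h in pheromone:
        pheromone[h] *= decay
    target = min(pheromone, key=pheromone.get)
    pheromone[target] = 1.0  # visiting refreshes the marker
    return target

pheromone = {"hotspot_A": 1.0, "hotspot_B": 0.5, "hotspot_C": 0.2}
visits = [step(pheromone) for _ in range(6)]
print(visits)  # settles into a fair round-robin: C, B, A, C, B, A
```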

  12. Signal design using nonlinear oscillators and evolutionary algorithms: Application to phase-locked loop disruption

    NASA Astrophysics Data System (ADS)

    Olson, C. C.; Nichols, J. M.; Michalowicz, J. V.; Bucholtz, F.

    2011-06-01

    This work describes an approach for efficiently shaping the response characteristics of a fixed dynamical system by forcing with a designed input. We obtain improved inputs by using an evolutionary algorithm to search a space of possible waveforms generated by a set of nonlinear, ordinary differential equations (ODEs). Good solutions are those that result in a desired system response subject to some input efficiency constraint, such as signal power. In particular, we seek to find inputs that best disrupt a phase-locked loop (PLL). Three sets of nonlinear ODEs are investigated and found to have different disruption capabilities against a model PLL. These differences are explored and implications for their use as input signal models are discussed. The PLL was chosen here as an archetypal example but the approach has broad applicability to any input/output system for which a desired input cannot be obtained analytically.

  13. Aerodynamic Design Exploration for Reusable Launch Vehicle Using Genetic Algorithm with Navier-Stokes Solver

    NASA Astrophysics Data System (ADS)

    Tatsukawa, Tomoaki; Nonomura, Taku; Oyama, Akira; Fujii, Kozo

    In this study, aerodynamic design exploration for a reusable launch vehicle (RLV) is conducted using a genetic algorithm with a Navier-Stokes solver to understand the aerodynamic characteristics of various body configurations and to extract design information such as trade-off information among objectives. A multi-objective aerodynamic design optimization is conducted for a bi-conical RLV shape based on computational fluid dynamics (CFD), with four objectives: minimizing zero-lift drag at the supersonic condition, maximizing the maximum lift-to-drag ratio (L/D) at the subsonic condition, maximizing the maximum L/D at the supersonic condition, and maximizing the volume of the shape. The total number of evaluations in the multi-objective optimization is 400, and evaluating one body configuration requires 8 CFD runs; in total, 3200 CFD runs are conducted. The analysis of Pareto-optimal solutions clearly shows various trade-off relations among the objectives, and the analysis of flow fields shows that the minimum-drag configuration is almost the same shape as the maximum-L/D configuration at the supersonic condition. The shape for the maximum L/D at the subsonic condition obtains additional lift at the kink compared with the minimum-drag configuration, which enhances L/D.
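    Extracting trade-off information from such a study hinges on the Pareto-optimality test. A minimal sketch (the design points below are illustrative, not from the paper; maximization objectives are negated so everything is minimized):

```python
def pareto_front(points):
    """Return the non-dominated subset of `points`, where each point is a
    tuple of objective values and every objective is to be minimized."""
    def dominates(p, q):
        # p dominates q: no worse in every objective, strictly better in one
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. (zero-lift drag, -max L/D): lower drag trades off against higher L/D
designs = [(1.0, -8.0), (0.8, -6.0), (1.2, -9.0), (1.0, -6.0)]
front = pareto_front(designs)
```

    The point (1.0, -6.0) is dominated by (1.0, -8.0) (same drag, better L/D) and drops out; the remaining three are mutual compromises of the kind the abstract's trade-off analysis examines.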

  14. Algorithms and theory for the design and programming of industrial control systems materialized with PLC's

    NASA Astrophysics Data System (ADS)

    Montoya Villena, Rafael

    As its title indicates, the general objective of the Thesis is to develop a clear, simple and systematic methodology for programming PLC-type devices. To this end, the following elements are used. Codification of all variable types: this section is very important, since it allows us to work with little information; the necessary rules are given to codify every type of phrase produced in industrial processes. An algorithm that describes process evolution, called the process D.F.: this is one of the most important contributions, since together with the codification of information it allows the process evolution to be represented graphically, whatever design theory is used. Theory selection: some design method is evidently needed to obtain the logic equations. In this case binodal theory is used, a theory well suited to wired technologies, since it yields highly reduced schemas for relatively simple automatisms and hence a minimum number of components. User program outline algorithm (D.F.P.): this is another necessary contribution, and perhaps the most important one. The logic equations resulting from binodal theory are compatible with the process evolution when wired technology is used, whether electric, electronic or pneumatic; with PLC devices, however, the performance characteristics mean that the order of the program instructions determines whether the automatism is valid, as we have shown in articles and lectures at both national and international congresses. Therefore, we codify the information concerning the process to be automated, graphically represent its temporal evolution and, by applying binodal theory and the D.F.P. (previously adapted), obtain logic equations compatible both with the process to be automated and with the device on which they will be implemented (a PLC in our case).

  15. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    SciTech Connect

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate for the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass fueled cookstove used in developing nations for household cooking. In this cookstove wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove's cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion of this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
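    The topology mechanism is easy to sketch. The example below is a toy, not the paper's cookstove model: individuals sit on the nodes of a ring graph and may only mate with their two neighbors, so good genes spread slowly and local niches persist longer than in a panmictic GA; the fitness is a simple one-max stand-in.

```python
import random

def ring_gbea(fitness, n=24, length=16, generations=60, seed=2):
    """Graph based EA sketch with a ring topology restricting mating.
    Returns the final population so its diversity can be inspected."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(generations):
        i = rng.randrange(n)                  # one mating event per step
        j = (i + rng.choice((-1, 1))) % n     # partner must be a ring neighbor
        cut = rng.randrange(1, length)        # one-point crossover
        child = pop[i][:cut] + pop[j][cut:]
        if rng.random() < 0.05:               # light mutation
            k = rng.randrange(length)
            child[k] ^= 1
        # child replaces the worse of its two parents, and only if no worse
        worse = i if fitness(pop[i]) < fitness(pop[j]) else j
        if fitness(child) >= fitness(pop[worse]):
            pop[worse] = child
    return pop

pop = ring_gbea(fitness=sum)
```

    Replacing the ring with a denser graph (more neighbors per node) speeds convergence at the cost of diversity, which is exactly the trade-off the paper exploits.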

  16. A millimeter wave image fusion algorithm design and optimization based on CDF97 wavelet transform

    NASA Astrophysics Data System (ADS)

    Yu, Jian-cheng; Chen, Bo-yang; Xia, A.-lin; Liu, Xin-guang

    2011-08-01

    Millimeter wave imaging technology provides a new detection method for security screening that is fast and safe. However, millimeter wave images have inherent shortcomings, such as noise and low sensitivity. In systems used for security, only the information corresponding to specific objects is retained and other scene information is missing, so targets are difficult to locate in the millimeter wave image alone. Image fusion can effectively solve this problem: a visible image, which carries the visual context of the scene, is commonly fused with the millimeter wave image, so that concealed weapons can be detected and accurately located at the inspection site. Because the information comes from different detectors with different signal-to-noise ratios and pixel resolutions, traditional pixel-level fusion methods often cannot produce a satisfactory fusion. Many researchers have applied wavelet transforms to remote sensing image fusion with greatly improved performance, but owing to the complexity and computational cost of these algorithms, many remain at the research stage. To improve fusion performance and achieve real-time image fusion, an integer wavelet transform (CDF97) based fusion algorithm with regional energy enhancement is proposed in this paper. First, the choice of wavelet operator is studied; several characteristics are used to evaluate the performance of wavelet operators in image fusion. The results show that the CDF97 wavelet fuses better than traditional wavelets such as the db family (the longer the vanishing moments, the better) and has good energy concentration: the low-frequency region of the transformed image contains almost the whole image energy. The target in a millimeter wave image typically has low-pass characteristics and higher energy than the ambient background.
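    The sub-band fusion idea can be sketched with a one-level Haar transform as a simple stand-in for the CDF97 wavelet, and a coefficient-magnitude rule as a crude proxy for the paper's regional-energy criterion (both substitutions are assumptions for illustration):

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar transform (stand-in for CDF97): (LL, (LH, HL, HH))."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0       # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0       # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2(ll, bands):
    """Exact inverse of haar2 (perfect reconstruction)."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(mmw, visible):
    """In each sub-band keep the coefficient with larger squared magnitude,
    a crude proxy for the paper's regional-energy enhancement rule."""
    ll1, b1 = haar2(mmw)
    ll2, b2 = haar2(visible)
    ll = np.where(ll1**2 >= ll2**2, ll1, ll2)
    bands = tuple(np.where(x**2 >= y**2, x, y) for x, y in zip(b1, b2))
    return ihaar2(ll, bands)
```

    A practical implementation would replace `haar2` with a CDF97 lifting scheme and compare energies over local windows rather than single coefficients.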

  17. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    NASA Astrophysics Data System (ADS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sébastian, P.

    2010-06-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every part of the plane, an operation that is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, engineers need a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm allows estimation of the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  18. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every part of the plane, an operation that is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, engineers need a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm allows estimation of the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  19. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins

    PubMed Central

    Afanasyev, Vsevolod; Buldyrev, Sergey V.; Dunn, Michael J.; Robst, Jeremy; Preston, Mark; Bremner, Steve F.; Briggs, Dirk R.; Brown, Ruth; Adlard, Stacey; Peat, Helen J.

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge’s accurate performance and demonstrates how its design is a significant improvement on existing systems. PMID:25894763

  20. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    PubMed

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  1. Preliminary design of the Carrisa Plains solar central receiver power plant. Volume II. Plant specifications

    SciTech Connect

    Price, R. E.

    1983-12-31

    The specifications and design criteria for all plant systems and subsystems used in developing the preliminary design of the Carrisa Plains 30-MWe Solar Plant are contained in this volume. The specifications have been organized according to plant systems and levels. The levels are arranged in tiers; starting at the top tier and proceeding down, the specification levels are the plant, system, subsystem, components, and fabrication. A tab number, listed in the index, has been assigned to each document to facilitate document location.

  2. General asymmetric neural networks and structure design by genetic algorithms: A learning rule for temporal patterns

    SciTech Connect

    Bornholdt, S.; Graudenz, D.

    1993-07-01

    A learning algorithm based on genetic algorithms for asymmetric neural networks with an arbitrary structure is presented. It is suited for the learning of temporal patterns and leads to stable neural networks with feedback.

  3. SU-E-T-316: The Design of a Risk Index Method for 3D Patient Specific QA

    SciTech Connect

    Cho, W; Wu, H; Xing, L; Suh, T

    2014-06-01

    Purpose: To provide new guidance for the evaluation of 3D patient-specific QA, a structure-specific risk-index (RI) method was designed and implemented. Methods: A new algorithm was designed to assign a score of Pass, Fail or Pass with Risk to every 3D voxel in each structure by improving the conventional Gamma Index (GI) algorithm; the score indicates the degree of risk of under-dose to the treatment target or over-dose to the organs at risk (OAR). Structure-specific distance to agreement (DTA), dose difference and minimum checkable dose were applied to the GI algorithm, and additional parameters such as a dose gradient factor and structure dose limits were used in the RI method. A maximum passing rate (PR) and a minimum PR were defined and calculated for each structure with the RI method. 3D doses were acquired from a spine SBRT plan by simulating shifts of the beam iso-center, and tested to show the feasibility of the suggested method. Results: When the iso-center was shifted by 1 mm, 2 mm, and 3 mm, the PRs of the conventional GI method between the shifted and non-shifted 3D doses were 99.9%, 97.4%, and 89.7% for the PTV; 99.8%, 84.8%, and 63.2% for the spinal cord; and 100%, 99.5%, and 91.7% for the right lung. The minimum PRs from the RI method were 98.9%, 96.9%, and 89.5% for the PTV; 96.1%, 79.3%, and 57.5% for the spinal cord; and 92.5%, 92.0%, and 84.4% for the right lung, respectively. The maximum PRs from the RI method were equal to or less than the PRs from the conventional GI evaluation. Conclusion: The designed 3D RI method showed a stricter acceptance level than the conventional GI method, especially for OARs. The RI method is expected to give the degree of risk in the delivered doses, as well as the degree of agreement between calculated 3D doses and measured (or simulated) 3D doses.
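    The conventional gamma index that the RI method extends is itself easy to state in code. Below is a minimal global 2D gamma sketch (brute-force search, default 3 mm / 3% criteria; the RI method's structure-specific parameters and risk scores are not reproduced here):

```python
import numpy as np

def gamma_index(ref, eval_dose, spacing=1.0, dta=3.0, dd=0.03):
    """Global 2D gamma index: for each reference voxel, gamma is the minimum
    over evaluated voxels of the combined dose-difference (dd, fraction of
    the reference maximum) and distance-to-agreement (dta, mm) metric.
    `spacing` is the grid spacing in mm; gamma <= 1 is a pass."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dmax = ref.max()
    gamma = np.empty_like(ref, dtype=float)
    for i in range(ny):
        for j in range(nx):
            dist2 = ((yy - i) ** 2 + (xx - j) ** 2) * spacing**2
            diff2 = (eval_dose - ref[i, j]) ** 2
            g2 = dist2 / dta**2 + diff2 / (dd * dmax) ** 2
            gamma[i, j] = np.sqrt(g2.min())
    return gamma

def passing_rate(gamma):
    """Percentage of voxels with gamma <= 1 (the PR quoted in the abstract)."""
    return 100.0 * np.mean(gamma <= 1.0)
```

    The RI method replaces the single dd/dta pair with structure-specific values and maps each voxel's result onto Pass / Fail / Pass with Risk instead of a single pass flag.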

  4. Design and synthesis of a potent peptide containing both specific and non-specific cell-adhesion motifs.

    PubMed

    Lai, Yuxiao; Xie, Cao; Zhang, Zheng; Lu, Weiyue; Ding, Jiandong

    2010-06-01

    This article reports a potent chemical that promotes cell adhesion on a substrate by combining moieties for specific and non-specific adhesion. The cyclic (-RGDfK-) (R: arginine, G: glycine, D: aspartic acid, f: D-phenylalanine, K: lysine) is employed to trigger specific cell adhesion, and a linear tripeptide KKK is introduced to enhance early non-specific cell adhesion. A series of cyclic and linear peptides with different charges were synthesized and then functionalized with a thiol end-group. All the peptides were immobilized on gold layers, which were later passivated by bovine serum albumin. The coverage of NIH/3T3 fibroblast cells on the substrate modified by the linker containing both cyclic (-RGDfK-) and linear KKK is, surprisingly, significantly better than the summation of the coverages obtained using either moiety alone, which reveals strong cooperativity between specific and non-specific cell adhesion. The resultant cell adhesion on the substrates modified by appropriate linkers was much better than on tissue-culture plates. The cooperativity principle and the design strategy of the combined linker might be helpful for fundamental research into cell-material or cell-extracellular matrix interactions, and for the modification of new biomaterials in regenerative medicine and targeted drug delivery.

  5. High specificity of line-immunoassay based algorithms for recent HIV-1 infection independent of viral subtype and stage of disease

    PubMed Central

    2011-01-01

    Background Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. Methods Plasma samples of 714 selected patients of the Swiss HIV Cohort Study infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis we evaluated factors that might affect the specificity of these algorithms. Results HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities exhibited no significance. Results were similar among 190 untreated patients. Conclusions The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease and other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients. PMID:21943091

  6. Development of optical design algorithms on the base of the exact (all orders) geometrical aberration theory

    NASA Astrophysics Data System (ADS)

    Hristov, Boian A.

    2011-10-01

    The process of optical design today is both an art and a science, mainly due to the lack of an exact and suitable aberration theory. In this paper we propose an exact (without any approximations) analytical aberration theory. It describes exactly the relations between the on-axis image aberrations and on-axis object aberrations via so-called relative parameters, real aperture incidence angles, real aperture slope angles, refraction indexes and object distance. The image field aberrations (distortion, astigmatism, tangential curvature, sagittal curvature and field curvature) are described in a mathematically exact way by means of relative parameters, real incidence angles and slope angles of the chief rays, refraction indexes, object distance and the corresponding object aberrations. For the image tangential coma and image sagittal coma we propose differential formulae. To verify the correction of each aberration we use the commercial program OSLO. The differences between our results and those of OSLO for each aberration (except for the tangential and sagittal coma) are less than 1×10⁻⁸ mm. In addition we propose exact aberration-correction algorithms for a very distant object and a variety of constructive design solutions that confirm the validity of the proposed theory.

  7. Using adaptive genetic algorithms in the design of morphological filters in textural image processing

    NASA Astrophysics Data System (ADS)

    Li, Wei; Haese-Coat, Veronique; Ronsin, Joseph

    1996-03-01

    An adaptive GA scheme is adopted for the optimal morphological filter design problem. Adaptive crossover and mutation rates, which let the GA avoid premature convergence while still assuring convergence of the program, are successfully used in the optimal morphological filter design procedure. In the string coding step, each string (chromosome) is composed of a structuring-element coding chain concatenated with a filter sequence coding chain. In the decoding step, each string is divided into 3 chains, which are then decoded respectively into one structuring element no larger than 5 by 5 and two concatenated morphological filter operators. The fitness function of the GA is based on the mean-square-error (MSE) criterion. In the string selection step, a stochastic tournament procedure replaces the simple roulette-wheel program in order to accelerate convergence. The final convergence of the algorithm is reached by a two-step converging strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences the obtained MSE values are smaller than those of the corresponding non-adaptive morphological filters, and the optimized shapes and orientations of the structuring elements approximately match the shapes and orientations of the image textons.
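    The abstract does not give the adaptive-rate formula, so the sketch below uses the classic Srinivas-Patnaik rule as a plausible stand-in: pairs fitter than the population average get scaled-down crossover/mutation rates (protecting good filters), while below-average pairs keep the maximum rates (fighting premature convergence).

```python
def adaptive_rates(f, f_mate, f_max, f_avg, pc_max=0.9, pm_max=0.1):
    """Return (crossover rate, mutation rate) for a mating pair with
    fitnesses f and f_mate, given the population's max and mean fitness.
    Srinivas-Patnaik-style rule, used here as an illustrative stand-in."""
    f_prime = max(f, f_mate)            # fitter of the two parents
    if f_max == f_avg:                  # degenerate case: population converged
        return pc_max, pm_max
    if f_prime >= f_avg:                # above average: scale rates down
        scale = (f_max - f_prime) / (f_max - f_avg)
        return pc_max * scale, pm_max * scale
    return pc_max, pm_max               # below average: full disruption
```

    Note the scale reaches 0 for the best individual, so the current champion is never disrupted, which is what preserves convergence while the rest of the population keeps exploring.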

  8. Multi-objective optimal design of feedback controls for dynamical systems with hybrid simple cell mapping algorithm

    NASA Astrophysics Data System (ADS)

    Xiong, Fu-Rui; Qin, Zhi-Chang; Xue, Yang; Schütze, Oliver; Ding, Qian; Sun, Jian-Qiao

    2014-05-01

    This paper presents a study of multi-objective optimal design of full state feedback controls. The goal of the design is to minimize several conflicting performance objective functions at the same time. The simple cell mapping method with a hybrid algorithm is used to find the multi-objective optimal design solutions. The multi-objective optimal design yields a set of gains representing various compromises of the control system. Examples of regulation and tracking controls are presented to validate the control design.

  9. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.
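    One simple searching-behaviour metric of the kind discussed above can be sketched directly; the form below is an assumption for illustration, not the paper's exact definition: the mean pairwise distance between the decision vectors sampled in one iteration, tracked across iterations, shows how quickly an ACO variant stops exploring as it converges.

```python
import itertools
import math

def population_diversity(solutions):
    """Mean pairwise Euclidean distance between decision vectors sampled
    in one iteration; a crude exploration metric (high = still exploring,
    near zero = converged)."""
    pairs = list(itertools.combinations(solutions, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)
```

    Plotting this value per iteration, alongside best objective value, separates algorithms that converge by genuine exploitation from those that simply collapse onto one region of the search space.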

  10. Selection of pairings reaching evenly across the data (SPREAD): A simple algorithm to design maximally informative fully crossed mating experiments.

    PubMed

    Zimmerman, K; Levitis, D; Addicott, E; Pringle, A

    2016-02-01

    We present a novel algorithm for the design of crossing experiments. The algorithm identifies a set of individuals (a 'crossing-set') from a larger pool of potential crossing-sets by maximizing the diversity of traits of interest, for example, maximizing the range of genetic and geographic distances between individuals included in the crossing-set. To calculate diversity, we use the mean nearest neighbor distance of crosses plotted in trait space. We implement our algorithm on a real dataset of Neurospora crassa strains, using the genetic and geographic distances between potential crosses as a two-dimensional trait space. In simulated mating experiments, crossing-sets selected by our algorithm provide better estimates of underlying parameter values than randomly chosen crossing-sets.
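    The mean nearest-neighbour criterion above is simple enough to sketch. The exhaustive search below is an assumed reading of the abstract, workable only for small candidate pools; the published algorithm presumably searches more cleverly.

```python
import itertools
import math

def mean_nn_distance(crossing_set):
    """Mean nearest-neighbour distance of points in trait space, where each
    point is e.g. (genetic distance, geographic distance) for a cross;
    larger values mean the set spreads more evenly across the data."""
    dists = []
    for p in crossing_set:
        nn = min(math.dist(p, q) for q in crossing_set if q is not p)
        dists.append(nn)
    return sum(dists) / len(dists)

def best_crossing_set(candidates, k):
    """Pick the k points maximizing mean nearest-neighbour distance
    (brute force over all combinations; illustrative only)."""
    return max(itertools.combinations(candidates, k), key=mean_nn_distance)
```

    On a toy pool with two tight clusters and an outlier, the criterion picks one representative per region rather than near-duplicates, which is the "spread evenly across the data" behaviour the name describes.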

  11. BWM*: A Novel, Provable, Ensemble-based Dynamic Programming Algorithm for Sparse Approximations of Computational Protein Design.

    PubMed

    Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R

    2016-06-01

    Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.

  12. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.

    1998-10-01

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The designed GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus in breast lesion characterization is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection. With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of the classifier designed with stepwise feature selection.
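    The ROC partial area index used as the GA fitness can be computed empirically from classifier scores. Below is a simplified, hypothetical sketch (a step-function approximation with illustrative names, not the authors' code):

```python
import numpy as np

def partial_area_index(scores, labels, tpf_min=0.95):
    """Average specificity at sensitivities at or above tpf_min: a
    step-function approximation of the ROC partial area index."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    tnfs = []
    # Sweep thresholds downward through the positive scores; sensitivity
    # (TPF) rises in steps of 1/len(pos) while specificity (TNF) falls.
    for thr in np.sort(pos)[::-1]:
        tpf = np.mean(pos >= thr)
        if tpf >= tpf_min:
            tnfs.append(np.mean(neg < thr))
    return float(np.mean(tnfs)) if tnfs else 0.0
```

    A GA would then maximize this value over candidate feature subsets, each subset scored through a Fisher discriminant trained on the selected features.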


  13. Genetic Algorithm for Innovative Device Designs in High-Efficiency III–V Nitride Light-Emitting Diodes

    SciTech Connect

    Zhu, Di; Schubert, Martin F.; Cho, Jaehee; Schubert, E. Fred; Crawford, Mary H.; Koleske, Daniel D.; Shim, Hyunwook; Sone, Cheolsoo

    2012-01-01

    Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III–V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.

  14. Experimental design for estimating unknown hydraulic conductivity in an aquifer using a genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2015-12-01

    We develop an experimental design algorithm to select locations for a network of observation wells that provide the maximum robust information about unknown hydraulic conductivity in a confined, anisotropic aquifer. Since the information that a design provides depends on the aquifer's hydraulic conductivity, a robust design is one that provides the maximum information in the worst-case scenario. The design can be formulated as a max-min optimization problem. The problem is generally non-convex, non-differentiable, and contains integer variables. We use a Genetic Algorithm (GA) to perform the combinatorial search. We employ proper orthogonal decomposition (POD) to reduce the dimension of the groundwater model, thereby reducing the computational burden posed by employing a GA. The GA exhaustively searches for the robust design across a set of hydraulic conductivities and finds an approximate design (called the High Frequency Observation Well Design) through a Monte Carlo-type search. The results from a small-scale 1-D test case validate the proposed methodology. We then apply the methodology to a realistically-scaled 2-D test case.
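    The max-min formulation can be made concrete with a brute-force stand-in for the paper's approach: score every candidate well design against each conductivity scenario and keep the design whose worst-case information is largest. This is an illustrative sketch only; `info` and the names are hypothetical, and exhaustive enumeration replaces both the GA and the POD-reduced model:

```python
from itertools import combinations

def robust_design(wells, scenarios, info, k):
    """Choose the k-well design maximizing the worst-case (min over
    scenarios) information score.  Brute force stands in for the GA,
    and `info` for the (POD-reduced) groundwater model."""
    best, best_score = None, float("-inf")
    for design in combinations(wells, k):
        worst = min(info(design, s) for s in scenarios)  # worst-case scenario
        if worst > best_score:
            best, best_score = design, worst
    return best, best_score
```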

  15. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  16. HAL/SM system functional design specification. [systems analysis and design analysis of central processing units

    NASA Technical Reports Server (NTRS)

    Ross, C.; Williams, G. P. W., Jr.

    1975-01-01

    The functional design of a preprocessor and subsystems is described. A structure chart and a data flow diagram are included for each subsystem. A group of intermodule interface definitions (one definition per module) is included immediately following the structure chart and data flow diagram for a particular subsystem. Each intermodule interface definition consists of the identification of the module, the function the module is to perform, the identification and definition of parameter interfaces to the module, and any design notes associated with the module. Compilers and computer libraries are also described.

  17. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design: difficult, complex, yet redundant effort. Automatic generation of database software systems has been proposed as a solution. In order to generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation. The technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications. The mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  18. Design of electrocardiography measurement system with an algorithm to remove noise

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Oh, Sechang; Kumar, Prashanth; Varadan, Vijay K.

    2011-04-01

    Electrocardiography (ECG) is an important diagnostic tool that can provide vital information about diseases that may not be detectable with other biological signals such as SpO2 (oxygen saturation), pulse rate, respiration, and blood pressure. For this reason, ECG measurement is mandatory for accurate diagnosis. Recent developments in information technology have facilitated remote monitoring systems which can check a patient's current status. Moreover, remote monitoring systems can obviate the need for patients to go to hospitals periodically. A representative wireless communication system is the Zigbee sensor network, because it provides low power consumption and multi-device connection. When measuring the ECG signal, another important factor to consider is unexpected signals mixed into the ECG signal. These unexpected signals severely distort the original ECG signal. There are three types of noise elements: muscle noise, movement noise, and respiration noise. This paper describes the design method for an ECG measurement system with a Zigbee sensor network and proposes an algorithm to remove noise from the measured ECG signal.

  19. Designing mixed metal halide ammines for ammonia storage using density functional theory and genetic algorithms.

    PubMed

    Jensen, Peter Bjerre; Lysgaard, Steen; Quaade, Ulrich J; Vegge, Tejs

    2014-09-28

    Metal halide ammines have great potential as a future high-density energy carrier in vehicles. Materials known so far, e.g. Mg(NH3)6Cl2 and Sr(NH3)8Cl2, are not suitable for automotive fuel cell applications, because the release of ammonia is a multi-step reaction that requires too much heat to be supplied, lowering the total efficiency. Here, we apply density functional theory (DFT) calculations to predict new mixed metal halide ammines with improved storage capacities and the ability to release the stored ammonia in one step, at temperatures suitable for system integration with polymer electrolyte membrane fuel cells (PEMFC). We use genetic algorithms (GAs) to search for materials containing up to three different metals (alkaline-earth, 3d and 4d) and two different halides (Cl, Br and I), almost 27,000 combinations in total, and have identified novel mixtures with significantly improved storage capacities. The size of the search space and the chosen fitness function make it possible to verify that the found candidates are the best in the search space, proving that the GA implementation is ideal for this kind of computational materials design, requiring calculations on less than two percent of the candidates to identify the global optimum. PMID:25115581
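    The generational GA machinery such a search relies on can be sketched generically. This is a minimal, hypothetical skeleton (truncation selection, one-point crossover, point mutation), not the paper's DFT-coupled implementation:

```python
import random

def genetic_search(fitness, n_genes, gene_vals, pop_size=40, gens=60, seed=1):
    """Minimal generational GA over discrete composition vectors."""
    rng = random.Random(seed)
    pop = [[rng.choice(gene_vals) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]               # one-point crossover
            if rng.random() < 0.2:                  # point mutation
                child[rng.randrange(n_genes)] = rng.choice(gene_vals)
            children.append(child)
        pop = parents + children                    # elitist: best survives
    return max(pop, key=fitness)
```

    In the paper's setting the genome would encode the metal and halide fractions and `fitness` would call a DFT-derived scoring function; here any callable works.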

  1. Design of an iterative auto-tuning algorithm for a fuzzy PID controller

    NASA Astrophysics Data System (ADS)

    Saeed, Bakhtiar I.; Mehrdadi, B.

    2012-05-01

    Since the first application of fuzzy logic in the field of control engineering, it has been extensively employed in controlling a wide range of applications. Human knowledge of controlling complex and non-linear processes can be incorporated into a controller in the form of linguistic terms. However, the lack of an analytical design procedure makes it difficult to auto-tune controller parameters. A fuzzy logic controller has several adjustable parameters, such as membership functions, the rule base, and scaling gains. Furthermore, it is not always easy to find the relation between the type of membership functions or rule base and the controller performance. This study proposes a new systematic auto-tuning algorithm to fine-tune fuzzy logic controller gains. A fuzzy PID controller is proposed and applied to several second-order systems. The relationship between the closed-loop response and the controller parameters is analysed to devise an auto-tuning method. The results show that the proposed method is highly effective and produces zero overshoot with enhanced transient response. In addition, the robustness of the controller is investigated in the case of parameter changes, and the results show satisfactory performance.
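    A crude version of such an auto-tuning loop can be sketched for a plain (non-fuzzy) PD controller on a second-order plant: simulate the closed-loop step response, then grow the derivative gain until the overshoot falls below a tolerance. All names and constants are illustrative, not the paper's method:

```python
def step_response(Kp, Kd, a=2.0, b=1.0, r=1.0, dt=1e-3, T=10.0):
    """Semi-implicit Euler simulation of the plant x'' = u - a*x' - b*x
    under PD control u = Kp*(r - x) - Kd*x'.  Returns (final, peak)."""
    x = v = 0.0
    peak = 0.0
    for _ in range(int(T / dt)):
        u = Kp * (r - x) - Kd * v
        v += (u - a * v - b * x) * dt
        x += v * dt
        peak = max(peak, x)
    return x, peak

def autotune(Kp=50.0, Kd=1.0, tol=0.01, max_iter=20):
    """Grow the derivative gain until overshoot above the settled value
    falls below tol; a crude stand-in for the paper's auto-tuner."""
    for _ in range(max_iter):
        final, peak = step_response(Kp, Kd)
        if peak - final < tol:
            break
        Kd *= 1.5
    return Kp, Kd
```

    Increasing Kd raises the closed-loop damping ratio, so the loop terminates once the response is (nearly) free of overshoot; the paper's method instead adjusts fuzzy controller gains from an analysis of the closed-loop response.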

  2. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    PubMed Central

    Mandal, Saptarshi

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance. PMID:27725830
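    The dynamic-AOI idea can be sketched with rectangular AOIs grown by an AGT margin; names here are hypothetical, not the authors' implementation:

```python
def make_aoi(cx, cy, w, h, agt):
    """Axis-aligned AOI around an object's bounding box (center cx, cy;
    size w x h), grown on every side by the AOI gap tolerance (AGT)."""
    half_w, half_h = w / 2 + agt, h / 2 + agt
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

def map_fixation(fx, fy, aois):
    """Return ids of all AOIs containing the fixation; with overlapping
    aircraft a fixation may map to more than one AOI."""
    return [oid for oid, (x0, y0, x1, y1) in aois.items()
            if x0 <= fx <= x1 and y0 <= fy <= y1]
```

    Rebuilding the AOIs each radar frame makes them track the moving, shape-changing objects; the AGT trades off absorbing visual-angle error against spurious overlaps between neighboring AOIs.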

  3. Combining Interactive Infrastructure Modeling and Evolutionary Algorithm Optimization for Sustainable Water Resources Design

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2013-12-01

    Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUI) are often used to evaluate infrastructure portfolios; these GUI based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Multiobjective evolutionary algorithm (MOEA) based optimization can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using a MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs and reveal how formulations should be modified to better capture stakeholders' preferences.

  4. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 5: Specification for EROS operations control center

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The functional, performance, and design requirements for the Operations Control Center (OCC) of the Earth Observatory Satellite (EOS) system are presented. The OCC controls the operations of the EOS satellite to acquire mission data consisting of: (1) thematic mapper data, (2) multispectral scanner data on EOS-A, or High Resolution Pointable Imager data on EOS-B, and (3) data collection system (DCS) data. The various inputs to the OCC are identified. The functional requirements of the OCC are defined. The specific systems and subsystems of the OCC are described and block diagrams are provided.

  5. Preliminary design specification for Department of Energy standardized spent nuclear fuel canisters. Volume 2: Rationale document

    SciTech Connect

    1998-08-19

    This document (Volume 2) is a companion to a preliminary design specification for canisters to be used during the handling, storage, transportation, and repository disposal of Department of Energy (DOE) spent nuclear fuel (SNF). This document contains no procurement information, such as the number of canisters to be fabricated or explicit timeframes for deliverables. However, this rationale document does provide background information and design philosophy to help engineers better understand the established design criteria (contained in Volume 1) necessary to correctly design and fabricate these DOE SNF canisters.

  6. 12 CFR 1815.104 - Specific responsibilities of the designated Fund official.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Specific responsibilities of the designated... FUND, DEPARTMENT OF THE TREASURY ENVIRONMENTAL QUALITY § 1815.104 Specific responsibilities of the... decisionmaking processes to ensure that environmental factors are properly considered in all proposals...

  7. 36 CFR 907.5 - Specific responsibilities of designated Corporation official.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Specific responsibilities of... DEVELOPMENT CORPORATION ENVIRONMENTAL QUALITY § 907.5 Specific responsibilities of designated Corporation... Corporation's planning and decision-making processes to ensure that environmental factors are...

  8. Single Event Testing on Complex Devices: Test Like You Fly versus Test-Specific Design Structures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth A.

    2014-01-01

    We present a framework for evaluating complex digital systems targeted for harsh radiation environments such as space. Focus is limited to analyzing the single event upset (SEU) susceptibility of designs implemented inside Field Programmable Gate Array (FPGA) devices. Tradeoffs are provided between application-specific and test-specific test structures.

  9. Single Event Testing on Complex Devices: Test Like You Fly Versus Test-Specific Design Structures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth

    2016-01-01

    We present a mechanism for evaluating complex digital systems targeted for harsh radiation environments such as space. Focus is limited to analyzing the single event upset (SEU) susceptibility of designs implemented inside Field Programmable Gate Array (FPGA) devices. Tradeoffs are provided between application-specific and test-specific test structures.

  10. Efficient design method for cell allocation in hybrid CMOS/nanodevices using a cultural algorithm with chaotic behavior

    NASA Astrophysics Data System (ADS)

    Pan, Zhong-Liang; Chen, Ling; Zhang, Guang-Zhao

    2016-04-01

    The hybrid CMOS molecular (CMOL) circuit, which combines complementary metal-oxide-semiconductor (CMOS) components with nanoscale wires and switches, can exhibit significantly improved performance. In CMOL circuits, the nanodevices, which are called cells, should be placed appropriately and are connected by nanowires. The cells should be connected such that they follow the shortest path. This paper presents an efficient method of cell allocation in CMOL circuits with the hybrid CMOS/nanodevice structure; the method is based on a cultural algorithm with chaotic behavior. The optimal model of cell allocation is derived, and the coding of an individual representing a cell allocation is described. Then the cultural algorithm with chaotic behavior is designed to solve the optimal model. The cultural algorithm consists of a population space, a belief space, and a protocol that describes how knowledge is exchanged between the population and belief spaces. In this paper, the evolutionary processes of the population space employ a genetic algorithm in which three populations undergo parallel evolution. The evolutionary processes of the belief space use a chaotic ant colony algorithm. Extensive experiments on cell allocation in benchmark circuits showed that a low area usage can be obtained using the proposed method, and the computation time can be reduced greatly compared to that of a conventional genetic algorithm.
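    The chaotic ingredient of such an algorithm is commonly supplied by a logistic map in its fully chaotic regime (r = 4). The sketch below (hypothetical names, not the authors' code) uses successive chaotic values in place of pseudo-random draws when mutating cell placements:

```python
def logistic_map(x0=0.7, r=4.0):
    """Chaotic sequence on (0, 1); r = 4 is the fully chaotic regime."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def chaotic_mutate(cell_slots, grid_size, chaos, rate=0.1):
    """Reassign each cell to a chaos-chosen grid slot with probability
    `rate`, drawing from the chaotic sequence instead of a PRNG."""
    out = []
    for slot in cell_slots:
        if next(chaos) < rate:
            out.append(int(next(chaos) * grid_size) % grid_size)
        else:
            out.append(slot)
    return out
```

    Chaotic sequences are ergodic but deterministic, which is why they are sometimes preferred over pseudo-random generators for driving mutation and, as in the paper, for the ant colony search in the belief space.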

  11. Better Educational Website Interface Design: The Implications from Gender-Specific Preferences in Graduate Students

    ERIC Educational Resources Information Center

    Hsu, Yu-chang

    2006-01-01

    This study investigated graduate students' gender-specific preferences for certain website interface design features, intending to generate useful information for instructors in choosing and for website designers in creating educational websites. The features investigated in this study included colour value, major navigation buttons placement, and…

  12. Improving Students' Conceptual Understanding of a Specific Content Learning: A Designed Teaching Sequence

    ERIC Educational Resources Information Center

    Ahmad, N. J.; Lah, Y. Che

    2012-01-01

    The efficacy of a teaching sequence designed for a specific content of learning of electrochemistry is described in this paper. The design of the teaching draws upon theoretical insights into perspectives on learning and empirical studies to improve the teaching of this topic. A case study involving two classes, the experimental and baseline…

  13. A domain-specific design architecture for composite material design and aircraft part redesign

    NASA Technical Reports Server (NTRS)

    Punch, W. F., III; Keller, K. J.; Bond, W.; Sticklen, J.

    1992-01-01

    Advanced composites have been targeted as a 'leapfrog' technology that would provide a unique global competitive position for U.S. industry. Composites are unique in the requirements for an integrated approach to designing, manufacturing, and marketing of products developed utilizing the new materials of construction. Numerous studies extending across the entire economic spectrum of the United States from aerospace to military to durable goods have identified composites as a 'key' technology. In general there have been two approaches to composite construction: build models of a given composite materials, then determine characteristics of the material via numerical simulation and empirical testing; and experience-directed construction of fabrication plans for building composites with given properties. The first route sets a goal to capture basic understanding of a device (the composite) by use of a rigorous mathematical model; the second attempts to capture the expertise about the process of fabricating a composite (to date) at a surface level typically expressed in a rule based system. From an AI perspective, these two research lines are attacking distinctly different problems, and both tracks have current limitations. The mathematical modeling approach has yielded a wealth of data but a large number of simplifying assumptions are needed to make numerical simulation tractable. Likewise, although surface level expertise about how to build a particular composite may yield important results, recent trends in the KBS area are towards augmenting surface level problem solving with deeper level knowledge. Many of the relative advantages of composites, e.g., the strength:weight ratio, is most prominent when the entire component is designed as a unitary piece. The bottleneck in undertaking such unitary design lies in the difficulty of the re-design task. Designing the fabrication protocols for a complex-shaped, thick section composite are currently very difficult. 

  14. Neural signal processing and closed-loop control algorithm design for an implanted neural recording and stimulation system.

    PubMed

    Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N

    2015-08-01

    A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate to behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods. Next, we describe the LFP time-frequency analysis and feature derivation from the two spike sorting methods. Spike sorting includes a novel approach to constructing a dictionary learning algorithm in a Compressed Sensing (CS) framework. We present a joint prediction scheme to determine the class of neural spikes in the dictionary learning framework; and, the second approach is a modified OSort algorithm which is implemented in a distributed system optimized for power efficiency. Furthermore, sorted spikes and time-frequency analysis of LFP signals can be used to generate derived features (including cross-frequency coupling, spike-field coupling). We then show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current state as well as its historical values. 
The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state-of-the-art in closed-loop neural control while also meeting the challenges of system power constraints and concurrent development with ongoing scientific research.

  16. Designing, Visualizing, and Discussing Algorithms within a CS 1 Studio Experience: An Empirical Study

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Brown, Jonathan L.

    2008-01-01

    Within the context of an introductory CS1 unit on algorithmic problem-solving, we are exploring the pedagogical value of a novel active learning activity--the "studio experience"--that actively engages learners with algorithm visualization technology. In a studio experience, student pairs are tasked with (a) developing a solution to an algorithm…

  17. Laser communication experiment. Volume 1: Design study report: Spacecraft transceiver. Part 3: LCE design specifications

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The requirements for the design, fabrication, performance, and testing of a 10.6 micron optical heterodyne receiver subsystem for use in a laser communication system are presented. The receiver subsystem, as a part of the laser communication experiment operates in the ATS 6 satellite and in a transportable ground station establishing two-way laser communications between the spacecraft and the transportable ground station. The conditions under which environmental tests are conducted are reported.

  18. Design of protein-interaction specificity affords selective bZIP-binding peptides

    PubMed Central

    Grigoryan, Gevorg; Reinke, Aaron W.; Keating, Amy E.

    2009-01-01

    Interaction specificity is a required feature of biological networks and a necessary characteristic of protein or small-molecule reagents and therapeutics. The ability to alter or inhibit protein interactions selectively would advance basic and applied molecular science. Assessing or modelling interaction specificity requires treating multiple competing complexes, which presents computational and experimental challenges. Here we present a computational framework for designing protein interaction specificity and use it to identify specific peptide partners for human bZIP transcription factors. Protein microarrays were used to characterize designed, synthetic ligands for all but one of 20 bZIP families. The bZIP proteins share strong sequence and structural similarities and thus are challenging targets to bind specifically. Yet many of the designs, including examples that bind the oncoproteins cJun, cFos and cMaf, were selective for their targets over all 19 other families. Collectively, the designs exhibit a wide range of novel interaction profiles, demonstrating that human bZIPs have only sparsely sampled the possible interaction space accessible to them. Our computational method provides a way to systematically analyze tradeoffs between stability and specificity and is suitable for use with many types of structure-scoring functions; thus it may prove broadly useful as a tool for protein design. PMID:19370028

  19. Design of a Four-Element, Hollow-Cube Corner Retroreflector for Satellites by use of a Genetic Algorithm.

    PubMed

    Minato, A; Sugimoto, N

    1998-01-20

    A four-element retroreflector was designed for satellite laser ranging and Earth-satellite-Earth laser long-path absorption measurement of the atmosphere. The retroreflector consists of four symmetrically located corner retroreflectors. Each retroreflector element has curved mirrors and tuned dihedral angles to correct velocity aberrations. A genetic algorithm was employed to optimize dihedral angles of each element and the directions of the four elements. The optimized four-element retroreflector has high reflectance with a reasonably broad angular coverage. It is also shown that the genetic algorithm is effective for optimizing optics with many parameters.
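    The genetic-algorithm optimization described above can be illustrated with a minimal real-coded GA sketch. The dihedral-angle objective is replaced here by a hypothetical quadratic cost, and all parameter names and values are illustrative assumptions, not taken from the paper.

```python
import random

def genetic_optimize(fitness, n_params, bounds, pop_size=30, generations=60,
                     mutation_rate=0.1, seed=0):
    """Minimal real-coded genetic algorithm: elitist selection,
    blend crossover, and Gaussian mutation over bounded parameters."""
    rng = random.Random(seed)
    lo, hi = bounds

    def individual():
        return [rng.uniform(lo, hi) for _ in range(n_params)]

    pop = [individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 5]              # keep the best fifth
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)              # parents from the elite
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            if rng.random() < mutation_rate:
                i = rng.randrange(n_params)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.05)))
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy stand-in for the dihedral-angle objective: deviation from target angles.
target = [0.5, -0.2, 0.9]
cost = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))
best = genetic_optimize(cost, n_params=3, bounds=(-1.0, 1.0))
```

    A real retroreflector design would replace `cost` with a ray-traced merit function over the dihedral angles and element directions, as in the paper.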

  20. Optimal Design of a 3-Leg 6-DOF Parallel Manipulator for a Specific Workspace

    NASA Astrophysics Data System (ADS)

    Fu, Jianxun; Gao, Feng

    2016-04-01

    Researchers have seldom studied the optimum design of a six-degree-of-freedom (DOF) parallel manipulator with three legs based upon a given workspace. An optimal design method for a novel three-leg, six-DOF parallel manipulator (TLPM) is presented. The mechanical structure of this robot is introduced; with this structure, the kinematic constraint equations are decoupled. Analytical solutions of the forward kinematics are worked out, and one configuration of this robot, including the position and orientation of the end-effector, is graphically displayed. Then, on the basis of several extreme positions of the kinematic performances, the task workspace is given. An optimal design algorithm is introduced to find the smallest dimensional parameters of the proposed robot. Examples illustrate the design results, and a design stability index is introduced to ensure that the robot remains a safe distance from the boundary of its actual workspace. Finally, a prototype of the robot is developed based on this method. The method can easily find appropriate kinematic parameters that size a robot with the smallest workspace enclosing a predefined task workspace. It improves design efficiency, ensures that the robot has a small mechanical size while possessing a large workspace volume, and meets lightweight design requirements.

  1. Optimal design of a 3-leg 6-DOF parallel manipulator for a specific workspace

    NASA Astrophysics Data System (ADS)

    Fu, Jianxun; Gao, Feng

    2016-07-01

    Researchers have seldom studied the optimum design of a six-degree-of-freedom (DOF) parallel manipulator with three legs based upon a given workspace. An optimal design method for a novel three-leg, six-DOF parallel manipulator (TLPM) is presented. The mechanical structure of this robot is introduced; with this structure, the kinematic constraint equations are decoupled. Analytical solutions of the forward kinematics are worked out, and one configuration of this robot, including the position and orientation of the end-effector, is graphically displayed. Then, on the basis of several extreme positions of the kinematic performances, the task workspace is given. An optimal design algorithm is introduced to find the smallest dimensional parameters of the proposed robot. Examples illustrate the design results, and a design stability index is introduced to ensure that the robot remains a safe distance from the boundary of its actual workspace. Finally, a prototype of the robot is developed based on this method. The method can easily find appropriate kinematic parameters that size a robot with the smallest workspace enclosing a predefined task workspace. It improves design efficiency, ensures that the robot has a small mechanical size while possessing a large workspace volume, and meets lightweight design requirements.

  2. Designing specific protein–protein interactions using computation, experimental library screening, or integrated methods

    PubMed Central

    Chen, T Scott; Keating, Amy E

    2012-01-01

    Given the importance of protein–protein interactions for nearly all biological processes, the design of protein affinity reagents for use in research, diagnosis or therapy is an important endeavor. Engineered proteins would ideally have high specificities for their intended targets, but achieving interaction specificity by design can be challenging. There are two major approaches to protein design or redesign. Most commonly, proteins and peptides are engineered using experimental library screening and/or in vitro evolution. An alternative approach involves using protein structure and computational modeling to rationally choose sequences predicted to have desirable properties. Computational design has successfully produced novel proteins with enhanced stability, desired interactions and enzymatic function. Here we review the strengths and limitations of experimental library screening and computational structure-based design, giving examples where these methods have been applied to designing protein interaction specificity. We highlight recent studies that demonstrate strategies for combining computational modeling with library screening. The computational methods provide focused libraries predicted to be enriched in sequences with the properties of interest. Such integrated approaches represent a promising way to increase the efficiency of protein design and to engineer complex functionality such as interaction specificity. PMID:22593041

  3. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
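    As a rough illustration of the firefly algorithm used in this study, the sketch below minimizes a toy quadratic cost standing in for the pumping-rate/clean-up-time objective; the parameter values are illustrative assumptions, not those of the FA-FEM model.

```python
import math
import random

def firefly_minimize(cost, dim, bounds, n_fireflies=20, iters=80,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=1):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter
    (lower-cost) ones, with attractiveness decaying as exp(-gamma * r^2)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireflies)]
    for t in range(iters):
        intensity = [cost(x) for x in X]
        step = alpha * (1 - t / iters)               # shrink the random walk over time
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:      # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [min(hi, max(lo, a + beta * (b - a)
                                        + step * rng.uniform(-0.5, 0.5)))
                            for a, b in zip(X[i], X[j])]
                    intensity[i] = cost(X[i])
    return min(X, key=cost)

# Toy stand-in for the remediation objective: a simple quadratic bowl.
sphere = lambda x: sum(v * v for v in x)
best = firefly_minimize(sphere, dim=2, bounds=(-5.0, 5.0))
```

    In the FA-FEM setting, each cost evaluation would instead run the finite element simulation and penalize violations of the water quality constraints.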

  4. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories, and it also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  5. Robust Design of Advanced Thermoelectric Conversion Systems: Probabilistic Design Impacts on Specific Power and Power Flux Optimization

    SciTech Connect

    Hendricks, Terry J.; Karri, Naveen K.

    2008-04-30

    Advanced, direct thermal energy conversion technologies are receiving increased research attention in order to recover waste thermal energy in advanced vehicles and industrial processes. Advanced thermoelectric (TE) systems necessarily require integrated system-level analyses to establish accurate optimum system designs. Past system-level design and analysis has relied on well-defined deterministic input parameters, even though many critically important environmental and system design parameters in the above-mentioned applications are randomly variable, sometimes according to complex relationships, rather than discrete, well-known deterministic variables. This work describes new research and development creating techniques and capabilities for the probabilistic design and analysis of advanced TE power generation systems, quantifying the effects of randomly uncertain design inputs in determining more robust optimum TE system designs and expected outputs. Selected case studies involving stochastic TE material properties demonstrate key stochastic material impacts on power, optimum TE area, specific power, and power flux in the TE design optimization process. The magnitudes and directions of these design modifications are quantified for selected TE system design analysis cases.

  6. Design of LED-based reflector-array module for specific illuminance distribution

    NASA Astrophysics Data System (ADS)

    Chen, Enguo; Yu, Feihong

    2013-02-01

    This paper presents an efficient and practical design method for an LED-based reflector-array lighting module. Improving on previous designs, the method offers greater design freedom to achieve a specific illuminance distribution for actual lighting applications and accounts for the LED light intensity distribution while shortening the design time. The detailed design of the lighting system is thoroughly investigated. To demonstrate the effectiveness of the method, an ultra-compact reflector-array module, which produces a rectangular illumination area with a large aspect ratio, is specially designed to meet the demanding requirements of industrial lighting applications. Design results show that most of the LED emitting energy can be collected into the required lighting region while higher brightness and better uniformity are simultaneously achieved within the focus region. It is expected that this method will have great potential for other lighting applications.

  7. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
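    The simulated annealing search described above can be sketched as follows; since the regression-model objective is not given in the abstract, it is replaced here by a hypothetical multimodal function with many local optima, the kind of landscape the method targets.

```python
import math
import random

def simulated_annealing(objective, x0, bounds, T0=1.0, cooling=0.95,
                        steps=500, seed=2):
    """Minimal simulated annealing: Gaussian neighbour moves, always accept
    improvements, accept worse moves with probability exp(-delta / T)."""
    rng = random.Random(seed)
    lo, hi = bounds
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    T = T0
    for _ in range(steps):
        cand = [min(hi, max(lo, v + rng.gauss(0, 0.1 * (hi - lo)))) for v in x]
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling                                  # geometric cooling schedule
    return best, fbest

# Hypothetical two-parameter objective: a bumpy bowl with many local optima.
bumpy = lambda p: sum(v * v + 0.3 * (1 - math.cos(6 * v)) for v in p)
params, score = simulated_annealing(bumpy, x0=[0.8, -0.8], bounds=(-1.0, 1.0))
```

    In the paper's setting, the two search variables would be the MMSE-TRA recursion parameters and the objective the regression-model prediction of noise reduction quality.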

  8. Design of a Broadband Electrical Impedance Matching Network for Piezoelectric Ultrasound Transducers Based on a Genetic Algorithm

    PubMed Central

    An, Jianfei; Song, Kezhu; Zhang, Shuangxi; Yang, Junfeng; Cao, Ping

    2014-01-01

    An improved method based on a genetic algorithm (GA) is developed to design a broadband electrical impedance matching network for piezoelectric ultrasound transducers. A key feature of the new method is that it can optimize both the topology of the matching network and the component values. The main idea of this method is to find the optimal matching network within a set of candidate topologies. Successful experiences from classical algorithms are absorbed to limit the size of the set of candidate topologies and greatly simplify the calculation process. A binary-coded GA and a real-coded GA are used for topology optimization and component optimization, respectively. Calculation strategies, such as an elitist strategy and a clearing niche method, are adopted to ensure that the algorithm converges to the global optimal result. Simulation and experimental results show that matching networks with better performance can be achieved by this improved method. PMID:24743156

  9. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    PubMed

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method with suitable objective functions. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural, and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR systems are used in medical image retrieval applications to assist physicians in clinical decision support and in research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure, and similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall. The CBIR systems developed using the HSA-based Otsu MLT and conventional Otsu MLT methods are compared. Precision and recall are found to be 96% and 58%, respectively, for the CBIR system using HSA-based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening.
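    The similarity-matching step described above (Euclidean distance between feature vectors, ranking, and precision/recall evaluation) can be sketched as follows; the feature vectors and image ids are hypothetical placeholders, not data from the study.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, database, k=3):
    """Rank database images by Euclidean distance between feature vectors
    and return the k nearest (most similar) image ids."""
    ranked = sorted(database, key=lambda item: euclidean(query, item[1]))
    return [img_id for img_id, _ in ranked[:k]]

def precision_recall(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical 3-element feature vectors (e.g. statistical/textural/structural).
database = [
    ("dr_01",     [0.90, 0.80, 0.70]),
    ("dr_02",     [0.85, 0.75, 0.80]),
    ("normal_01", [0.10, 0.20, 0.15]),
    ("normal_02", [0.20, 0.10, 0.10]),
]
query = [0.88, 0.79, 0.72]                 # features of a DR-affected query image
top = retrieve(query, database, k=2)
p, r = precision_recall(top, relevant={"dr_01", "dr_02"})
```

    In the actual system the feature vectors would come from the HSA-based Otsu MLT segmentation pipeline rather than being hand-written.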

  10. Design and optimization of pulsed Chemical Exchange Saturation Transfer MRI using a multiobjective genetic algorithm.

    PubMed

    Yoshimaru, Eriko S; Randtke, Edward A; Pagel, Mark D; Cárdenas-Rodríguez, Julio

    2016-02-01

    Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and results were compared to continuous wave saturation and Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed the best, followed by parameters and shapes optimized by the genetic algorithm and then followed by Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulse CEST parameters and that the results are translatable to clinical scanners. PMID:26778301

  11. Extended algorithm for the design of diffractive optical elements around the focal plane

    NASA Astrophysics Data System (ADS)

    Wu, Rong; Shu, Fang-Jie; Zhang, Wei; Zhang, Xiao-Bo; Li, Yong-Ping

    2007-08-01

    We present a multiplane algorithm for three-dimensional uniform illumination. The large-diameter diffractive optical element simulated by this algorithm homogeneously concentrates more than 86.5% of the incident energy into a 200 μm length of columnar space around the focal plane. The intensity profile in the whole space is nearly flat-top, and the beam quality measured by the root mean square is less than 20.6%. The algorithm is very useful when a great deal of tolerance is required for the installation error of the optical system, or for particular applications such as uniform illumination on an inclined plane.

  12. Design and optimization of pulsed Chemical Exchange Saturation Transfer MRI using a multiobjective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yoshimaru, Eriko S.; Randtke, Edward A.; Pagel, Mark D.; Cárdenas-Rodríguez, Julio

    2016-02-01

    Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and results were compared to continuous wave saturation and Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed the best, followed by parameters and shapes optimized by the genetic algorithm and then followed by Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulse CEST parameters and that the results are translatable to clinical scanners.

  13. From ergonomics to design specifications: contributions to the design of a processing machine in a tire company.

    PubMed

    Moraes, A S P; Arezes, P M; Vasconcelos, R

    2012-01-01

    The development of ergonomics recommendations, guidelines, and standards is an attempt to promote the integration of ergonomics into industrial contexts. Such developments result from several sources and professionals and represent the effort that has been made to develop healthier and safer work environments. However, the availability of a large amount of data and documents regarding ergonomics does not guarantee their applicability. The main goal of this paper is to use a specific case to demonstrate how ergonomics criteria were developed to contribute to the design of workplaces. Based on the results obtained from research undertaken in a tire company, it was observed that ergonomics criteria should be presented as design specifications in order to be used by engineers and designers. In conclusion, it is observed that the multiple-constraint environment impeded the application of the ergonomics criteria. It was also observed that knowledge of technical design, acquaintance with ergonomic standards, the level of integration in the design team, and the ability to communicate with workers and other technical staff are of paramount importance in integrating ergonomics criteria into the design process.

  14. Attributes of effective and efficient kindergarten reading intervention: an examination of instructional time and design specificity.

    PubMed

    Simmons, Deborah C; Kame'enui, Edward J; Harn, Beth; Coyne, Michael D; Stoolmiller, Mike; Santoro, Lana Edwards; Smith, Sylvia B; Beck, Carrie Thomas; Kaufman, Noah K

    2007-01-01

    A randomized experimental design with three levels of intervention was used to compare the effects of beginning reading interventions on early phonemic, decoding, and spelling outcomes of 96 kindergartners identified as at risk for reading difficulty. The three instructional interventions varied systematically along two dimensions--time and design of instruction specificity--and consisted of (a) 30 min with high design specificity (30/H), (b) 15 min with high design specificity plus 15 min of non-code-based instruction (15/H+15), and (c) a commercial comparison condition that reflected 30 min of moderate design specificity instruction (30/M). With the exception of the second 15 min of the 15/H+15 condition, all instruction focused on phonemic, alphabetic, and orthographic skills and strategies. Students were randomly assigned to one of the three interventions and received 108 thirty-minute sessions of small-group instruction as a supplement to their typical half-day kindergarten experience. Planned comparisons indicated findings of statistical and practical significance that varied according to measure and students' entry-level performance. The results are discussed in terms of the pedagogical precision needed to design and provide effective and efficient instruction for students who are most at risk.

  15. Specification and Design of Electrical Flight System Architectures with SysML

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L., Jr.; Jimenez, Alejandro

    2012-01-01

    Modern space flight systems are required to perform more complex functions than previous generations to support space missions. This demand is driving the trend to deploy more electronics to realize system functionality. The traditional approach for the specification, design, and deployment of electrical system architectures in space flight systems includes the use of informal definitions and descriptions that are often embedded within loosely coupled but highly interdependent design documents. Traditional methods become inefficient to cope with increasing system complexity, evolving requirements, and the ability to meet project budget and time constraints. Thus, there is a need for more rigorous methods to capture the relevant information about the electrical system architecture as the design evolves. In this work, we propose a model-centric approach to support the specification and design of electrical flight system architectures using the System Modeling Language (SysML). In our approach, we develop a domain specific language for specifying electrical system architectures, and we propose a design flow for the specification and design of electrical interfaces. Our approach is applied to a practical flight system.

  16. EEG/ERP adaptive noise canceller design with controlled search space (CSS) approach in cuckoo and other optimization algorithms.

    PubMed

    Ahirwal, M K; Kumar, Anil; Singh, G K

    2013-01-01

    This paper explores the use of adaptive filtering with swarm intelligence/evolutionary techniques in the field of electroencephalogram/event-related potential (EEG/ERP) noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, along with their variants, are implemented to design an optimized adaptive noise canceller. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancellers using traditional algorithms, such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms, are also implemented for comparison. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used because of their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. Although the traditional algorithms take negligible computation time, they are unable to offer good shape preservation of the ERP, with an average computational time and shape measure of 1.41E-02 s and 2.60E+00, respectively.
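    For reference, the traditional least-mean-square adaptive noise canceller that the swarm-based designs are compared against can be sketched as follows; the filter length, step size, and synthetic signals are illustrative assumptions, not values from the study.

```python
import math
import random

def lms_noise_canceller(primary, reference, n_taps=4, mu=0.05):
    """Classic LMS adaptive noise canceller: adaptively filters the noise
    reference to estimate the noise in the primary input and subtracts it,
    so the error output is the cleaned signal."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    cleaned = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                             # shift in newest sample
        y = sum(wi * xi for wi, xi in zip(w, buf))       # noise estimate
        e = d - y                                        # error = cleaned signal
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]
        cleaned.append(e)
    return cleaned

# Synthetic test: a slow sinusoid (standing in for an ERP) buried in noise
# that is correlated with the available reference channel.
rng = random.Random(3)
n = 2000
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
noise = [rng.gauss(0, 1) for _ in range(n)]
primary = [s + 0.8 * v for s, v in zip(signal, noise)]
cleaned = lms_noise_canceller(primary, noise)
```

    The swarm-based designs in the paper replace the gradient update for `w` with a population search over the filter weights, constrained by the proposed controlled search space.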

  17. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

    The numerically controlled oscillator has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation, applied to a numerically controlled oscillator (NCO). By estimating the direction of part of the phase rotations, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the numerically controlled oscillator is simulated and implemented using Quartus II and ModelSim software. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. It is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
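    For context, the traditional rotation-mode CORDIC iteration that the hybrid algorithm improves upon can be sketched as follows (floating-point for clarity; a hardware NCO would use fixed-point shifts and adds, and a phase accumulator to step the angle):

```python
import math

def cordic_sin_cos(angle, iterations=24):
    """Classic rotation-mode CORDIC: rotate the vector (1, 0) toward `angle`
    (|angle| <= pi/2) by micro-rotations of atan(2^-i); the sign of the
    residual angle z decides each micro-rotation's direction."""
    # Accumulated gain of the micro-rotations, pre-compensated in x.
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return y, x    # (sin, cos)

s, c = cordic_sin_cos(0.6)
```

    The hybrid algorithm in the paper estimates the rotation directions `d` for part of the iterations up front instead of deriving each from the residual `z`, which shortens the dependency chain in hardware.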

  18. Design and Implementation of Broadcast Algorithms for Extreme-Scale Systems

    SciTech Connect

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua

    2011-01-01

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray XT5 using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI implementation on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of this important class of collective communications that is high performing, scalable, and uses resources in a scalable manner.
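    A binomial-tree schedule, one classic building block for scalable broadcasts of the kind discussed above, can be sketched as follows; this is a generic illustration of how a broadcast reaches n ranks in O(log n) rounds, not the Cheetah algorithms themselves.

```python
def binomial_broadcast_schedule(n_ranks, root=0):
    """Sketch of a binomial-tree broadcast schedule: in round k, every rank
    that already holds the data forwards it to the rank 2^k above it, so all
    n ranks are reached in ceil(log2(n)) rounds (shown here for root 0)."""
    have = {root}
    rounds = []
    k = 0
    while len(have) < n_ranks:
        sends = []
        for src in sorted(have):
            dst = src + 2 ** k
            if dst < n_ranks and dst not in have:
                sends.append((src, dst))
        for _, dst in sends:
            have.add(dst)
        rounds.append(sends)
        k += 1
    return rounds

schedule = binomial_broadcast_schedule(8)
```

    For 8 ranks this yields three rounds, with the number of concurrent senders doubling each round; real MPI implementations layer pipelining and topology awareness on top of such schedules.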

  19. Custom-Designed Molecular Scissors for Site-Specific Manipulation of the Plant and Mammalian Genomes

    NASA Astrophysics Data System (ADS)

    Kandavelou, Karthikeyan; Chandrasegaran, Srinivasan

    Zinc finger nucleases (ZFNs) are custom-designed molecular scissors, engineered to cut at specific DNA sequences. ZFNs combine the zinc finger proteins (ZFPs) with the nonspecific cleavage domain of the FokI restriction enzyme. The DNA-binding specificity of ZFNs can be easily altered experimentally. This easy manipulation of the ZFN recognition specificity enables one to deliver a targeted double-strand break (DSB) to a genome. The targeted DSB stimulates local gene targeting by several orders of magnitude at that specific cut site via homologous recombination (HR). Thus, ZFNs have become an important experimental tool to make site-specific and permanent alterations to genomes of not only plants and mammals but also of many other organisms. Engineering of custom ZFNs involves many steps. The first step is to identify a ZFN site at or near the chosen chromosomal target within the genome to which ZFNs will bind and cut. The second step is to design and/or select various ZFP combinations that will bind to the chosen target site with high specificity and affinity. The DNA coding sequence for the designed ZFPs are then assembled by polymerase chain reaction (PCR) using oligonucleotides. The third step is to fuse the ZFP constructs to the FokI cleavage domain. The ZFNs are then expressed as proteins by using the rabbit reticulocyte in vitro transcription/translation system and the protein products assayed for their DNA cleavage specificity.

  20. Custom-Designed Molecular Scissors for Site-Specific Manipulation of the Plant and Mammalian Genomes

    PubMed Central

    Kandavelou, Karthikeyan; Chandrasegaran, Srinivasan

    2010-01-01

    Summary Zinc finger nucleases (ZFNs) are custom-designed molecular scissors, engineered to cut at specific DNA sequences. ZFNs combine the zinc finger proteins (ZFPs) with the nonspecific cleavage domain of the FokI restriction enzyme. The DNA-binding specificity of ZFNs can be easily altered experimentally. This easy manipulation of the ZFN recognition specificity enables one to deliver a targeted double-strand break (DSB) to a genome. The targeted DSB stimulates local gene targeting by several orders of magnitude at that specific cut site via homologous recombination (HR). Thus, ZFNs have become an important experimental tool to make site-specific and permanent alterations to genomes of not only plants and mammals but also of many other organisms. Engineering of custom ZFNs involves many steps. The first step is to identify a ZFN site at or near the chosen chromosomal target within the genome to which ZFNs will bind and cut. The second step is to design and/or select various ZFP combinations that will bind to the chosen target site with high specificity and affinity. The DNA coding sequence for the designed ZFPs are then assembled by polymerase chain reaction (PCR) using oligonucleotides. The third step is to fuse the ZFP constructs to the FokI cleavage domain. The ZFNs are then expressed as proteins by using the rabbit reticulocyte in vitro transcription/translation system and the protein products assayed for their DNA cleavage specificity. PMID:19488728

  1. Optimal seismic design of reinforced concrete structures under time-history earthquake loads using an intelligent hybrid algorithm

    NASA Astrophysics Data System (ADS)

    Gharehbaghi, Sadjad; Khatibinia, Mohsen

    2015-03-01

    A reliable seismic-resistant design of structures is achieved in accordance with seismic design codes by designing structures under seven or more pairs of earthquake records. Based on the recommendations of seismic design codes, the average time-history responses (ATHR) of the structure are required. This paper focuses on the optimal seismic design of reinforced concrete (RC) structures against ten earthquake records using a hybrid of the particle swarm optimization algorithm and an intelligent regression model (IRM). In order to reduce the computational time of the optimization procedure due to the computational effort of time-history analyses, the IRM is proposed to accurately predict the ATHR of structures. The proposed IRM combines the subtractive algorithm (SA), the K-means clustering approach, and a wavelet weighted least-squares support vector machine (WWLS-SVM). To predict the ATHR of structures, the input-output samples of structures are first classified by SA and the K-means clustering approach. Then, the WWLS-SVM is trained with few samples and high accuracy for each cluster. Nine- and eighteen-storey RC frames are designed optimally to illustrate the effectiveness and practicality of the proposed IRM. The numerical results demonstrate the efficiency and computational advantages of the IRM for the optimal design of structures subjected to time-history earthquake loads.
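    The K-means clustering step used above to classify the input-output samples can be sketched as follows; the toy 2-D samples are illustrative, not structural response data.

```python
import random

def kmeans(points, k, iters=20, seed=4):
    """Minimal K-means: assign each sample to its nearest centroid, then
    recompute each centroid as its cluster mean, for a fixed number of
    iterations."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Empty clusters keep their previous centroid.
        centroids = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters

# Two well-separated toy clusters of 2-D samples.
pts = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
       [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]]
centroids, clusters = kmeans(pts, k=2)
```

    In the proposed IRM, each resulting cluster would then get its own WWLS-SVM trained on the samples assigned to it.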

  2. Structural, kinetic, and thermodynamic studies of specificity designed HIV-1 protease

    SciTech Connect

    Alvizo, Oscar; Mittal, Seema; Mayo, Stephen L.; Schiffer, Celia A.

    2012-10-23

    HIV-1 protease recognizes and cleaves more than 12 different substrates leading to viral maturation. While these substrates share no conserved motif, they are specifically selected for and cleaved by the protease during the viral life cycle. Drug resistant mutations evolve within the protease that compromise inhibitor binding but allow the continued recognition of all these substrates. While the substrate envelope defines a general shape for substrate recognition, successfully predicting the determinants of substrate binding specificity would provide additional insights into the mechanism of altered molecular recognition in resistant proteases. We designed a variant of HIV protease with altered specificity using positive computational design methods and validated the design using X-ray crystallography and enzyme biochemistry. The engineered variant, Pr3 (A28S/D30F/G48R), was designed to preferentially bind to one of three of HIV protease's natural substrates: RT-RH over p2-NC and CA-p2. In kinetic assays, RT-RH binding specificity for Pr3 increased threefold compared to the wild-type (WT), which was further confirmed by isothermal titration calorimetry. Crystal structures of the WT protease and the designed variant in complex with RT-RH, CA-p2, and p2-NC were determined. Structural analysis of the designed complexes revealed that one of the engineered substitutions (G48R) potentially stabilized heterogeneous flap conformations, thereby facilitating alternate modes of substrate binding. Our results demonstrate that while substrate specificity could be engineered in HIV protease, the structural pliability of the protease restricted the propagation of interactions as predicted. These results offer new insights into the plasticity and structural determinants of substrate binding specificity of the HIV-1 protease.

  3. Structural, kinetic, and thermodynamic studies of specificity designed HIV-1 protease.

    PubMed

    Alvizo, Oscar; Mittal, Seema; Mayo, Stephen L; Schiffer, Celia A

    2012-07-01

    HIV-1 protease recognizes and cleaves more than 12 different substrates leading to viral maturation. While these substrates share no conserved motif, they are specifically selected for and cleaved by the protease during the viral life cycle. Drug-resistant mutations evolve within the protease that compromise inhibitor binding but allow the continued recognition of all of these substrates. While the substrate envelope defines a general shape for substrate recognition, successfully predicting the determinants of substrate binding specificity would provide additional insights into the mechanism of altered molecular recognition in resistant proteases. We designed a variant of HIV protease with altered specificity using positive computational design methods and validated the design using X-ray crystallography and enzyme biochemistry. The engineered variant, Pr3 (A28S/D30F/G48R), was designed to preferentially bind one of three of HIV protease's natural substrates: RT-RH over p2-NC and CA-p2. In kinetic assays, RT-RH binding specificity for Pr3 increased threefold compared to the wild-type (WT), which was further confirmed by isothermal titration calorimetry. Crystal structures of the WT protease and the designed variant in complex with RT-RH, CA-p2, and p2-NC were determined. Structural analysis of the designed complexes revealed that one of the engineered substitutions (G48R) potentially stabilized heterogeneous flap conformations, thereby facilitating alternate modes of substrate binding. Our results demonstrate that while substrate specificity could be engineered in HIV protease, the structural pliability of the protease restricted the propagation of interactions as predicted. These results offer new insights into the plasticity and structural determinants of substrate binding specificity of the HIV-1 protease.

  4. Computational design of a red fluorophore ligase for site-specific protein labeling in living cells

    DOE PAGES

    Liu, Daniel S.; Nivon, Lucas G.; Richter, Florian; Goldman, Peter J.; Deerinck, Thomas J.; Yao, Jennifer Z.; Richardson, Douglas; Phipps, William S.; Ye, Anne Z.; Ellisman, Mark H.; et al

    2014-10-13

    In this study, chemical fluorophores offer tremendous size and photophysical advantages over fluorescent proteins but are much more challenging to target to specific cellular proteins. Here, we used Rosetta-based computation to design a fluorophore ligase that accepts the red dye resorufin, starting from Escherichia coli lipoic acid ligase. X-ray crystallography showed that the design closely matched the experimental structure. Resorufin ligase catalyzed the site-specific and covalent attachment of resorufin to various cellular proteins genetically fused to a 13-aa recognition peptide in multiple mammalian cell lines and in primary cultured neurons. We used resorufin ligase to perform superresolution imaging of the intermediate filament protein vimentin by stimulated emission depletion and electron microscopies. This work illustrates the power of Rosetta for major redesign of enzyme specificity and introduces a tool for minimally invasive, highly specific imaging of cellular proteins by both conventional and superresolution microscopies.

  5. Computational design of a red fluorophore ligase for site-specific protein labeling in living cells

    SciTech Connect

    Liu, Daniel S.; Nivon, Lucas G.; Richter, Florian; Goldman, Peter J.; Deerinck, Thomas J.; Yao, Jennifer Z.; Richardson, Douglas; Phipps, William S.; Ye, Anne Z.; Ellisman, Mark H.; Drennan, Catherine L.; Baker, David; Ting, Alice Y.

    2014-10-13

    In this study, chemical fluorophores offer tremendous size and photophysical advantages over fluorescent proteins but are much more challenging to target to specific cellular proteins. Here, we used Rosetta-based computation to design a fluorophore ligase that accepts the red dye resorufin, starting from Escherichia coli lipoic acid ligase. X-ray crystallography showed that the design closely matched the experimental structure. Resorufin ligase catalyzed the site-specific and covalent attachment of resorufin to various cellular proteins genetically fused to a 13-aa recognition peptide in multiple mammalian cell lines and in primary cultured neurons. We used resorufin ligase to perform superresolution imaging of the intermediate filament protein vimentin by stimulated emission depletion and electron microscopies. This work illustrates the power of Rosetta for major redesign of enzyme specificity and introduces a tool for minimally invasive, highly specific imaging of cellular proteins by both conventional and superresolution microscopies.

  6. Computational design of a red fluorophore ligase for site-specific protein labeling in living cells

    PubMed Central

    Liu, Daniel S.; Nivón, Lucas G.; Richter, Florian; Goldman, Peter J.; Deerinck, Thomas J.; Yao, Jennifer Z.; Richardson, Douglas; Phipps, William S.; Ye, Anne Z.; Ellisman, Mark H.; Drennan, Catherine L.; Baker, David; Ting, Alice Y.

    2014-01-01

    Chemical fluorophores offer tremendous size and photophysical advantages over fluorescent proteins but are much more challenging to target to specific cellular proteins. Here, we used Rosetta-based computation to design a fluorophore ligase that accepts the red dye resorufin, starting from Escherichia coli lipoic acid ligase. X-ray crystallography showed that the design closely matched the experimental structure. Resorufin ligase catalyzed the site-specific and covalent attachment of resorufin to various cellular proteins genetically fused to a 13-aa recognition peptide in multiple mammalian cell lines and in primary cultured neurons. We used resorufin ligase to perform superresolution imaging of the intermediate filament protein vimentin by stimulated emission depletion and electron microscopies. This work illustrates the power of Rosetta for major redesign of enzyme specificity and introduces a tool for minimally invasive, highly specific imaging of cellular proteins by both conventional and superresolution microscopies. PMID:25313043

  7. Using Space Weather Variability in Evaluating the Radiation Environment Design Specifications for NASA's Constellation Program

    NASA Technical Reports Server (NTRS)

    Coffey, Victoria N.; Blackwell, William C.; Minow, Joseph I.; Bruce, Margaret B.; Howard, James W.

    2007-01-01

    NASA's Constellation program, initiated to fulfill the Vision for Space Exploration, will create a new generation of vehicles for servicing low Earth orbit, the Moon, and beyond. Space radiation specifications for space system hardware are necessarily conservative to assure system robustness for a wide range of space environments. Spectral models of solar particle events and trapped radiation belt environments are used to develop the design requirements for estimating total ionizing radiation dose, displacement damage, and single event effects for Constellation hardware. We first describe the rationale for the spectra chosen to establish the total dose and single event design environmental specifications for Constellation systems. We then compare the variability of the space environment against the spectral design models to evaluate their applicability as conservative design environments and their potential vulnerabilities to extreme space weather events.

  8. Method for Predicting the Energy Characteristics of Li-Ion Cells Designed for High Specific Energy

    NASA Technical Reports Server (NTRS)

    Bennett, William R.

    2012-01-01

    Novel electrode materials with increased specific capacity and voltage performance are critical to NASA's goals for developing Li-ion batteries with increased specific energy and energy density. Although the performance metrics of the individual electrodes are critically important, a fundamental understanding of the interactions of electrodes in a full cell is essential to achieving the desired performance, and to establishing meaningful goals for electrode performance in the first place. This paper presents design considerations for matching positive and negative electrodes in a viable design. Methods for predicting cell-level performance, based on laboratory data for individual electrodes, are presented and discussed.
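The electrode-matching arithmetic that such methods formalize can be illustrated with a minimal sketch. All numbers, the N/P ratio, and the inactive-mass fraction below are hypothetical assumptions, not values from the report; the point is only how a cell-level specific-energy estimate follows from individual electrode capacities and cell voltage:

```python
# Hedged sketch: estimating cell-level specific energy from electrode-level
# laboratory data. All parameter values are illustrative assumptions.

def cell_specific_energy(pos_capacity_mAh_g, neg_capacity_mAh_g,
                         cell_voltage_V, np_ratio=1.1,
                         inactive_mass_fraction=0.4):
    """Estimate cell specific energy (Wh/kg) from electrode specific
    capacities, a target negative-to-positive (N/P) capacity ratio, and
    the fraction of cell mass that is inactive (electrolyte, separator,
    current collectors, packaging)."""
    # Negative-electrode mass needed per gram of positive electrode
    # to achieve the chosen N/P capacity ratio.
    neg_mass_per_g_pos = np_ratio * pos_capacity_mAh_g / neg_capacity_mAh_g
    active_mass = 1.0 + neg_mass_per_g_pos          # g per g of positive
    total_mass = active_mass / (1.0 - inactive_mass_fraction)
    # Cell capacity is limited by the positive electrode here.
    # mAh/g * V / (g/g) gives mWh/g, numerically equal to Wh/kg.
    return pos_capacity_mAh_g * cell_voltage_V / total_mass

# Example: a 200 mAh/g cathode paired with a 350 mAh/g anode at 3.6 V
# comes out near 265 Wh/kg under these assumptions.
print(round(cell_specific_energy(200, 350, 3.6), 1))
```

A real design exercise would layer in irreversible first-cycle losses and voltage fade, but the mass-balance skeleton above is why cell-level specific energy is always well below the cathode's own mWh/g figure.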

  9. Designing patient-specific 3D printed craniofacial implants using a novel topology optimization method.

    PubMed

    Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H

    2016-07-01

    Large craniofacial defects require efficient bone replacements that should not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve this task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases to ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect the viability of the design. Further, once a design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition that mimics masticatory activity. The full-field strain results from 3D image correlation and finite element analysis indicate that the solution can survive a maximum mastication load of 120 lb. The designs also have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework to design bone replacement shapes would offer surgeons new alternatives for complicated mid-face reconstruction. PMID:26660897
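Compliance minimization under a volume constraint, the objective named in this abstract, can be sketched in its simplest possible form: a one-dimensional bar of series elements with SIMP-style density penalization and an optimality-criteria update. Everything below is an illustrative stand-in for the paper's multiresolution 3D method; the 1-D optimum is trivially uniform, so the sketch only demonstrates the sensitivity/update/volume-bisection cycle:

```python
# Hedged sketch: SIMP-penalized compliance minimization on a 1-D bar of
# n series elements, solved with an optimality-criteria (OC) update.
# Unit load and unit base stiffness are assumed throughout.

def optimize_bar(n=8, p=3.0, volfrac=0.5, iters=50):
    x = [volfrac] * n                       # densities start at the volume fraction
    for _ in range(iters):
        # Series compliance C = sum(1 / x_i^p); its sensitivity is
        # dC/dx_i = -p / x_i^(p+1), always negative (more material helps).
        dc = [-p / xi ** (p + 1) for xi in x]
        # OC update with bisection on the Lagrange multiplier to enforce
        # the volume constraint sum(x) = volfrac * n.
        lo, hi = 1e-9, 1e9
        while hi - lo > 1e-10 * (lo + hi):
            lam = 0.5 * (lo + hi)
            xnew = [min(1.0, max(1e-3, xi * (-d / lam) ** 0.5))
                    for xi, d in zip(x, dc)]
            if sum(xnew) > volfrac * n:
                lo = lam                    # too much material: raise price
            else:
                hi = lam
        x = xnew
    return x

densities = optimize_bar()
# For a uniform series bar the optimum is a uniform density field.
print([round(v, 3) for v in densities])
```

In 3D the same loop runs over finite-element densities with filtered sensitivities; the multiresolution idea in the paper decouples the design grid from the analysis grid, but the update skeleton is the same.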

  10. Designing patient-specific 3D printed craniofacial implants using a novel topology optimization method.

    PubMed

    Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H

    2016-07-01

    Large craniofacial defects require efficient bone replacements that should not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve this task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases to ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect the viability of the design. Further, once a design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition that mimics masticatory activity. The full-field strain results from 3D image correlation and finite element analysis indicate that the solution can survive a maximum mastication load of 120 lb. The designs also have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework to design bone replacement shapes would offer surgeons new alternatives for complicated mid-face reconstruction.

  11. Mod-5A wind turbine generator program design report. Volume 4: Drawings and specifications, book 3

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The design, development, and analysis of the 7.3 MW MOD-5A wind turbine generator are documented. This volume contains the drawings and specifications developed for the final design. It is divided into five books, of which this is the third, containing drawings 47A380074 through 47A380126. A full parts-breakdown listing is provided, as well as a where-used list.

  12. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language, which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code, is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a), from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system-level diagrams such as structure charts and data flow diagrams, and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  13. Design of Optimal Treatments for Neuromusculoskeletal Disorders using Patient-Specific Multibody Dynamic Models

    PubMed Central

    Fregly, Benjamin J.

    2011-01-01

    Disorders of the human neuromusculoskeletal system such as osteoarthritis, stroke, cerebral palsy, and paraplegia significantly affect mobility and result in a decreased quality of life. Surgical and rehabilitation treatment planning for these disorders is based primarily on static anatomic measurements and dynamic functional measurements filtered through clinical experience. While this subjective treatment planning approach works well in many cases, it does not predict accurate functional outcome in many others. This paper presents a vision for how patient-specific multibody dynamic models can serve as the foundation for an objective treatment planning approach that identifies optimal treatments and treatment parameters on an individual patient basis. First, a computational paradigm is presented for constructing patient-specific multibody dynamic models. This paradigm involves a combination of patient-specific skeletal models, muscle-tendon models, neural control models, and articular contact models, with the complexity of the complete model being dictated by the requirements of the clinical problem being addressed. Next, three clinical applications are presented to illustrate how such models could be used in the treatment design process. One application involves the design of patient-specific gait modification strategies for knee osteoarthritis rehabilitation, a second involves the selection of optimal patient-specific surgical parameters for a particular knee osteoarthritis surgery, and the third involves the design of patient-specific muscle stimulation patterns for stroke rehabilitation. The paper concludes by discussing important challenges that need to be overcome to turn this vision into reality. PMID:21785529

  14. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal-information criterion that chooses, among competing designs, the one that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search for the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
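The combinatorial search described above can be conveyed with a miniature GA: select k well sites from a candidate list to maximize the sum of squared sensitivities under a minimum-spacing constraint. The synthetic sensitivities, the spacing constraint, and the GA settings below are all assumptions for illustration; in the paper the sensitivities come from a POD-reduced groundwater model, not a closed-form expression:

```python
import math
import random

# Hedged sketch: a small genetic algorithm selecting K observation-well
# sites (out of N candidates) that maximize a sum-of-squared-sensitivities
# criterion, with a minimum-spacing design constraint. Synthetic data.

random.seed(0)
N, K, MIN_GAP = 30, 4, 3                  # candidate sites, wells, spacing
sens = [abs(math.sin(0.7 * i)) + 0.1 * (i % 5) for i in range(N)]

def fitness(design):
    sites = sorted(design)
    # Infeasible layouts (wells closer than MIN_GAP) score zero.
    if any(b - a < MIN_GAP for a, b in zip(sites, sites[1:])):
        return 0.0
    return sum(sens[i] ** 2 for i in sites)

def random_design():
    return random.sample(range(N), K)

def crossover(a, b):
    # Child draws K distinct sites from the union of both parents.
    return random.sample(list(set(a) | set(b)), K)

def mutate(d):
    d = d[:]
    d[random.randrange(K)] = random.randrange(N)
    return d if len(set(d)) == K else random_design()

pop = [random_design() for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                       # elitist selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]
best = max(pop, key=fitness)
print(sorted(best), round(fitness(best), 3))
```

Zeroing the fitness of infeasible layouts is a crude but common penalty-style constraint treatment; each fitness call here is cheap, which is exactly what the POD reduction buys in the real problem, where every call would otherwise be a full groundwater simulation.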

  15. Inrush Current Simulation of Power Transformer using Machine Parameters Estimated by Design Procedure of Winding Structure and Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Tokunaga, Yoshitaka

    This paper presents techniques for estimating the machine parameters of power transformers using the transformer winding design procedure and a genetic algorithm with real coding. It is especially difficult to obtain machine parameters for transformers in customers' facilities; with these estimation techniques, machine parameters can be calculated from nameplate data alone. EMTP-ATP simulations of the inrush current were then carried out using the machine parameters estimated by the techniques developed in this study, and the simulation results reproduced the measured waveforms.

  16. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the cocktail party problem prevail across many fields, creating a need for blind source separation (BSS). BSS has become prevalent in array processing, communications, medical signal processing, speech processing, wireless communication, audio and acoustics, and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms, which prove useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals in software, and to implement them in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm was taken to be the one requiring the least complexity and fewest resources while effectively separating the mixed sources; by this measure, the EASI algorithm was best. The EASI ICA was then implemented on an FPGA to analyze its performance in real time.
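The EASI recursion that such an FPGA realizes is compact enough to sketch in software. The code below applies the serial update W ← W − μ(yyᵀ − I + g(y)yᵀ − y g(y)ᵀ)W to a synthetic two-source mixture. The cubic nonlinearity, step size, and sources are assumptions for illustration, and the check at the end verifies only that the outputs are driven toward unit covariance (whitening), a necessary property the symmetric part of the update enforces regardless of the nonlinearity:

```python
import math
import random

# Hedged sketch: the EASI adaptive update for 2-source blind separation,
# in pure Python. Nonlinearity g(y) = y^3, step size, and source signals
# are illustrative assumptions, not details from the paper.

random.seed(1)
T, mu = 8000, 0.005
# Two independent zero-mean sources: a sine wave and uniform noise.
s0 = [math.sin(0.05 * t) for t in range(T)]
s1 = [random.uniform(-1, 1) for _ in range(T)]
A = [[1.0, 0.6], [0.4, 1.0]]                      # unknown mixing matrix
x = [[A[i][0] * s0[t] + A[i][1] * s1[t] for i in range(2)]
     for t in range(T)]

W = [[1.0, 0.0], [0.0, 1.0]]                      # separating matrix
outputs = []
for t in range(T):
    y = [W[i][0] * x[t][0] + W[i][1] * x[t][1] for i in range(2)]
    outputs.append(y)
    g = [v ** 3 for v in y]                       # assumed nonlinearity
    # H = y y' - I + g(y) y' - y g(y)'  (relative-gradient direction)
    H = [[y[i] * y[k] - (1.0 if i == k else 0.0)
          + g[i] * y[k] - y[i] * g[k] for k in range(2)]
         for i in range(2)]
    # Serial update: W <- W - mu * H * W (computed from the old W).
    W = [[W[i][j] - mu * sum(H[i][k] * W[k][j] for k in range(2))
          for j in range(2)] for i in range(2)]

# After adaptation, the sample covariance of y should be near identity.
tail = outputs[-3000:]
cov = [[sum(a[i] * a[j] for a in tail) / len(tail) for j in range(2)]
       for i in range(2)]
print([[round(c, 2) for c in row] for row in cov])
```

The equivariance that gives EASI its name is visible in the update: it multiplies the current W rather than adding an absolute correction, so convergence behavior does not depend on the conditioning of the mixing matrix.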

  17. Springback compensation algorithm for tool design in creep age forming of large aluminum alloy plate

    NASA Astrophysics Data System (ADS)

    Xu, Xiaolong; Zhan, Lihua; Huang, Minghui

    2013-12-01

    The unified creep constitutive equations, built from the age-forming mechanism of aluminum alloy, were integrated with the commercial finite element analysis software MSC.MARC via the user-defined subroutine CREEP, and creep age forming process simulations for 7055 aluminum alloy plate parts were conducted. The springback of the workpiece after forming was then calculated with ATOS Professional software. Based on the combination of the simulation results and the ATOS springback calculation for the formed plate, a new weighted springback compensation algorithm for tool surface modification was developed. The compensation effects of the new algorithm and of other overall compensation algorithms on the tool surface are compared. The results show that the maximal forming error of the workpiece was reduced to below 0.2 mm after five compensation iterations with the new weighted algorithm, whereas with a fixed or variable compensation coefficient based on the overall compensation algorithm, error rebound occurred and the maximal forming error could not be reduced to 0.3 mm even after six iterations.
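The compensation loop itself is generic: overbend the tool by a (weighted) fraction of the springback error, re-simulate, and repeat. The sketch below uses an assumed linear springback operator and illustrative node weights in place of the paper's FE creep-aging simulation and ATOS measurement, so only the structure of the iteration carries over:

```python
# Hedged sketch: the iterative displacement-compensation loop behind
# springback correction. The "forming" step is a toy linear springback
# model; all numbers are illustrative assumptions.

def formed_shape(tool, retention=0.65):
    # Toy model: the part retains only a fraction of the tool deflection.
    return [retention * v for v in tool]

def compensate(target, weights, iterations=6):
    tool = target[:]                       # start from the nominal shape
    max_errors = []
    for _ in range(iterations):
        part = formed_shape(tool)
        err = [t - p for t, p in zip(target, part)]
        max_errors.append(max(abs(e) for e in err))
        # Weighted compensation: each control point gets its own gain.
        tool = [v + w * e for v, w, e in zip(tool, weights, err)]
    return tool, max_errors

target = [0.0, 2.0, 5.0, 8.0, 10.0]        # desired profile heights (mm)
weights = [1.2] * len(target)              # uniform gains as a baseline
tool, errs = compensate(target, weights)
print([round(e, 3) for e in errs])         # max error shrinks each pass
```

With a linear forming model the error contracts geometrically whenever the gain times the springback slope stays below 2; the rebound the paper reports with fixed overall coefficients corresponds to gains mistuned for parts of the surface, which is what per-node weighting is meant to avoid.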

  18. Mod-5A Wind Turbine Generator Program Design Report. Volume 4: Drawings and Specifications, Book 1

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The design, development, and analysis of the 7.3 MW MOD-5A wind turbine generator are documented. Volume 4 contains the drawings and specifications that were developed in preparation for building the MOD-5A wind turbine generator. This is the first of five books of Volume 4. It contains structural design criteria; generator step-up transformer specifications; specifications for the design, fabrication, and testing of the system; specifications for the ground control enclosure; system specifications; slip ring specifications; and control system specifications.

  19. FPGA design and implementation of a fast pixel purity index algorithm for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Valencia, David; Plaza, Antonio; Vega-Rodríguez, Miguel A.; Pérez, Rosa M.

    2005-11-01

    Hyperspectral imagery is a class of image data used in many scientific areas, most notably medical imaging and remote sensing. It is characterized by a wealth of spatial and spectral information. Over the last years, many algorithms have been developed with the purpose of finding "spectral endmembers," which are assumed to be pure signatures in remotely sensed hyperspectral data sets. Such pure signatures can then be used to estimate the abundance or concentration of materials in mixed pixels, thus allowing sub-pixel analysis, which is crucial in many remote sensing applications due to current sensor optics and configuration. One of the most popular endmember extraction algorithms has been the pixel purity index (PPI), available in Research Systems' ENVI software package. This algorithm is very time consuming, a fact that has generally prevented its exploitation within acceptable response times in a wide range of applications, including environmental monitoring, military applications, and hazard and threat assessment/tracking (including wildland fire detection, oil spill mapping, and chemical and biological standoff detection). Field programmable gate arrays (FPGAs) are hardware components with millions of gates. Their reprogrammability and high computational power make them particularly attractive for remote sensing applications that require a response in near real time. In this paper, we present an FPGA design for implementation of the PPI algorithm that takes advantage of a recently developed fast PPI (FPPI) algorithm based on software optimization. The proposed FPGA design represents our first step toward the development of a new reconfigurable system for fast, onboard analysis of remotely sensed hyperspectral imagery.
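The core PPI computation that the FPGA accelerates can be stated in a few lines: project every pixel onto many random unit vectors ("skewers") and count how often each pixel lands at an extreme of a projection. The sketch below uses synthetic two-band pixels, three pure "endmember" corners plus convex mixtures of them, in place of a real hyperspectral cube with hundreds of bands (that dimensionality is what makes hardware acceleration attractive):

```python
import math
import random

# Hedged sketch: the Pixel Purity Index (PPI) scoring loop on synthetic
# 2-band data. Skewer count and test pixels are illustrative assumptions.

random.seed(2)

def ppi_scores(pixels, n_skewers=500):
    dim = len(pixels[0])
    scores = [0] * len(pixels)
    for _ in range(n_skewers):
        # Random direction: a normalized Gaussian vector ("skewer").
        v = [random.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]
        # Project every pixel onto the skewer; the pixels at the two
        # extremes of the projection each earn one purity count.
        proj = [sum(p[d] * v[d] for d in range(dim)) for p in pixels]
        scores[proj.index(max(proj))] += 1
        scores[proj.index(min(proj))] += 1
    return scores

# Three pure corner pixels plus strict convex mixtures of them.
pure = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
mixed = [(0.3, 0.3), (0.5, 0.2), (0.2, 0.5), (0.4, 0.4)]
scores = ppi_scores(pure + mixed)
print(scores)
```

Because a linear projection attains its extremes at vertices of the convex hull, the interior mixtures never score a hit here; in real data, pixels with the highest accumulated counts are taken as candidate endmembers. Each skewer is an independent dot-product sweep over all pixels, which is exactly the data-parallel structure an FPGA pipeline exploits.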

  20. Development of a Computer-Aided-Design-Based Geometry and Mesh Movement Algorithm for Three-Dimensional Aerodynamic Shape Optimization

    NASA Astrophysics Data System (ADS)

    Truong, Anh Hoang

    This thesis focuses on the development of a Computer-Aided-Design (CAD)-based geometry parameterization method and a corresponding surface mesh movement algorithm suitable for three-dimensional aerodynamic shape optimization. The geometry parameterization method includes a geometry control tool to aid in the construction and manipulation of a CAD geometry through a vendor-neutral application interface, CAPRI. It automates the tedious part of the construction phase involving data entry and provides intuitive and effective design variables that allow for both the flexibility and the precision required to control the movement of the geometry. The surface mesh movement algorithm, on the other hand, transforms an initial structured surface mesh to fit the new geometry using a discrete representation of the new CAD surface provided by CAPRI. Using a unique mapping procedure, the algorithm not only preserves the characteristics of the original surface mesh, but also guarantees that the new mesh points are on the CAD geometry. The new surface mesh is then smoothed in the parametric space before it is transformed back into three-dimensional space. The procedure is efficient in that all the processing is done in the parametric space, incurring minimal computational cost. The geometry parameterization and mesh movement tools are integrated into a three-dimensional shape optimization framework, with a linear-elasticity volume-mesh movement algorithm, a Newton-Krylov flow solver for the Euler equations, and a gradient-based optimizer. The validity and accuracy of the CAD-based optimization algorithm are demonstrated through a number of verification and optimization cases.