Science.gov

Sample records for algorithm specifically designed

  1. Specific PCR product primer design using memetic algorithm.

    PubMed

    Yang, Cheng-Hong; Cheng, Yu-Huei; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2009-01-01

    To provide feasible primer sets for performing a polymerase chain reaction (PCR) experiment, many primer design methods have been proposed. However, the majority of these methods require a relatively long time to obtain an optimal solution since large quantities of template DNA need to be analyzed. Furthermore, the designed primer sets usually do not provide a specific PCR product size. In recent years, evolutionary computation has been applied to PCR primer design and yielded promising results. In this article, a memetic algorithm (MA) is proposed to solve primer design problems associated with providing a specific product size for PCR experiments. The MA is compared with a genetic algorithm (GA) using an accuracy formula to estimate the quality of the primer design and test the running time. Overall, 50 accession nucleotide sequences were sampled for the comparison of the accuracy of the GA and MA for primer design. Five hundred runs of the GA and MA primer design were performed with PCR product lengths of 150-300 bps and 500-800 bps, and two different methods of calculating T(m) for each accession nucleotide sequence were tested. A comparison of the accuracy results for the GA and MA primer design showed that the MA primer design yielded better results than the GA primer design. The results further indicate that the proposed method finds optimal or near-optimal primer sets and effective PCR products in a dry dock experiment. Related materials are available online at http://bio.kuas.edu.tw/ma-pd/.
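
    The memetic approach the abstract describes (a genetic algorithm whose offspring are refined by local search) can be sketched as a toy in Python. This is illustrative only, not the authors' implementation: the fitness terms, the fixed 20-mer primer length, and the Wallace-rule Tm estimate are all assumptions made for the sketch.

```python
import random

def tm_wallace(seq):
    # Wallace rule, Tm ~= 2(A+T) + 4(G+C): a rough estimate assumed for the sketch
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

def fitness(template, ind, product_range=(150, 300)):
    # ind = (forward primer start, reverse primer end); 20-mer primers assumed
    f, r = ind
    lo, hi = product_range
    fwd, rev = template[f:f + 20], template[r - 20:r]
    product = r - f
    score = 1.0 if lo <= product <= hi else -abs(product - (lo + hi) // 2) / 100.0
    score -= abs(tm_wallace(fwd) - tm_wallace(rev)) / 10.0  # prefer matched melting temps
    return score

def local_search(template, ind):
    # Hill-climb each primer boundary by +/-1: the "memetic" refinement step
    best = ind
    for df in (-1, 0, 1):
        for dr in (-1, 0, 1):
            cand = (ind[0] + df, ind[1] + dr)
            if 0 <= cand[0] and cand[0] + 40 < cand[1] <= len(template):
                if fitness(template, cand) > fitness(template, best):
                    best = cand
    return best

def memetic_primer_design(template, pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    n = len(template)
    pop = []
    for _ in range(pop_size):
        f = rng.randrange(0, n - 150)
        r = rng.randrange(f + 150, min(f + 301, n + 1))
        pop.append((f, r))
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(template, ind), reverse=True)
        elite = pop[: pop_size // 2]              # keep the fitter half
        kids = []
        while len(elite) + len(kids) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a[0], b[1]) if 150 <= b[1] - a[0] <= 300 else a  # crossover
            kids.append(local_search(template, child))                 # refinement
        pop = elite + kids
    return max(pop, key=lambda ind: fitness(template, ind))
```

    The per-child hill-climbing step is what distinguishes this from a plain GA: each offspring is refined before re-entering the population, which is the general MA pattern the abstract contrasts against the GA baseline.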

  2. Design specification for the whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.

    1974-01-01

    The necessary requirements and guidelines for the construction of a computer program of the whole-body algorithm are presented. The minimum subsystem models required to effectively simulate the total body response to stresses of interest are (1) cardiovascular (exercise/LBNP/tilt); (2) respiratory (Grodin's model); (3) thermoregulatory (Stolwijk's model); and (4) long-term circulatory fluid and electrolyte (Guyton's model). The whole-body algorithm must be capable of simulating response to stresses from CO2 inhalation, hypoxia, thermal environmental exercise (sitting and supine), LBNP, and tilt (changing body angles in gravity).

  3. A proteomics search algorithm specifically designed for high-resolution tandem mass spectra.

    PubMed

    Wenger, Craig D; Coon, Joshua J

    2013-03-01

    The acquisition of high-resolution tandem mass spectra (MS/MS) is becoming more prevalent in proteomics, but most researchers employ peptide identification algorithms that were designed prior to this development. Here, we demonstrate new software, Morpheus, designed specifically for high-mass accuracy data, based on a simple score that is little more than the number of matching products. For a diverse collection of data sets from a variety of organisms (E. coli, yeast, human) acquired on a variety of instruments (quadrupole-time-of-flight, ion trap-orbitrap, and quadrupole-orbitrap) in different laboratories, Morpheus gives more spectrum, peptide, and protein identifications at a 1% false discovery rate (FDR) than Mascot, Open Mass Spectrometry Search Algorithm (OMSSA), and Sequest. Additionally, Morpheus is 1.5 to 4.6 times faster, depending on the data set, than the next fastest algorithm, OMSSA. Morpheus was developed in C# .NET and is available free and open source under a permissive license.
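
    The scoring idea attributed to Morpheus (a score that is "little more than the number of matching products") can be illustrated with a small sketch: count theoretical product m/z values matched within a ppm tolerance, plus a fractional intensity term as a tie-breaker. The tolerance value and the intensity tie-break are assumptions for illustration, not the published scoring function.

```python
def match_score(theoretical_mz, observed, tolerance_ppm=10.0):
    """Count observed peaks matching theoretical product m/z values.

    Score = (number of matched products) + (matched intensity fraction),
    a simple high-mass-accuracy score in the spirit the abstract describes.
    `observed` is a list of (m/z, intensity) pairs.
    """
    total_intensity = sum(i for _, i in observed) or 1.0
    matched, matched_intensity = 0, 0.0
    for mz in theoretical_mz:
        tol = mz * tolerance_ppm / 1e6   # ppm window scales with mass
        best = None
        for omz, inten in observed:
            if abs(omz - mz) <= tol and (best is None or inten > best):
                best = inten
        if best is not None:
            matched += 1
            matched_intensity += best
    return matched + matched_intensity / total_intensity
```

    Because the integer part dominates, ranking is driven almost entirely by the count of matched products; the fractional intensity term only breaks ties, which is what makes such a score simple yet effective at high mass accuracy.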

  4. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described, and the computer resources required for the entire altimeter processing system are estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  5. [siRNAs with high specificity to the target: a systematic design by CRM algorithm].

    PubMed

    Alsheddi, T; Vasin, L; Meduri, R; Randhawa, M; Glazko, G; Baranova, A

    2008-01-01

    The 'off-target' silencing effect hinders the development of siRNA-based therapeutic and research applications. A common solution to this problem is the use of BLAST, which may miss significant alignments, or of the exhaustive Smith-Waterman algorithm, which is very time-consuming. We have developed a Comprehensive Redundancy Minimizer (CRM) approach for mapping all unique sequences ("targets") 9-to-15 nt in size within large sets of sequences (e.g. transcriptomes). CRM outputs a list of potential siRNA candidates for every transcript of the particular species. These candidates could be further analyzed by traditional "set-of-rules" types of siRNA design tools. For human, 91% of transcripts are covered by candidate siRNAs with kernel targets of N = 15. We tested our approach on a collection of previously described, experimentally assessed siRNAs and found that the correlation between efficacy and presence in the CRM-approved set is significant (r = 0.215, p-value = 0.0001). An interactive database that contains a precompiled set of all human siRNA candidates with minimized redundancy is available at http://129.174.194.243. Application of CRM-based filtering minimizes potential "off-target" silencing effects and could improve routine siRNA applications.
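
    The core CRM task (mapping all short sequences that occur in exactly one transcript) can be sketched with a simple k-mer index. This is an illustrative reimplementation of the idea, not the authors' code; the transcript-name dictionary interface is assumed for the example.

```python
from collections import defaultdict

def unique_kmers(transcripts, k=15):
    """Map each k-mer to the set of transcripts containing it, then keep
    those occurring in exactly one transcript (candidate specific targets).

    transcripts: dict of {name: sequence}; returns {name: [unique k-mers]}.
    """
    owners = defaultdict(set)
    for name, seq in transcripts.items():
        for i in range(len(seq) - k + 1):
            owners[seq[i:i + k]].add(name)
    per_transcript = defaultdict(list)
    for kmer, names in owners.items():
        if len(names) == 1:
            (name,) = names
            per_transcript[name].append(kmer)
    return dict(per_transcript)
```

    A full CRM-style tool would repeat this for every kernel size from 9 to 15 and feed the surviving targets into a conventional "set-of-rules" siRNA design step.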

  6. A synthetic Earth Gravity Model Designed Specifically for Testing Regional Gravimetric Geoid Determination Algorithms

    NASA Astrophysics Data System (ADS)

    Baran, I.; Kuhn, M.; Claessens, S. J.; Featherstone, W. E.; Holmes, S. A.; Vaníček, P.

    2006-04-01

    A synthetic [simulated] Earth gravity model (SEGM) of the geoid, gravity and topography has been constructed over Australia specifically for validating regional gravimetric geoid determination theories, techniques and computer software. This regional high-resolution (1-arc-min by 1-arc-min) Australian SEGM (AusSEGM) is a combined source and effect model. The long-wavelength effect part (up to and including spherical harmonic degree and order 360) is taken from an assumed errorless EGM96 global geopotential model. Using forward modelling via numerical Newtonian integration, the short-wavelength source part is computed from a high-resolution (3-arc-sec by 3-arc-sec) synthetic digital elevation model (SDEM), which is a fractal surface based on the GLOBE v1 DEM. All topographic masses are modelled with a constant mass-density of 2,670 kg/m³. Based on these input data, gravity values on the synthetic topography (on a grid and at arbitrarily distributed discrete points) and consistent geoidal heights at regular 1-arc-min geographical grid nodes have been computed. The precision of the synthetic gravity and geoid data (after a first iteration) is estimated to be better than 30 μGal and 3 mm, respectively, which reduces to 1 μGal and 1 mm after a second iteration. The second iteration accounts for the changes in the geoid due to the superposed synthetic topographic mass distribution. The first iteration of AusSEGM is compared with Australian gravity and GPS-levelling data to verify that it gives a realistic representation of the Earth’s gravity field. As a by-product of this comparison, AusSEGM gives further evidence of the north-south-trending error in the Australian Height Datum. The freely available AusSEGM-derived gravity and SDEM data, included as Electronic Supplementary Material (ESM) with this paper, can be used to compute a geoid model that, if correct, will agree to within 3 mm of the AusSEGM geoidal heights, thus offering independent verification of theories

  7. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

    A method employing Web-based tools is presented for designing optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since a molecular beacon's performance depends on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public-domain algorithm described here may be usefully employed to aid in molecular beacon design.

  8. Specification and Design Methodologies for High-Speed Fault-Tolerant Array Algorithms and Structures for VLSI.

    DTIC Science & Technology

    1987-06-01

    [OCR-garbled report cover] Specification and Design Methodologies for High-Speed Fault-Tolerant Array Algorithms and Structures for VLSI. California Univ., Los Angeles, Dept. of Computer Science. Office of Naval Research Contract No. N00014-83-K-0493. Principal Investigator: Milo D. Ercegovac.

  9. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  10. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  11. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Globus, Al; Linden, Derek S.; Lohn, Jason D.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time- and labor-intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to

  12. EGNAS: an exhaustive DNA sequence design algorithm

    PubMed Central

    2012-01-01

    Background The molecular recognition based on the complementary base pairing of deoxyribonucleic acid (DNA) is the fundamental principle in the fields of genetics, DNA nanotechnology and DNA computing. We present an exhaustive DNA sequence design algorithm that can generate sets containing a maximum number of sequences with defined properties. EGNAS (Exhaustive Generation of Nucleic Acid Sequences) offers the possibility of controlling both interstrand and intrastrand properties. The guanine-cytosine content can be adjusted. Sequences can be forced to start and end with guanine or cytosine. This option reduces the risk of “fraying” of DNA strands. It is possible to limit cross-hybridizations of a defined length, and to adjust the uniqueness of sequences. Self-complementarity and hairpin structures of a certain length can be avoided. Sequences and subsequences can optionally be forbidden. Furthermore, sequences can be designed to have minimum interactions with predefined strands and neighboring sequences. Results The algorithm is realized in a C++ program. TAG sequences can be generated and combined with primers for single-base extension reactions, which were described for multiplexed genotyping of single nucleotide polymorphisms. In this way, possible foldback through intrastrand interaction of TAG-primer pairs can be limited. The design of sequences for specific attachment of molecular constructs to DNA origami is presented. Conclusions We developed a new software tool called EGNAS for the design of unique nucleic acid sequences. The presented exhaustive algorithm can generate larger sets of sequences than previous software under equal constraints. EGNAS is freely available for noncommercial use at http://www.chm.tu-dresden.de/pc6/EGNAS. PMID:22716030
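
    The intrastrand constraints the abstract lists (GC content range, G/C at both ends, forbidden subsequences) can be illustrated with a brute-force enumerator. This sketch only mirrors the filtering idea; EGNAS itself is a C++ program with many more constraint types, and the parameter names here are assumptions.

```python
from itertools import product

def generate_sequences(length=8, gc_min=0.4, gc_max=0.6, gc_ends=True,
                       forbidden=("AAAA", "TTTT")):
    """Exhaustively enumerate DNA sequences of a given length and keep
    those meeting simple intrastrand constraints: GC-content range,
    G/C at both ends (to reduce "fraying"), no forbidden subsequences."""
    out = []
    for bases in product("ACGT", repeat=length):
        seq = "".join(bases)
        gc = (seq.count("G") + seq.count("C")) / length
        if not (gc_min <= gc <= gc_max):
            continue
        if gc_ends and (seq[0] not in "GC" or seq[-1] not in "GC"):
            continue
        if any(f in seq for f in forbidden):
            continue
        out.append(seq)
    return out
```

    The exhaustive search is what guarantees a maximum-size set under the given constraints; rule-based generators can miss valid sequences, which is the advantage the abstract claims over previous software.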

  13. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspects by taking this perspective. This paper is a first step towards achieving this objective by implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  14. Fashion sketch design by interactive genetic algorithms

    NASA Astrophysics Data System (ADS)

    Mok, P. Y.; Wang, X. X.; Xu, J.; Kwok, Y. L.

    2012-11-01

    Computer-aided design is vitally important for modern industry, particularly for the creative industries. The fashion industry faces intense challenges in shortening the product development process. In this paper, a methodology is proposed for sketch design based on interactive genetic algorithms. The sketch design system consists of a sketch design model, a database and a multi-stage sketch design engine. First, a sketch design model is developed based on the knowledge of fashion design to describe fashion product characteristics by using parameters. Second, a database is built based on the proposed sketch design model to define general style elements. Third, a multi-stage sketch design engine is used to construct the design. Moreover, an interactive genetic algorithm (IGA) is used to accelerate the sketch design process. The experimental results demonstrate that the proposed method is effective in helping laypersons achieve satisfactory fashion design sketches.

  15. Statistical Methods in Algorithm Design and Analysis.

    ERIC Educational Resources Information Center

    Weide, Bruce W.

    The use of statistical methods in the design and analysis of discrete algorithms is explored. The introductory chapter contains a literature survey and background material on probability theory. In Chapter 2, probabilistic approximation algorithms are discussed with the goal of exposing and correcting some oversights in previous work. Chapter 3…

  16. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  17. A computerized compensator design algorithm with launch vehicle applications

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.

    1976-01-01

    This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant, control systems with a plant possessing a single control input and multioutputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.

  18. Reflight certification software design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The PDSS/IMC Software Design Specification for the Payload Development Support System (PDSS)/Image Motion Compensator (IMC) is contained. The PDSS/IMC is to be used for checkout and verification of the IMC flight hardware and software by NASA/MSFC.

  19. Hardware Algorithm Implementation for Mission Specific Processing

    DTIC Science & Technology

    2008-03-01

    ...developed and fabricated before it can get into the hands of the customer. The design to market can take up to 1-2 years depending on the technology and the... positioning systems, radar systems, and different communication devices. These devices, being mobile or not, are required to operate in the... field and communicate with Command and Control. For this reason it is essential that these libraries be built and perfected. With any circuit design...

  20. Algorithm for backrub motions in protein design

    PubMed Central

    Georgiev, Ivelin; Keedy, Daniel; Richardson, Jane S.; Richardson, David C.; Donald, Bruce R.

    2008-01-01

    Motivation: The Backrub is a small but kinematically efficient side-chain-coupled local backbone motion frequently observed in atomic-resolution crystal structures of proteins. A backrub shifts the Cα–Cβ orientation of a given side-chain by rigid-body dipeptide rotation plus smaller individual rotations of the two peptides, with virtually no change in the rest of the protein. Backrubs can therefore provide a biophysically realistic model of local backbone flexibility for structure-based protein design. Previously, however, backrub motions were applied via manual interactive model-building, so their incorporation into a protein design algorithm (a simultaneous search over mutation and backbone/side-chain conformation space) was infeasible. Results: We present a combinatorial search algorithm for protein design that incorporates an automated procedure for local backbone flexibility via backrub motions. We further derive a dead-end elimination (DEE)-based criterion for pruning candidate rotamers that, in contrast to previous DEE algorithms, is provably accurate with backrub motions. Our backrub-based algorithm successfully predicts alternate side-chain conformations from ≤0.9 Å resolution structures, confirming the suitability of the automated backrub procedure. Finally, the application of our algorithm to redesign two different proteins is shown to identify a large number of lower-energy conformations and mutation sequences that would have been ignored by a rigid-backbone model. Availability: Contact authors for source code. Contact: brd+ismb08@cs.duke.edu PMID:18586714
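
    For readers unfamiliar with dead-end elimination, the classic rigid-rotamer Goldstein criterion (which this paper extends so that pruning remains provably accurate under backrub motions) can be sketched as follows. The energy-table layout is an assumption of the example, not the paper's data structures.

```python
def goldstein_dee(self_energy, pair_energy, rotamers):
    """Prune rotamer r at position i if some alternative t at i always
    beats it:  E(i_r) - E(i_t) + sum_j min_s [E(i_r,j_s) - E(i_t,j_s)] > 0.

    self_energy: {(pos, rot): E}; pair_energy: {(pos, rot, pos', rot'): E};
    rotamers: {pos: [rot, ...]}.  Classic rigid-rotamer Goldstein DEE.
    """
    pruned = set()
    positions = sorted(rotamers)
    for i in positions:
        for r in rotamers[i]:
            for t in rotamers[i]:
                if t == r or (i, t) in pruned:
                    continue
                bound = self_energy[(i, r)] - self_energy[(i, t)]
                for j in positions:
                    if j == i:
                        continue
                    bound += min(
                        pair_energy[(i, r, j, s)] - pair_energy[(i, t, j, s)]
                        for s in rotamers[j]
                    )
                if bound > 0:        # r can never be in the global optimum
                    pruned.add((i, r))
                    break
    return pruned
```

    The pruning is provably safe for rigid rotamers because the bound lower-bounds the energy gap in every full conformation; the paper's contribution is a criterion that keeps this guarantee when backrub motions perturb the backbone.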

  1. Instrument design and optimization using genetic algorithms

    SciTech Connect

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-15

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.

  2. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy.
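
    The two-dimensional scoring idea (detection efficacy traded against hardware cost) can be sketched as below. The efficacy product, the power normalization, and the equal weights are illustrative assumptions, not the paper's exact formulation.

```python
def design_space_score(features, w_efficacy=0.5, w_power=0.5):
    """Rank seizure-detection features on a 2-D design space:
    normalized detection efficacy vs. normalized hardware (power) cost.
    Each feature is a dict with 'name', 'sensitivity', 'specificity',
    and an estimated power draw 'power_uw'.  Returns (score, name) pairs,
    best first."""
    max_power = max(f["power_uw"] for f in features)
    ranked = []
    for f in features:
        eff = f["sensitivity"] * f["specificity"]   # detection efficacy in [0, 1]
        cost = f["power_uw"] / max_power            # normalized hardware cost
        ranked.append((w_efficacy * eff - w_power * cost, f["name"]))
    return sorted(ranked, reverse=True)
```

    Plotting efficacy against cost makes the trade-off explicit: a marginally more accurate feature can lose out if its circuit-level power estimate is an order of magnitude higher, which is the point of benchmarking on both axes.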

  3. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GAs) uses a search procedure which is fundamentally different from those gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive since they use only objective function values in the search process, so gradient calculations are avoided. Hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
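
    The GA mechanics the abstract describes (fitness-based selection, crossover, mutation, and direct handling of discrete variables) can be sketched with a toy mixed-variable optimizer. The operators, parameters, and the launch-vehicle-flavored objective below are generic assumptions, not the study's implementation.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=40, generations=60, p_mut=0.2, seed=1):
    """Minimal GA over mixed discrete/continuous design variables.
    `bounds` maps each variable name to either a list of discrete choices
    or a (lo, hi) continuous range.  Minimizes `fitness`."""
    rng = random.Random(seed)
    names = list(bounds)

    def sample(name):
        b = bounds[name]
        return rng.choice(b) if isinstance(b, list) else rng.uniform(*b)

    pop = [{n: sample(n) for n in names} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                       # selection: fitter half survives
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = {n: rng.choice((a[n], b[n])) for n in names}  # uniform crossover
            if rng.random() < p_mut:
                n = rng.choice(names)
                child[n] = sample(n)                # mutation resamples one gene
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Toy objective: prefer 3 engines (a discrete choice) and thrust near 2.5.
def cost(design):
    return abs(design["engines"] - 3) + (design["thrust"] - 2.5) ** 2

best = genetic_optimize(cost, {"engines": [1, 2, 3, 4, 5], "thrust": (0.0, 5.0)})
```

    Because the search uses only objective values, the integer-valued "engines" gene needs no gradient or relaxation, which is the property the abstract highlights for launch vehicle MDO.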

  4. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640

  5. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  6. Predicting Resistance Mutations Using Protein Design Algorithms

    SciTech Connect

    Frey, K.; Georgiev, I; Donald, B; Anderson, A

    2010-01-01

    Drug resistance resulting from mutations to the target is an unfortunately common phenomenon that limits the lifetime of many of the most successful drugs. In contrast to the investigation of mutations after clinical exposure, it would be powerful to be able to incorporate strategies early in the development process to predict and overcome the effects of possible resistance mutations. Here we present a unique prospective application of an ensemble-based protein design algorithm, K*, to predict potential resistance mutations in dihydrofolate reductase from Staphylococcus aureus, using positive design to maintain catalytic function and negative design to interfere with binding of a lead inhibitor. Enzyme inhibition assays show that three of the four highly ranked predicted mutants are active yet display lower affinity (18-, 9-, and 13-fold) for the inhibitor. A crystal structure of the top-ranked mutant enzyme validates the predicted conformations of the mutated residues and the structural basis of the loss of potency. The use of protein design algorithms to predict resistance mutations could be incorporated in a lead design strategy against any target that is susceptible to mutational resistance.

  7. ERSYS-SPP access method subsystem design specification

    NASA Technical Reports Server (NTRS)

    Weise, R. C. (Principal Investigator)

    1980-01-01

    The STARAN special purpose processor (SPP) is a machine allowing the same operation to be performed on up to 512 different data elements simultaneously. In the ERSYS system, it is to be attached to a 4341 plug compatible machine (PCM) to execute certain existing algorithms and, at a later date, to perform other algorithms yet to be specified. That part of the interface between the 4341 PCM and the SPP located in the 4341 PCM is known as the SPP access method (SPPAM). Access to the SPPAM will be obtained by use of the NQUEUE and DQUEUE commands. The subsystem design specification is to incorporate all applicable design considerations from the ERSYS system design specification and the Level B requirements documents relating to the SPPAM. It is intended as a basis for the preliminary design review and will expand into the subsystem detailed design specification.

  8. Salt bridges: geometrically specific, designable interactions.

    PubMed

    Donald, Jason E; Kulp, Daniel W; DeGrado, William F

    2011-03-01

    Salt bridges occur frequently in proteins, providing conformational specificity and contributing to molecular recognition and catalysis. We present a comprehensive analysis of these interactions in protein structures by surveying a large database of protein structures. Salt bridges between Asp or Glu and His, Arg, or Lys display extremely well-defined geometric preferences. Several previously observed preferences are confirmed, and others that were previously unrecognized are discovered. Salt bridges are explored for their preferences for different separations in sequence and in space, geometric preferences within proteins and at protein-protein interfaces, co-operativity in networked salt bridges, inclusion within metal-binding sites, preference for acidic electrons, apparent conformational side chain entropy reduction on formation, and degree of burial. Salt bridges occur far more frequently between residues at close than distant sequence separations, but, at close distances, there remain strong preferences for salt bridges at specific separations. Specific types of complex salt bridges, involving three or more members, are also discovered. As we observe a strong relationship between the propensity to form a salt bridge and the placement of salt-bridging residues in protein sequences, we discuss the role that salt bridges might play in kinetically influencing protein folding and thermodynamically stabilizing the native conformation. We also develop a quantitative method to select appropriate crystal structure resolution and B-factor cutoffs. Detailed knowledge of these geometric and sequence dependences should aid de novo design and prediction algorithms.

  9. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is an algorithm-specific parameter-less algorithm. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to filter design for comparison. Unknown filter parameters are considered as a vector to be optimized by these algorithms. MATLAB programming is used for implementation of the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is therefore preferable where accuracy is more essential than convergence speed.
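    The core TLBO loop described above (a teacher phase that pulls learners toward the best solution and away from the class mean, followed by a learner phase of pairwise interaction) can be sketched as follows. This is a generic minimization sketch on an arbitrary objective, not the paper's IIR filter error function, and the parameter names are illustrative.

```python
import random

def tlbo(objective, dim, bounds, pop_size=20, iters=60):
    """Minimal teaching-learning-based optimization (minimization sketch)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: move each learner toward the teacher (best solution),
        # away from the population mean, keeping the move only if it improves.
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        teacher = min(pop, key=objective)
        tf = random.choice([1, 2])  # teaching factor
        for i, x in enumerate(pop):
            cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            cand = [min(hi, max(lo, v)) for v in cand]
            if objective(cand) < objective(x):
                pop[i] = cand
        # Learner phase: each learner interacts with a random peer,
        # moving toward better peers and away from worse ones.
        for i, x in enumerate(pop):
            j = random.randrange(pop_size)
            if j == i:
                continue
            peer = pop[j]
            if objective(x) < objective(peer):
                cand = [x[d] + random.random() * (x[d] - peer[d]) for d in range(dim)]
            else:
                cand = [x[d] + random.random() * (peer[d] - x[d]) for d in range(dim)]
            cand = [min(hi, max(lo, v)) for v in cand]
            if objective(cand) < objective(x):
                pop[i] = cand
    return min(pop, key=objective)

# e.g. best = tlbo(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5))
```

    Note the absence of algorithm-specific control parameters such as inertia weights or crossover rates; only population size and iteration count remain, which is the "parameter-less" property the abstract refers to.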

  10. Optimization of warfarin dose by population-specific pharmacogenomic algorithm.

    PubMed

    Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K

    2012-08-01

    To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model with vitamin K intake and the cytochrome P450 IIC polypeptide 9 (CYP2C9*2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1*3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained a therapeutic international normalized ratio as predictors. The new algorithm was validated by correlating with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51, warfarin resistant: 0.96 vs 0.77 and warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) compared with clinical data. It significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). To conclude, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of a suboptimal dose.
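    The algorithm's form, a multiple linear regression of dose on clinical and genotype predictors, can be illustrated with ordinary least squares on synthetic data. The predictors, coefficients, and data below are hypothetical stand-ins for illustration only, not the published model.

```python
import numpy as np

# Hypothetical illustration: fit a dose model of the same general form,
# dose = b0 + b1*age + b2*vitK + b3*CYP2C9 + b4*VKORC1, on synthetic data.
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    np.ones(n),                 # intercept
    rng.uniform(20, 80, n),     # age (years)           -- assumed predictor
    rng.uniform(0, 300, n),     # vitamin K intake (ug) -- assumed predictor
    rng.integers(0, 3, n),      # CYP2C9 variant allele count (0-2)
    rng.integers(0, 3, n),      # VKORC1 variant allele count (0-2)
])
true_b = np.array([8.0, -0.04, 0.005, -1.5, -2.0])  # assumed coefficients
y = X @ true_b + rng.normal(0, 0.3, n)              # dose with noise

# Ordinary least squares recovers coefficients close to true_b.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))
```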

  11. An effective hybrid cuckoo search and genetic algorithm for constrained engineering design optimization

    NASA Astrophysics Data System (ADS)

    Kanagaraj, G.; Ponnambalam, S. G.; Jawahar, N.; Mukund Nilakantan, J.

    2014-10-01

    This article presents an effective hybrid cuckoo search and genetic algorithm (HCSGA) for solving engineering design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables. The proposed algorithm, HCSGA, is first applied to 13 standard benchmark constrained optimization functions and subsequently used to solve three well-known design problems reported in the literature. The numerical results obtained by HCSGA show competitive performance with respect to recent algorithms for constrained design optimization problems.

  12. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  13. UWB Tracking System Design with TDOA Algorithm

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan

    2006-01-01

    This presentation discusses an ultra-wideband (UWB) tracking system design effort using a tracking algorithm TDOA (Time Difference of Arrival). UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least square method is chosen to solve the TDOA non-linear equations. Matlab simulations in both two-dimensional space and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by a prototype development project Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center for aid in surveillance around the International Space Station (ISS).
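    The TDOA equations are nonlinear in the unknown position because each measurement is a difference of ranges. As a hedged illustration, the sketch below solves them with a plain Gauss-Newton iteration rather than the two-stage weighted least-squares method used in the presentation; receiver layout and variable names are illustrative.

```python
import numpy as np

def tdoa_locate(anchors, tdoa, ref=0, x0=None, iters=20):
    """Solve 2-D TDOA equations by Gauss-Newton iteration.
    anchors: (N, 2) receiver positions.
    tdoa[k]: measured range difference d_i - d_ref for the i-th non-reference
             receiver (in the same units as positions)."""
    anchors = np.asarray(anchors, float)
    tdoa = np.asarray(tdoa, float)
    x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
    idx = [i for i in range(len(anchors)) if i != ref]
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)            # ranges from estimate
        r = np.array([d[i] - d[ref] for i in idx]) - tdoa  # residuals
        # Jacobian of the range differences w.r.t. position
        J = np.array([(x - anchors[i]) / d[i] - (x - anchors[ref]) / d[ref]
                      for i in idx])
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]       # Gauss-Newton step
    return x
```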

  14. VitAL: Viterbi Algorithm for de novo Peptide Design

    PubMed Central

    Unal, E. Besray; Gursoy, Attila; Erman, Burak

    2010-01-01

    Background Drug design against proteins to cure various diseases has been studied for several years. Numerous design techniques were discovered for small organic molecules for specific protein targets. The specificity, toxicity and selectivity of small molecules are hard problems to solve. The use of peptide drugs enables a partial solution to the toxicity problem. There has been a wide interest in peptide design, but the design techniques of a specific and selective peptide inhibitor against a protein target have not yet been established. Methodology/Principal Findings A novel de novo peptide design approach is developed to block activities of disease related protein targets. No prior training, based on known peptides, is necessary. The method sequentially generates the peptide by docking its residues pair by pair along a chosen path on a protein. The binding site on the protein is determined via the coarse grained Gaussian Network Model. A binding path is determined. The best fitting peptide is constructed by generating all possible peptide pairs at each point along the path and determining the binding energies between these pairs and the specific location on the protein using AutoDock. The Markov based partition function for all possible choices of the peptides along the path is generated by a matrix multiplication scheme. The best fitting peptide for the given surface is obtained by a Hidden Markov model using Viterbi decoding. The suitability of the conformations of the peptides that result upon binding on the surface is included in the algorithm by considering the intrinsic Ramachandran potentials. Conclusions/Significance The model is tested on known protein-peptide inhibitor complexes. The present algorithm predicts peptides that have better binding energies than those of the existing ones. Finally, a heptapeptide with excellent binding affinity according to AutoDock results is designed for a target protein. PMID:20532195
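    The final decoding step, choosing the best-scoring path through a Markov chain, is standard Viterbi decoding. The sketch below shows that step in isolation with toy probabilities in place of the paper's AutoDock-derived energies and Ramachandran potentials.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Classic Viterbi decoding: return the most probable state path for an
    observation sequence under an HMM with the given probabilities."""
    # V[t][s] = (probability of best path ending in state s at step t, prev state)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            best_prev = max(states, key=lambda p: V[t - 1][p][0] * trans_p[p][s])
            V[t][s] = (V[t - 1][best_prev][0] * trans_p[best_prev][s] * emit_p[s][obs[t]],
                       best_prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

    In VitAL the "observations" correspond to positions along the binding path and the scores come from docking energies; here the decoding logic alone is shown.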

  15. A genetic algorithm for solving supply chain network design model

    NASA Astrophysics Data System (ADS)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  16. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  17. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  18. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  19. Optimum detailed design of reinforced concrete frames using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Govindaraj, V.; Ramasamy, J. V.

    2007-06-01

    This article presents the application of the genetic algorithm to the optimum detailed design of reinforced concrete frames based on Indian Standard specifications. The objective function is the total cost of the frame which includes the cost of concrete, formwork and reinforcing steel for individual members of the frame. In order for the optimum design to be directly constructible without any further modifications, aspects such as available standard reinforcement bar diameters, spacing requirements of reinforcing bars, modular sizes of members, architectural requirements on member sizes and other practical requirements in addition to relevant codal provisions are incorporated into the optimum design model. The produced optimum design satisfies the strength, serviceability, ductility, durability and other constraints related to good design and detailing practice. The detailing of reinforcements in the beam members is carried out as a sub-level optimization problem. This strategy helps to reduce the size of the optimization problem and saves computational time. The proposed method is demonstrated through several example problems and the optimum results obtained are compared with those in the available literature. It is concluded that the proposed optimum design model can be adopted in design offices as it yields rational, reliable, economical, time-saving and practical designs.

  20. GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) Provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) Provide early warning of tornadic activity, and (3) Accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. The GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution.

  1. Engineered waste-package-system design specification

    SciTech Connect

    Not Available

    1983-05-01

    This report documents the waste package performance requirements and geologic and waste form data bases used in developing the conceptual designs for waste packages for salt, tuff, and basalt geologies. The data base reflects the latest geotechnical information on the geologic media of interest. The parameters or characteristics specified primarily cover spent fuel, defense high-level waste, and commercial high-level waste forms. The specification documents the direction taken during the conceptual design activity. A separate design specification will be developed prior to the start of the preliminary design activity.

  2. Thalmann Algorithm Decompression Table Generation Software Design Document

    DTIC Science & Technology

    2010-09-01

    Software design document for Thalmann Algorithm decompression table generation, prepared by the Navy Experimental Diving Unit. Only report documentation page fragments survive in this record, including a reference to the decompression table generator (TBLP7R

  3. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  4. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  5. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. An optimal design of a Proportional-Integral-Derivative (PID) controller is then obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and local exploration abilities of the proposed algorithm. This operation helps the system reach the optimal solution quickly and saves computation time. Comparisons on optimal PID controller design confirm the superiority of the AVURPSO algorithm over the other optimization algorithms considered in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, and in less computation time, to a global optimum. The proposed AVURPSO can be used in diverse areas of optimization such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning, and is used here to design an optimal PID controller efficiently.
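    As a baseline for the velocity-update variants discussed above, a plain inertia-weight PSO (minimization) looks like the following. VURPSO and AVURPSO modify this velocity update with a relaxation term and an adaptive momentum factor that are not reproduced here; parameter values are conventional defaults, not the paper's settings.

```python
import random

def pso(objective, dim, bounds, n_particles=30, iters=80, w=0.7, c1=1.5, c2=1.5):
    """Plain inertia-weight particle swarm optimization (minimization sketch)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=objective)           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest
```

    For PID tuning, the objective would evaluate a closed-loop performance index (e.g., integral of absolute error) over the three gains Kp, Ki, Kd; a plant simulation is omitted here for brevity.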

  6. Formalisms for user interface specification and design

    NASA Technical Reports Server (NTRS)

    Auernheimer, Brent J.

    1989-01-01

    The application of formal methods to the specification and design of human-computer interfaces is described. A broad outline of human-computer interface problems, a description of the field of cognitive engineering and two relevant research results, the appropriateness of formal specification techniques, and potential NASA application areas are described.

  7. Domain specific software design for decision aiding

    NASA Technical Reports Server (NTRS)

    Keller, Kirby; Stanley, Kevin

    1992-01-01

    McDonnell Aircraft Company (MCAIR) is involved in many large multi-discipline design and development efforts of tactical aircraft. These involve a number of design disciplines that must be coordinated to produce an integrated design and a successful product. Our interpretation of a domain specific software design (DSSD) is that of a representation or framework that is specialized to support a limited problem domain. A DSSD is an abstract software design that is shaped by the problem characteristics. This parallels the theme of object-oriented analysis and design of letting the problem model directly drive the design. The DSSD concept extends the notion of software reusability to include representations or frameworks. It supports the entire software life cycle and specifically leads to improved prototyping capability, supports system integration, and promotes reuse of software designs and supporting frameworks. The example presented in this paper is the task network architecture or design which was developed for the MCAIR Pilot's Associate program. The task network concept supported both module development and system integration within the domain of operator decision aiding. It is presented as an instance where a software design exhibited many of the attributes associated with the DSSD concept.

  8. Genetic Algorithms as a Tool for Phased Array Radar Design

    DTIC Science & Technology

    2002-06-01

    Master's thesis, Naval Postgraduate School, Monterey, California, June 2002; approved for public release, distribution unlimited. The thesis explores creative ways to design multi-function phased array radars and proposes that genetic algorithms, computer programs that mimic natural selection, can serve as a tool for phased array radar design.

  9. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the network-on-chip design flow. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault-tolerance property, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space, with low communication costs that are optimal in some cases. Another evaluated parameter, the vulnerability index, is then considered as a means of estimating the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off between these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
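    The communication cost such a mapping algorithm minimizes is commonly modeled as traffic volume weighted by hop distance on the mesh. The evaluation function below assumes this standard model with a hypothetical row-major 2D-mesh router numbering; the paper's exact cost and vulnerability formulas are not given here.

```python
def mapping_cost(mapping, traffic, mesh_width):
    """Communication cost of a core-to-router mapping on a 2D mesh NoC:
    sum over core pairs of traffic volume x Manhattan hop distance.
    mapping: core id -> router id (routers numbered row-major on the mesh).
    traffic: (src core, dst core) -> communication volume."""
    def xy(router):
        # Row-major router numbering: (column, row) on the mesh.
        return router % mesh_width, router // mesh_width

    cost = 0
    for (src, dst), volume in traffic.items():
        (x1, y1), (x2, y2) = xy(mapping[src]), xy(mapping[dst])
        cost += volume * (abs(x1 - x2) + abs(y1 - y2))
    return cost
```

    A heuristic mapper would call this function to rank candidate mappings, alongside a vulnerability estimate for the fault-tolerance criterion.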

  10. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.

    PubMed

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei

    2016-09-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches.

  11. AbDesign: An algorithm for combinatorial backbone design guided by natural conformations and sequences.

    PubMed

    Lapidoth, Gideon D; Baran, Dror; Pszolla, Gabriele M; Norn, Christoffer; Alon, Assaf; Tyka, Michael D; Fleishman, Sarel J

    2015-08-01

    Computational design of protein function has made substantial progress, generating new enzymes, binders, inhibitors, and nanomaterials not previously seen in nature. However, the ability to design new protein backbones for function--essential to exert control over all polypeptide degrees of freedom--remains a critical challenge. Most previous attempts to design new backbones computed the mainchain from scratch. Here, instead, we describe a combinatorial backbone and sequence optimization algorithm called AbDesign, which leverages the large number of sequences and experimentally determined molecular structures of antibodies to construct new antibody models, dock them against target surfaces and optimize their sequence and backbone conformation for high stability and binding affinity. We used the algorithm to produce antibody designs that target the same molecular surfaces as nine natural, high-affinity antibodies; in five cases interface sequence identity is above 30%, and in four of those the backbone conformation at the core of the antibody binding surface is within 1 Å root-mean-square deviation from the natural antibodies. Designs recapitulate polar interaction networks observed in natural complexes, and amino acid sidechain rigidity at the designed binding surface, which is likely important for affinity and specificity, is high compared to previous design studies. In designed anti-lysozyme antibodies, complementarity-determining regions (CDRs) at the periphery of the interface, such as L1 and H2, show greater backbone conformation diversity than the CDRs at the core of the interface, and increase the binding surface area compared to the natural antibody, potentially enhancing affinity and specificity.

  12. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  13. An algorithm for optimal structural design with frequency constraints

    NASA Technical Reports Server (NTRS)

    Kiusalaas, J.; Shaw, R. C. J.

    1978-01-01

    The paper presents a finite element method for minimum weight design of structures with lower-bound constraints on the natural frequencies, and upper and lower bounds on the design variables. The design algorithm is essentially an iterative solution of the Kuhn-Tucker optimality criterion. The three most important features of the algorithm are: (1) a small number of design iterations are needed to reach optimal or near-optimal design, (2) structural elements with a wide variety of size-stiffness may be used, the only significant restriction being the exclusion of curved beam and shell elements, and (3) the algorithm will work for multiple as well as single frequency constraints. The design procedure is illustrated with three simple problems.

  14. AbDesign: an algorithm for combinatorial backbone design guided by natural conformations and sequences

    PubMed Central

    Lapidoth, Gideon D.; Baran, Dror; Pszolla, Gabriele M.; Norn, Christoffer; Alon, Assaf; Tyka, Michael D.; Fleishman, Sarel J.

    2016-01-01

    Computational design of protein function has made substantial progress, generating new enzymes, binders, inhibitors, and nanomaterials not previously seen in nature. However, the ability to design new protein backbones for function – essential to exert control over all polypeptide degrees of freedom – remains a critical challenge. Most previous attempts to design new backbones computed the mainchain from scratch. Here, instead, we describe a combinatorial backbone and sequence optimization algorithm called AbDesign, which leverages the large number of sequences and experimentally determined molecular structures of antibodies to construct new antibody models, dock them against target surfaces and optimize their sequence and backbone conformation for high stability and binding affinity. We used the algorithm to produce antibody designs that target the same molecular surfaces as nine natural, high-affinity antibodies; in six the backbone conformation at the core of the antibody binding surface is similar to the natural antibody targets, and in several cases sequence and sidechain conformations recapitulate those seen in the natural antibodies. In the case of an anti-lysozyme antibody, designed antibody CDRs at the periphery of the interface, such as L1 and H2, show a greater backbone conformation diversity than the CDRs at the core of the interface, and increase the binding surface area compared to the natural antibody, which could enhance affinity and specificity. PMID:25670500

  15. Computational Aspects of Realization & Design Algorithms in Linear Systems Theory.

    NASA Astrophysics Data System (ADS)

    Tsui, Chia-Chi

    Realization and design problems are two major problems in linear time-invariant systems control theory and have been solved theoretically. However, little is understood about their numerical properties. Because of the large scale of these problems and the finite precision of computer computation, the purpose of this study is to investigate the computational reliability and efficiency of the algorithms for these two problems. In this dissertation, a reliable algorithm to achieve canonical form realization via the Hankel matrix is developed. A comparative study of three general realization algorithms, for both numerical reliability and efficiency, shows that the proposed algorithm (via the Hankel matrix) is the preferable one of the three. The design problems, such as state feedback design for pole placement, state observer design, and low-order single- and multi-functional observer design, have been solved by using canonical form systems matrices. In this dissertation, a set of algorithms for solving these three design problems is developed and analysed. These algorithms are based on Hessenberg form systems matrices, which are numerically more reliable to compute than the canonical form systems matrices.

  16. AutoGrow: A Novel Algorithm for Protein Inhibitor Design

    PubMed Central

    Durrant, Jacob; Amaro, Rommie E.; McCammon, J. Andrew

    2009-01-01

    Due in part to the increasing availability of crystallographic protein structures as well as rapid improvements in computing power, the past few decades have seen an explosion in the field of computer-based rational drug design. Several algorithms have been developed to identify or generate potential ligands in silico by optimizing the ligand-receptor hydrogen bond, electrostatic, and hydrophobic interactions. We here present AutoGrow, a novel computer-aided drug design algorithm that combines the strengths of both fragment-based growing and docking algorithms. To validate AutoGrow, we recreate three crystallographically resolved ligands from their constituent fragments. PMID:19207419

  17. Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces.

    PubMed

    Dangi, Siddharth; Orsborn, Amy L; Moorman, Helene G; Carmena, Jose M

    2013-07-01

    Closed-loop decoder adaptation (CLDA) is an emerging paradigm for achieving rapid performance improvements in online brain-machine interface (BMI) operation. Designing an effective CLDA algorithm requires making multiple important decisions, including choosing the timescale of adaptation, selecting which decoder parameters to adapt, crafting the corresponding update rules, and designing CLDA parameters. These design choices, combined with the specific settings of CLDA parameters, will directly affect the algorithm's ability to make decoder parameters converge to values that optimize performance. In this article, we present a general framework for the design and analysis of CLDA algorithms and support our results with experimental data of two monkeys performing a BMI task. First, we analyze and compare existing CLDA algorithms to highlight the importance of four critical design elements: the adaptation timescale, selective parameter adaptation, smooth decoder updates, and intuitive CLDA parameters. Second, we introduce mathematical convergence analysis using measures such as mean-squared error and KL divergence as a useful paradigm for evaluating the convergence properties of a prototype CLDA algorithm before experimental testing. By applying these measures to an existing CLDA algorithm, we demonstrate that our convergence analysis is an effective analytical tool that can ultimately inform and improve the design of CLDA algorithms.

  18. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    issues in the GA, it is possible to have idle processors. However, as long as the load at each processing node is similar, the processors are kept busy nearly all of the time. In applying GAs to circuit design, a suitable genetic representation is that of a circuit-construction program. We discuss one such circuit-construction programming language and show how evolution can generate useful analog circuit designs. This language has the desirable property that virtually all sets of combinations of primitives result in valid circuit graphs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm and circuit simulation software, we present experimental results as applied to three analog filter and two amplifier design tasks. For example, a figure shows an 85 dB amplifier design evolved by our system, and another figure shows the performance of that circuit (gain and frequency response). In all tasks, our system is able to generate circuits that achieve the target specifications.

  19. Application of Simulated Annealing and Related Algorithms to TWTA Design

    NASA Technical Reports Server (NTRS)

    Radke, Eric M.

    2004-01-01

    decremented and the process repeats. Eventually (and hopefully), a near-globally optimal solution is attained as T approaches zero. Several exciting variants of SA have recently emerged, including Discrete-State Simulated Annealing (DSSA) and Simulated Tempering (ST). The DSSA algorithm takes the thermodynamic analogy one step further by categorizing objective function evaluations into discrete states. In doing so, many of the case-specific problems associated with fine-tuning the SA algorithm can be avoided; for example, theoretical approximations for the initial and final temperature can be derived independently of the case. In this manner, DSSA provides a scheme that is more robust with respect to widely differing design surfaces. ST differs from SA in that the temperature T becomes an additional random variable in the optimization. The system is also kept in equilibrium as the temperature changes, as opposed to the system being driven out of equilibrium as temperature changes in SA. ST is designed to overcome obstacles in design surfaces where numerous local minima are separated by high barriers. These algorithms are incorporated into the optimal design of the traveling-wave tube amplifier (TWTA). The area under scrutiny is the collector, in which it would be ideal to use negative potential to decelerate the spent electron beam to zero kinetic energy just as it reaches the collector surface. In reality this is not plausible due to a number of physical limitations, including repulsion and differing levels of kinetic energy among individual electrons. Instead, the collector is designed with multiple stages depressed below ground potential. The design of this multiple-stage collector is the optimization problem of interest. One remaining problem in SA and DSSA is the difficulty in determining when equilibrium has been reached so that the current Markov chain can be terminated. 
It has been suggested in recent literature that simulating the thermodynamic properties of specific
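
    The cooling loop described above (propose a move, always accept improvements, accept uphill moves with probability exp(-dE/T), then decrement T) can be written out as a generic minimizer. This is only a sketch: the 1-D objective, cooling schedule, and all parameters below are illustrative stand-ins, not the TWTA collector model.

```python
import math
import random

def simulated_annealing(objective, x0, t_init=10.0, t_final=1e-3,
                        alpha=0.95, steps_per_t=50, step_size=0.5, seed=1):
    """Generic SA minimizer: propose a random step, always accept
    improvements, accept uphill moves with probability exp(-dE/T),
    then decrement T geometrically until it approaches zero."""
    rng = random.Random(seed)
    x, e = x0, objective(x0)
    best_x, best_e = x, e
    t = t_init
    while t > t_final:
        for _ in range(steps_per_t):          # Markov chain at fixed T
            cand = x + rng.uniform(-step_size, step_size)
            de = objective(cand) - e
            if de < 0 or rng.random() < math.exp(-de / t):
                x, e = cand, de + e
                if e < best_e:
                    best_x, best_e = x, e
        t *= alpha                            # geometric cooling schedule
    return best_x, best_e

# Illustrative 1-D objective with local minima separated by barriers.
xmin, emin = simulated_annealing(
    lambda x: (x * x - 4) ** 2 + 0.3 * math.sin(8 * x), x0=5.0)
```

    Determining when each fixed-temperature Markov chain has equilibrated (the open problem noted above) corresponds to choosing `steps_per_t`, which is simply fixed here.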

  20. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    PubMed

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.
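
    The strategy described can be sketched as a loop in which a surrogate model, trained on all direct evaluations so far, pre-screens offspring so that only the most promising receive a costly direct evaluation. Everything below is a hypothetical stand-in: a 1-nearest-neighbour regressor replaces the progressively constructed neural network, and a toy maximization problem replaces the materials simulation or experiment.

```python
import random

def direct_fitness(genome):
    """Stand-in for a costly simulation or experiment (illustrative)."""
    return -sum((g - 0.7) ** 2 for g in genome)   # maximised at all genes 0.7

def surrogate_predict(archive, genome):
    """1-nearest-neighbour regression as a stand-in for the progressively
    trained neural network: predict fitness from evaluations seen so far."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(archive, key=lambda rec: dist(rec[0], genome))[1]

def nbga(dim=4, pop_size=12, gens=20, screen_factor=4, seed=3):
    rng = random.Random(seed)
    pop = [(g, direct_fitness(g))
           for g in ([rng.random() for _ in range(dim)] for _ in range(pop_size))]
    archive = list(pop)                            # all direct evaluations so far
    for _ in range(gens):
        # Breed a surplus of offspring by uniform crossover + mutation.
        offspring = []
        for _ in range(pop_size * screen_factor):
            (a, _), (b, _) = rng.sample(pop, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(dim)
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            offspring.append(child)
        # Bias the evolution: the surrogate pre-screens, and only the most
        # promising candidates receive a costly direct evaluation.
        offspring.sort(key=lambda g: surrogate_predict(archive, g), reverse=True)
        evaluated = [(g, direct_fitness(g)) for g in offspring[:pop_size]]
        archive.extend(evaluated)                  # the surrogate "learns"
        pop = sorted(pop + evaluated, key=lambda r: r[1], reverse=True)[:pop_size]
    return max(archive, key=lambda r: r[1])

best_genome, best_fit = nbga()
```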

  1. Acoustic design of rotor blades using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Han, A. Y.; Crossley, W. A.

    1995-01-01

    A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading and blade-vortex interaction noise estimates. The paper presents solutions for several variations in the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced, and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced at some expense to the power requirements.
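
    The combined-fitness runs described above can be illustrated with a plain genetic algorithm minimizing a weighted sum of two competing terms. The "noise" and "power" expressions below are hypothetical placeholders, not the paper's acoustic model.

```python
import random

def ga(fitness, dim, pop_size=30, gens=60, mut_rate=0.2, seed=11):
    """Plain genetic algorithm (sketch): tournament selection, uniform
    crossover, Gaussian mutation, one elite kept per generation. The
    fitness value is minimised."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b

    for _ in range(gens):
        nxt = [min(pop, key=fitness)]              # elitism
        while len(nxt) < pop_size:
            p, q = tournament(), tournament()
            child = [x if rng.random() < 0.5 else y for x, y in zip(p, q)]
            if rng.random() < mut_rate:
                i = rng.randrange(dim)
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical weighted fitness trading a "noise" term against a "power"
# term; the real acoustic analysis is not reproduced here.
def combined(design, w_noise=0.7, w_power=0.3):
    noise = sum((g - 0.2) ** 2 for g in design)
    power = sum((g - 0.6) ** 2 for g in design)
    return w_noise * noise + w_power * power

best = ga(combined, dim=3)
```

    Changing the weights shifts the optimum between the two objectives, mirroring the trade-off between noise reduction and power requirements noted in the abstract.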

  2. Optimal fractional order PID design via Tabu Search based algorithm.

    PubMed

    Ateş, Abdullah; Yeroglu, Celaleddin

    2016-01-01

    This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All parameter computations of the FOPID employ random initial conditions, using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method.
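
    A minimal Tabu Search of the kind used for such parameter tuning might look as follows. The FOPID cost function from the paper is not reproduced; a toy quadratic controller-tuning objective with hypothetical optimum gains stands in.

```python
def tabu_search(objective, x0, step=0.2, tabu_len=4, iters=200):
    """Minimal Tabu Search: examine the +/-step neighbourhood of each
    coordinate, always move to the best non-tabu neighbour (even when it
    is worse, which lets the search escape local minima), and forbid
    reversing recent moves via a FIFO tabu list. Aspiration criterion:
    a tabu move is allowed if it would beat the best solution found."""
    x = list(x0)
    best_x, best_e = list(x), objective(x)
    tabu = []                                   # recent (index, direction) moves
    for _ in range(iters):
        candidates = []
        for i in range(len(x)):
            for d in (-step, step):
                y = list(x)
                y[i] += d
                e = objective(y)
                if (i, -d) not in tabu or e < best_e:   # aspiration
                    candidates.append((e, i, d, y))
        e, i, d, x = min(candidates)
        tabu.append((i, d))
        if len(tabu) > tabu_len:
            tabu.pop(0)                         # FIFO expiry
        if e < best_e:
            best_x, best_e = list(x), e
    return best_x, best_e

# Toy stand-in for a controller-tuning cost, quadratic around the
# hypothetical optimum Kp=2.0, Ki=0.5, Kd=0.1 (not the FOPID objective).
sol, cost = tabu_search(
    lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2 + (p[2] - 0.1) ** 2,
    x0=[0.0, 0.0, 0.0])
```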

  3. An Analysis of Algorithmic Processes and Instructional Design.

    ERIC Educational Resources Information Center

    Schmid, Richard F.; Gerlach, Vernon S.

    1986-01-01

    Describes algorithms and shows how they can be applied to the design of instructional systems by relating them to a standard information processing model. Two studies are briefly described which tested serial and parallel processing in learning and offered guidelines for designers. Future research needs are also discussed. (LRW)

  4. Evolutionary algorithms applied to reliable communication network design

    NASA Astrophysics Data System (ADS)

    Nesmachnow, Sergio; Cancela, Hector; Alba, Enrique

    2007-10-01

    Several evolutionary algorithms (EAs) applied to a wide class of communication network design problems modelled under the generalized Steiner problem (GSP) are evaluated. In order to provide a fault-tolerant design, a solution to this problem consists of a preset number of independent paths linking each pair of potentially communicating terminal nodes. This usually requires considering intermediate non-terminal nodes (Steiner nodes), which are used to ensure path redundancy while trying to minimize the overall cost. The GSP is an NP-hard problem for which few algorithms have been proposed. This article presents a comparative study of pure and hybrid EAs applied to the GSP, codified over MALLBA, a general-purpose library for combinatorial optimization. The algorithms were tested on several GSPs, and efficient numerical results are reported for both serial and distributed models of the evaluated algorithms.

  5. A strategy for quantum algorithm design assisted by machine learning

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung

    2014-07-01

    We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.

  6. An iterative algorithm combining model reduction and control design

    NASA Technical Reports Server (NTRS)

    Hsieh, C.; Kim, J. H.; Zhu, G.; Liu, K.; Skelton, R. E.

    1990-01-01

    A design strategy which integrates model reduction by modal cost analysis and a multiobjective controller design is proposed. The necessary modeling and control algorithms are easily programmed in Matlab standard software. Hence, this method is very practical for controller design for large space structures. The design algorithm also solves the very important problem of tuning multiple loop controllers (multi-input, multi-output, or MIMO). Instead of the single gain change that is used in standard root locus and gain and phase margin theories, this method tunes multiple loop controllers from low to high gain in a systematic way in the design procedure. This design strategy is applied to NASA's Mini-Mast system.

  7. Design of an acoustic metamaterial lens using genetic algorithms.

    PubMed

    Li, Dennis; Zigoneanu, Lucian; Popa, Bogdan-Ioan; Cummer, Steven A

    2012-10-01

    The present work demonstrates a genetic algorithm approach to optimizing the effective material parameters of an acoustic metamaterial. The target device is an acoustic gradient index (GRIN) lens in air, which ideally possesses a maximized index of refraction, minimized frequency dependence of the material properties, and minimized acoustic impedance mismatch. Applying this algorithm results in complex designs with certain common features, and effective material properties that are better than those present in previous designs. After modifying the optimized unit cell designs to make them suitable for fabrication, a two-dimensional lens was built and experimentally tested. Its performance was in good agreement with simulations. Overall, the optimization approach was able to improve the refractive index but at the cost of increased frequency dependence. The optimal solutions found by the algorithm provide a numerical description of how the material parameters compete with one another and thus describes the level of performance achievable in the GRIN lens.

  8. Space shuttle configuration accounting functional design specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes to maintain baseline documentation of the space shuttle program levels II and III.

  9. On constructing optimistic simulation algorithms for the discrete event system specification

    SciTech Connect

    Nutaro, James J

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
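
    The logical-process construction can be sketched as a single optimistic process with state saving and rollback. This is a generic Time Warp skeleton under simplifying assumptions, not the article's DEVS-specific algorithm: anti-messages, GVT computation, and fossil collection are omitted, and the event handler is a hypothetical accumulator.

```python
import bisect

class LogicalProcess:
    """Minimal Time Warp logical process (sketch): events are processed
    optimistically in timestamp order; each processed event saves a state
    snapshot, and a straggler (an event older than the local clock) rolls
    the state back and re-schedules the undone events."""

    def __init__(self, handler, state):
        self.handler = handler        # (state, event) -> new state
        self.state = state
        self.lvt = 0.0                # local virtual time
        self.pending = []             # future events, kept sorted by timestamp
        self.processed = []           # (timestamp, event, state_before) history

    def receive(self, t, event):
        if t < self.lvt:                              # straggler arrived
            while self.processed and self.processed[-1][0] >= t:
                ts, ev, pre = self.processed.pop()
                self.state = pre                      # undo optimistic work
                bisect.insort(self.pending, (ts, ev)) # re-schedule undone event
            self.lvt = self.processed[-1][0] if self.processed else 0.0
        bisect.insort(self.pending, (t, event))

    def run(self):
        while self.pending:
            t, ev = self.pending.pop(0)
            self.processed.append((t, ev, self.state))  # save snapshot
            self.state = self.handler(self.state, ev)
            self.lvt = t
        return self.state

lp = LogicalProcess(lambda s, ev: s + ev, state=0)
lp.receive(1.0, 10)
lp.receive(3.0, 100)
lp.run()                  # optimistically processes t=1 and t=3
lp.receive(2.0, 1000)     # straggler: rolls back the t=3 event
final = lp.run()          # re-executes in correct timestamp order
```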

  10. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss is predicted, given the joint locations and loads.

  11. Design and analysis of Galileo sun acquisition algorithm

    NASA Technical Reports Server (NTRS)

    Lin, H.-S.

    1981-01-01

    The Galileo sun acquisition algorithm is used to align the spacecraft antenna with the sun in order to determine spacecraft attitude. It is also used to estimate the spin rate when the spacecraft antenna is not sun oriented, and is capable of performing a rhumb line turn maneuver in the case of two gyro failures. The design of the algorithm is presented in detail along with software implementation at the flowchart level. The six major portions of the algorithm are considered: initialization, sensor measurement mapping, path selection logic, sun detection logic, termination logic, and burn command generation. Analysis is performed to determine the major parameters of the algorithm, and results are verified by computer simulations.

  12. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    PubMed

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
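
    The basic PSO update underlying such methods (inertia plus cognitive and social pulls on each particle's velocity) can be sketched as below. Evolving full ANN architectures and transfer functions is beyond a short example, so a toy weight-fitting objective stands in for the MSE fitness; the target vector is purely illustrative.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=5):
    """Basic PSO: each particle keeps its personal best; the swarm keeps a
    global best; velocities blend inertia, cognitive and social terms."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_e = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_e[i])
    gbest, gbest_e = list(pbest[g]), pbest_e[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            e = objective(pos[i])
            if e < pbest_e[i]:
                pbest[i], pbest_e[i] = list(pos[i]), e
                if e < gbest_e:
                    gbest, gbest_e = list(pos[i]), e
    return gbest, gbest_e

# Toy stand-in for the MSE fitness: distance of a weight vector from a target.
target = [0.3, -0.2, 0.8]
w_opt, err = pso(lambda v: sum((a - b) ** 2 for a, b in zip(v, target)), dim=3)
```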

  13. Food Design Thinking: A Branch of Design Thinking Specific to Food Design

    ERIC Educational Resources Information Center

    Zampollo, Francesca; Peacock, Matthew

    2016-01-01

    Is there a need for a set of methods within Design Thinking tailored specifically for the Food Design process? Is there a need for a branch of Design Thinking dedicated to Food Design alone? Chefs are not generally trained in Design or Design Thinking, and we are only just beginning to understand how they ideate and what recourses are available to…

  14. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms have no obvious effect on online courses, we designed the automatic extracting course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity and to design the weights that optimize the TF-IDF algorithm output values, and the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates. PMID:26448738
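
    The TF-IDF scoring step at the heart of such a pipeline can be sketched as follows. This is only a sketch under simplifying assumptions: whitespace tokenisation stands in for Chinese word segmentation, and the classification, POS-tagging, and VSM-based weight-optimization stages are omitted; the sample documents are invented.

```python
import math
from collections import Counter

def tfidf_keypoints(docs, top_k=3):
    """Score terms with TF-IDF and return the top-scoring candidates per
    document as knowledge points. TF is the within-document frequency;
    IDF down-weights terms that appear in many documents."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter()                       # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    points = []
    for toks in tokenized:
        tf = Counter(toks)
        score = {t: (tf[t] / len(toks)) * math.log(n / df[t]) for t in tf}
        ranked = sorted(score, key=score.get, reverse=True)
        points.append(ranked[:top_k])
    return points

# Invented toy corpus; each document is one course topic.
docs = [
    "pointer arithmetic and pointer dereference in c",
    "array indexing and array bounds in c",
    "recursion base case and recursion depth in c",
]
kps = tfidf_keypoints(docs, top_k=2)
```

    Terms shared by every document ("and", "in", "c") get an IDF of zero and are never selected, while topic words rise to the top of each document's list.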

  15. The Conceptual Design Algorithm of Inland LNG Barges

    NASA Astrophysics Data System (ADS)

    Łozowicka, Dorota; Kaup, Magdalena

    2017-03-01

    The article concerns the problem of inland waterway transport of LNG. Its aim is to present an algorithm for the conceptual design of inland barges for LNG transport intended for exploitation on European waterways. The article describes the areas where LNG barges operate, depending on the allowable operating parameters on the waterways. It presents existing architectural and construction solutions of barges for inland LNG transport, as well as the necessary equipment, given the nature of the cargo. The article then presents the procedure for the conceptual design of LNG barges, including navigation restrictions and functional and economic criteria. The conceptual design algorithm for LNG barges presented in the article allows preliminary design calculations, from which the main dimensions and parameters of the unit are obtained, depending on the transport task and the class of inland waterways on which the transport will be realized.

  16. USING GENETIC ALGORITHMS TO DESIGN ENVIRONMENTALLY FRIENDLY PROCESSES

    EPA Science Inventory

    Genetic algorithm calculations are applied to the design of chemical processes to achieve improvements in environmental and economic performance. By finding the set of Pareto (i.e., non-dominated) solutions one can see how different objectives, such as environmental and economic ...

  17. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

    Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]

  18. Engineering stable cytoplasmic intrabodies with designed specificity.

    PubMed

    Donini, Marcello; Morea, Veronica; Desiderio, Angiola; Pashkoulov, Dimitre; Villani, Maria Elena; Tramontano, Anna; Benvenuto, Eugenio

    2003-07-04

    Many attempts have been made to develop antibody fragments that can be expressed in the cytoplasm ("intrabodies") in a stable and functional form. The recombinant antibody fragment scFv(F8) is characterised by peculiarly high in vitro stability and functional folding in both prokaryotic and eukaryotic cytoplasm. To dissect the relative contribution of different scFv(F8) regions to cytoplasmic stability and specificity we designed and constructed five chimeric molecules (scFv-P1 to P5) in which several groups of residues important for antigen binding in the poorly stable anti-hen egg lysozyme (HEL) scFv(D1.3) were progressively grafted onto the scFv(F8) scaffold. All five chimeric scFvs were expressed in a soluble form in the periplasm and cytoplasm of Escherichia coli. All the periplasmic oxidised forms and the scFv(P3) extracted from the cytoplasm in reducing conditions had HEL binding affinities essentially identical (K(d)=15nM) to that of the cognate scFv(D1.3) fragment (K(d)=16nM). The successful grafting of the antigen binding properties of D1.3 onto the scFv(F8) opens the road to the exploitation of this molecule as a scaffold for the reshaping of intrabodies with desired specificities to be targeted to the cytoplasm.

  19. Designing Micro- and Nanoswimmers for Specific Applications.

    PubMed

    Katuri, Jaideep; Ma, Xing; Stanton, Morgan M; Sánchez, Samuel

    2017-01-17

    Self-propelled colloids have emerged as a new class of active matter over the past decade. These are micrometer sized colloidal objects that transduce free energy from their surroundings and convert it to directed motion. The self-propelled colloids are in many ways, the synthetic analogues of biological self-propelled units such as algae or bacteria. Although they are propelled by very different mechanisms, biological swimmers are typically powered by flagellar motion and synthetic swimmers are driven by local chemical reactions, they share a number of common features with respect to swimming behavior. They exhibit run-and-tumble like behavior, are responsive to environmental stimuli, and can even chemically interact with nearby swimmers. An understanding of self-propelled colloids could help us in understanding the complex behaviors that emerge in populations of natural microswimmers. Self-propelled colloids also offer some advantages over natural microswimmers, since the surface properties, propulsion mechanisms, and particle geometry can all be easily modified to meet specific needs. From a more practical perspective, a number of applications, ranging from environmental remediation to targeted drug delivery, have been envisioned for these systems. These applications rely on the basic functionalities of self-propelled colloids: directional motion, sensing of the local environment, and the ability to respond to external signals. Owing to the vastly different nature of each of these applications, it becomes necessary to optimize the design choices in these colloids. There has been a significant effort to develop a range of synthetic self-propelled colloids to meet the specific conditions required for different processes. 
Tubular self-propelled colloids, for example, are ideal for decontamination processes, owing to their bubble propulsion mechanism, which enhances mixing in systems, but are incompatible with biological systems due to the toxic propulsion fuel and

  20. Designing Micro- and Nanoswimmers for Specific Applications

    PubMed Central

    2016-01-01

    Conspectus Self-propelled colloids have emerged as a new class of active matter over the past decade. These are micrometer sized colloidal objects that transduce free energy from their surroundings and convert it to directed motion. The self-propelled colloids are in many ways, the synthetic analogues of biological self-propelled units such as algae or bacteria. Although they are propelled by very different mechanisms, biological swimmers are typically powered by flagellar motion and synthetic swimmers are driven by local chemical reactions, they share a number of common features with respect to swimming behavior. They exhibit run-and-tumble like behavior, are responsive to environmental stimuli, and can even chemically interact with nearby swimmers. An understanding of self-propelled colloids could help us in understanding the complex behaviors that emerge in populations of natural microswimmers. Self-propelled colloids also offer some advantages over natural microswimmers, since the surface properties, propulsion mechanisms, and particle geometry can all be easily modified to meet specific needs. From a more practical perspective, a number of applications, ranging from environmental remediation to targeted drug delivery, have been envisioned for these systems. These applications rely on the basic functionalities of self-propelled colloids: directional motion, sensing of the local environment, and the ability to respond to external signals. Owing to the vastly different nature of each of these applications, it becomes necessary to optimize the design choices in these colloids. There has been a significant effort to develop a range of synthetic self-propelled colloids to meet the specific conditions required for different processes. Tubular self-propelled colloids, for example, are ideal for decontamination processes, owing to their bubble propulsion mechanism, which enhances mixing in systems, but are incompatible with biological systems due to the toxic

  1. Design of transonic airfoils and wings using a hybrid design algorithm

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Smith, Leigh A.

    1987-01-01

    A method has been developed for designing airfoils and wings at transonic speeds. It utilizes a hybrid design algorithm in an iterative predictor/corrector approach, alternating between an analysis code and a design module. This method has been successfully applied to a variety of airfoil and wing design problems, including both transport and highly swept fighter wing configurations. An efficient approach to viscous airfoil design and the effect of including static aeroelastic deflections in the wing design process are also illustrated.

  2. An Algorithm for the Mixed Transportation Network Design Problem.

    PubMed

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately.
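
    The alternating fix-and-optimize loop at the heart of DDIA can be sketched on a toy mixed problem. The objective below is a hypothetical stand-in, not the paper's bilevel traffic model: alternately fix the discrete variable and solve the continuous subproblem, then fix the continuous variable and re-optimize the discrete one.

```python
# Toy sketch of the dimension-down idea: the objective and its variables
# are made up for illustration, not taken from the paper.

def solve_mixed(a, cost, x0=0, max_iter=20):
    """Minimize (y - a[x])**2 + cost[x] over discrete x and continuous y."""
    x = x0
    for _ in range(max_iter):
        y = a[x]                               # continuous step: closed form
        x_new = min(range(len(a)),             # discrete step: enumerate
                    key=lambda i: (y - a[i]) ** 2 + cost[i])
        if x_new == x:                         # fixed point reached
            break
        x = x_new
    return x, y

x, y = solve_mixed(a=[1.0, 4.0, 2.0], cost=[3.0, 0.5, 1.0])
```

    With x0=0 the iteration settles at x=2 (objective value 1.0), while starting from x0=1 it stays at the global optimum x=1 (objective value 0.5), mirroring the initial-value sensitivity the abstract reports for the budget-constrained case.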

  3. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803

  4. Penetrator reliability investigation and design exploration : from conventional design processes to innovative uncertainty-capturing algorithms.

    SciTech Connect

    Martinez-Canales, Monica L.; Heaphy, Robert; Gramacy, Robert B.; Taddy, Matt; Chiesa, Michael L.; Thomas, Stephen W.; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Trucano, Timothy Guy; Gray, Genetha Anne

    2006-11-01

    This project focused on research and algorithmic development in optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking into account uncertainty, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates. We developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models. We developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB® and initial DAKOTA implementations of our algorithms.

  5. Full design of fuzzy controllers using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Homaifar, Abdollah; Mccormick, ED

    1992-01-01

    This paper examines the applicability of genetic algorithms (GA) in the complete design of fuzzy logic controllers. While GA has been used before in the development of rule sets or high-performance membership functions, the interdependence between these two components dictates that they should be designed simultaneously. GA is fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. We show the application of this new method to the development of a cart controller.

  6. Compiler writing system detail design specification. Volume 1: Language specification

    NASA Technical Reports Server (NTRS)

    Arthur, W. J.

    1974-01-01

    Construction within the Meta language for both language and target machine specification is reported. The elements of the function language, both its meaning and its syntax, are presented, and the structure of the target language, which represents the target-dependent object text representation of applications programs, is described.

  7. Compressive imaging system design using task-specific information.

    PubMed

    Ashok, Amit; Baheti, Pawan K; Neifeld, Mark A

    2008-09-01

    We present a task-specific information (TSI) based framework for designing compressive imaging (CI) systems. The task of target detection is chosen to demonstrate the performance of the optimized CI system designs relative to a conventional imager. In our optimization framework, we first select a projection basis and then find the associated optimal photon-allocation vector in the presence of a total photon-count constraint. Several projection bases, including principal components (PC), independent components, generalized matched-filter, and generalized Fisher discriminant (GFD) are considered for candidate CI systems, and their respective performance is analyzed for the target-detection task. We find that the TSI-optimized CI system design based on a GFD projection basis outperforms all other candidate CI system designs as well as the conventional imager. The GFD-based compressive imager yields a TSI of 0.9841 bits (out of a maximum possible 1 bit for the detection task), which is nearly ten times the 0.0979 bits achieved by the conventional imager at a signal-to-noise ratio of 5.0. We also discuss the relation between the information-theoretic TSI metric and a conventional statistical metric like probability of error in the context of the target-detection problem. It is shown that the TSI can be used to derive an upper bound on the probability of error that can be attained by any detection algorithm.

  8. Specification-based Error Recovery: Theory, Algorithms, and Usability

    DTIC Science & Technology

    2013-02-01

    The basis of the methodology is a view of the specification as a non-deterministic implementation, which may permit a high degree of non-determinism...developed, optimized and rigorously evaluated in this project. It leveraged the Alloy specification language and its SAT-based tool-set as an enabling...a high degree of non-determinism. The key insight is to use likely correct actions by an otherwise erroneous execution to prune the non-determinism

  9. Using Genetic Algorithms to Converge on Molecules with Specific Properties

    NASA Astrophysics Data System (ADS)

    Foster, Stephen; Lindzey, Nathan; Rogers, Jon; West, Carl; Potter, Walt; Smith, Sean; Alexander, Steven

    2007-10-01

    Although it can be a straightforward matter to determine the properties of a molecule from its structure, the inverse problem is much more difficult. We have chosen to generate molecules by using a genetic algorithm, a computer simulation that models biological evolution and natural selection. By creating a population of randomly generated molecules, we can apply a process of selection, mutation, and recombination to ensure that the best members of the population (i.e. those molecules that possess many of the qualities we are looking for) survive, while the worst members of the population ``die.'' The best members are then modified by random mutation and by ``mating'' with other molecules to produce ``offspring.'' After many hundreds (or thousands) of iterations, one hopes that the population will get better and better---that is, that the properties of the individuals in the population will more and more closely match the properties we want.
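
    The select/mutate/recombine loop described above can be sketched in a few lines. Here the "molecule" is just a list of integers and the "property" is their sum, a hypothetical stand-in for a real property calculator; everything else is the standard elitist GA recipe.

```python
import random

# Minimal GA sketch: evolve integer "molecules" whose property (their sum)
# should converge to a target value. The encoding and fitness are made up.

def fitness(mol, target):
    return -abs(sum(mol) - target)            # higher is better

def evolve(target, size=40, genes=8, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 9) for _ in range(genes)] for _ in range(size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, target), reverse=True)
        survivors = pop[: size // 2]          # selection: best half lives
        children = []
        while len(survivors) + len(children) < size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, genes)     # one-point recombination
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:            # random mutation
                child[rng.randrange(genes)] = rng.randint(0, 9)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, target))

best = evolve(target=42)
```

    Because the best half survives unchanged, the best fitness seen never decreases, which is the "best members survive" behavior the abstract describes.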

  10. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
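
    The entropy criterion at the core of the approach can be sketched as follows. The models, experiments, and predictor below are hypothetical stand-ins, and the rising-threshold bookkeeping of nested entropy sampling is omitted; only the "pick the experiment whose predicted outcomes are most uncertain" step is shown.

```python
from collections import Counter
from math import log2

# Sketch of entropy-based experiment selection: among candidate
# experiments, choose the one whose predicted outcomes (over a set of
# probable models) have maximum Shannon entropy.

def outcome_entropy(predictions):
    """Shannon entropy (bits) of a list of discrete predicted outcomes."""
    counts = Counter(predictions)
    n = len(predictions)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def most_informative(experiments, models, predict):
    """Pick the experiment whose outcome distribution is most uncertain."""
    return max(experiments,
               key=lambda e: outcome_entropy([predict(m, e) for m in models]))

# Hypothetical example: models are threshold values; an experiment
# probes location x and yields a binary outcome.
models = [0.2, 0.4, 0.6, 0.8]
predict = lambda theta, x: int(x > theta)
best_x = most_informative([0.1, 0.5, 0.9], models, predict)
```

    Probing at x=0.5 splits the candidate models evenly (entropy of 1 bit), whereas x=0.1 and x=0.9 yield the same outcome under every model (entropy 0), so the criterion selects x=0.5.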

  11. A disturbance based control/structure design algorithm

    NASA Technical Reports Server (NTRS)

    Mclaren, Mark D.; Slater, Gary L.

    1989-01-01

    Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.

  12. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. For matrices with entries from a field, Gaussian elimination plays a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring, for which Gaussian elimination is not defined; triangularization is instead accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
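
    The error-free flavor of such algorithms can be illustrated with exact rational arithmetic. The polynomial-division routine below is a minimal sketch of one Euclidean-elimination building block, not the authors' triangularization code: by keeping coefficients as exact fractions, the remainder comes out exactly zero where floating point would leave round-off residue.

```python
from fractions import Fraction

# Exact polynomial division over the rationals. Polynomials are lists of
# coefficients, highest degree first; illustration only.

def polydiv(num, den):
    """Exact quotient and remainder of num / den."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        factor = num[0] / den[0]              # eliminate the leading term
        quot.append(factor)
        padded = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - factor * b for a, b in zip(num, padded)][1:]
    return quot, num

# (x^2 - 1) / (x - 1) = x + 1 exactly, remainder 0
q, r = polydiv([1, 0, -1], [1, -1])
```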

  13. Orthogonalizing EM: A design-based least squares algorithm.

    PubMed

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online.
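
    A minimal sketch of the ordinary-least-squares case, under my reading of the abstract: choose a constant d at least as large as the largest eigenvalue of X'X (so that rows can be appended to make the augmented design orthogonal), after which the E- and M-steps collapse to a single closed-form update. This is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

# Sketch of an OEM-style iteration for OLS (assumed reading of the
# abstract): the fixed point of the update satisfies X'X beta = X'y.

def oem_ols(X, y, iters=2000):
    d = np.linalg.eigvalsh(X.T @ X).max()        # orthogonalizing constant
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = beta + X.T @ (y - X @ beta) / d   # one E/M sweep
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)
beta_hat = oem_ols(X, y)
```

    The iterate matches the least-squares solution; each sweep costs only matrix-vector products, which is one reason such schemes scale well when n is much larger than p.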

  14. Orthogonalizing EM: A design-based least squares algorithm

    PubMed Central

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

    2016-01-01

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558

  15. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, produce designs whose actual amplifier performance falls substantially short of the design prediction. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, which is why this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space-science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.
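
    The core robustness idea, scoring designs by expected performance over sampled tolerance perturbations rather than by performance at the nominal dimension alone, can be sketched with a made-up efficiency curve (the real algorithm models an actual TWT circuit): a tall narrow peak representing a fragile design next to a lower broad peak representing a robust one.

```python
import math
import random

# Hypothetical efficiency-vs-dimension curve: a sharp peak at d = 1.0
# (great nominally, fragile under tolerance) and a broad peak at d = 1.2.

def efficiency(d):
    return (1.0 * math.exp(-((d - 1.0) / 0.01) ** 2)
            + 0.8 * math.exp(-((d - 1.2) / 0.10) ** 2))

def robust_value(d, sigma=0.05, n=400, seed=0):
    """Monte Carlo average of efficiency under dimensional tolerance."""
    rng = random.Random(seed)   # common random numbers across designs
    return sum(efficiency(d + rng.gauss(0.0, sigma)) for _ in range(n)) / n

grid = [0.9 + 0.005 * i for i in range(81)]      # candidate dimensions
d_nominal = max(grid, key=efficiency)            # ignores tolerances
d_robust = max(grid, key=robust_value)           # tolerance-aware choice
```

    The nominal optimizer picks the sharp peak near 1.0; once tolerance scatter is averaged in, the broad peak near 1.2 wins, which is the qualitative behavior the robust design algorithm exploits.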

  16. Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard

    2005-01-01

    Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.

  17. Thrust vector control algorithm design for the Cassini spacecraft

    NASA Technical Reports Server (NTRS)

    Enright, Paul J.

    1993-01-01

    This paper describes a preliminary design of the thrust vector control algorithm for the interplanetary spacecraft, Cassini. Topics of discussion include flight software architecture, modeling of sensors, actuators, and vehicle dynamics, and controller design and analysis via classical methods. Special attention is paid to potential interactions with structural flexibilities and propellant dynamics. Controller performance is evaluated in a simulation environment built around a multi-body dynamics model, which contains nonlinear models of the relevant hardware and preliminary versions of supporting attitude determination and control functions.

  18. Diagonal dominance using function minimization algorithms. [multivariable control system design

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.

    1977-01-01

    A new approach to the design of multivariable control systems using the inverse Nyquist array method is proposed. The technique utilizes a conjugate direction function minimization algorithm to achieve dominance over a specified frequency range by minimizing the ratio of the moduli of the off-diagonal terms to the moduli of the diagonal term of the inverse open loop transfer function matrix. The technique is easily implemented in either a batch or interactive computer mode and will yield diagonalization when previously suggested methods fail. The proposed method has been successfully applied to design a control system for a sixteenth order state model of the F-100 turbofan engine with three inputs.
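
    The dominance objective can be sketched for a hypothetical 2x2 transfer matrix; a coarse grid search over a constant precompensator K = [[1, a], [b, 1]] stands in for the conjugate-direction minimizer used in the paper.

```python
import numpy as np

# Toy dominance minimization: sum, over a frequency grid, the ratios of
# off-diagonal to diagonal moduli of the inverse open-loop matrix. The
# plant G(jw) below is made up for illustration.

def G(w):
    s = 1j * w
    return np.array([[1 / (s + 1), 0.5 / (s + 2)],
                     [0.4 / (s + 3), 1 / (s + 1)]])

def dominance(a, b, freqs):
    K = np.array([[1.0, a], [b, 1.0]])
    total = 0.0
    for w in freqs:
        Q = np.linalg.inv(G(w) @ K)              # inverse open-loop matrix
        total += abs(Q[0, 1]) / abs(Q[0, 0]) + abs(Q[1, 0]) / abs(Q[1, 1])
    return total

freqs = np.linspace(0.1, 10.0, 25)
grid = np.linspace(-0.9, 0.9, 37)                # keeps det(K) = 1 - a*b > 0
a_best, b_best = min(((a, b) for a in grid for b in grid),
                     key=lambda p: dominance(p[0], p[1], freqs))
```

    The compensator found this way can only improve on the uncompensated case (a = b = 0), since that point is in the search grid.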

  19. Thrust vector control algorithm design for the Cassini spacecraft

    NASA Astrophysics Data System (ADS)

    Enright, Paul J.

    1993-02-01

    This paper describes a preliminary design of the thrust vector control algorithm for the interplanetary spacecraft, Cassini. Topics of discussion include flight software architecture, modeling of sensors, actuators, and vehicle dynamics, and controller design and analysis via classical methods. Special attention is paid to potential interactions with structural flexibilities and propellant dynamics. Controller performance is evaluated in a simulation environment built around a multi-body dynamics model, which contains nonlinear models of the relevant hardware and preliminary versions of supporting attitude determination and control functions.

  20. Optimal brushless DC motor design using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.

    2010-11-01

    This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.

  1. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem.
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  3. An efficient parallel algorithm for accelerating computational protein design

    PubMed Central

    Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang

    2014-01-01

    Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we make some efforts to address the memory exceeding problem in A* search. As a result, our enhancements can achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm could be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
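
    The A*-over-rotamers formulation can be sketched with made-up energies: each position chooses one rotamer, the total energy is a sum of unary and pairwise terms, and the heuristic sums the cheapest unary term at each unassigned position. With nonnegative pairwise terms that heuristic is admissible, so the search provably returns the minimum-energy assignment (the GMEC). The GPU parallelization and DEE pruning of the paper are omitted.

```python
import heapq

# Tiny A* sketch of GMEC search. Energies are invented for illustration;
# pair(...) is nonnegative, so the cheapest-unary heuristic is admissible.

unary = [[1.0, 2.0], [0.5, 0.1], [3.0, 1.0]]     # unary[pos][rotamer]

def pair(p, rp, q, rq):
    return 0.2 * abs(rp - rq) * (q - p)          # nonnegative coupling

def gmec(unary):
    n = len(unary)
    h = lambda k: sum(min(unary[p]) for p in range(k, n))
    queue = [(h(0), 0.0, ())]                    # (g + h, g, partial assign)
    while queue:
        f, g, assign = heapq.heappop(queue)
        k = len(assign)
        if k == n:                               # first full pop is optimal
            return assign, g
        for r in range(len(unary[k])):
            g2 = g + unary[k][r] + sum(pair(p, assign[p], k, r)
                                       for p in range(k))
            heapq.heappush(queue, (g2 + h(k + 1), g2, assign + (r,)))

best, energy = gmec(unary)
```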

  4. Chiral metamaterial design using optimized pixelated inclusions with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Akturk, Cemal; Karaaslan, Muharrem; Ozdemir, Ersin; Ozkaner, Vedat; Dincer, Furkan; Bakir, Mehmet; Ozer, Zafer

    2015-03-01

    Chiral metamaterials have attracted considerable research interest due to their ability to rotate the polarization of electromagnetic waves. However, most of the proposed chiral metamaterials are designed based on experience or on time-consuming, inefficient simulations. A method is investigated for designing a chiral metamaterial with a strong and natural chirality admittance by optimizing a grid of metallic pixels through both sides of a dielectric sheet placed perpendicular to the incident wave, using a genetic algorithm (GA) technique based on a finite element method solver. The effective medium parameters are obtained by using constitutive equations and S parameters. The proposed methodology is very efficient for designing a chiral metamaterial with the desired effective medium parameters. By using GA-based topology optimization, it is proven that a chiral metamaterial can be designed and manufactured more easily and at low cost.

  5. An Adaptive Hybrid Genetic Algorithm for Improved Groundwater Remediation Design

    NASA Astrophysics Data System (ADS)

    Espinoza, F. P.; Minsker, B. S.; Goldberg, D. E.

    2001-12-01

    Identifying optimal designs for a groundwater remediation system is computationally intensive, especially for complex, nonlinear problems such as enhanced in situ bioremediation technology. To improve performance, we apply a hybrid genetic algorithm (HGA), which is a two-step solution method: a genetic algorithm (GA) for global search using the entire population and then a local search (LS) to improve search speed for only a few individuals in the population. We implement two types of HGAs: a non-adaptive HGA (NAHGA), whose operations are invariant throughout the run, and a self-adaptive HGA (SAHGA), whose operations adapt to the performance of the algorithm. The best settings of the two HGAs for optimal performance are then investigated for a groundwater remediation problem. The settings include the frequency of LS with respect to the normal GA evaluation, probability of individual selection for LS, evolution criterion for LS (Lamarckian or Baldwinian), and number of local search iterations. A comparison of the algorithms' performance under different settings will be presented.
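
    The GA-plus-local-search structure described above, with its frequency and selection settings, can be sketched on a toy one-dimensional objective (not a remediation model); here the local search runs every `ls_every` generations on the `ls_top` best individuals, Lamarckian style (improved individuals replace their originals).

```python
import random

# Compact hybrid-GA sketch on a made-up objective with minimum at x = 3.

def f(x):
    return (x - 3.0) ** 2

def hill_climb(x, step=1.0, iters=40):
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
        step *= 0.9                              # shrink the neighborhood
    return x

def hybrid_ga(pop_size=20, gens=40, ls_every=5, ls_top=3, seed=2):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for gen in range(gens):
        pop.sort(key=f)
        if gen % ls_every == 0:                  # periodic local search
            pop[:ls_top] = [hill_climb(x) for x in pop[:ls_top]]
            pop.sort(key=f)
        parents = pop[: pop_size // 2]           # elitist selection
        children = [(rng.choice(parents) + rng.choice(parents)) / 2
                    + rng.gauss(0.0, 0.2)        # blend crossover + mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=f)

best = hybrid_ga()
```

    Tuning `ls_every` and `ls_top` trades global exploration against local refinement speed, which is exactly the settings comparison the abstract describes.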

  6. An adaptive multimeme algorithm for designing HIV multidrug therapies.

    PubMed

    Neri, Ferrante; Toivanen, Jari; Cascella, Giuseppe Leonardo; Ong, Yew-Soon

    2007-01-01

    This paper proposes a period representation for modeling multidrug HIV therapies and an Adaptive Multimeme Algorithm (AMmA) for designing the optimal therapy. The period representation offers benefits in terms of flexibility and reduction in dimensionality compared to the binary representation. The AMmA is a memetic algorithm which employs a list of three local searchers adaptively activated by an evolutionary framework. These local searchers, having different features according to the exploration logic and the pivot rule, have the role of exploring the decision space from different and complementary perspectives and, thus, assisting the standard evolutionary operators in the optimization process. Furthermore, the AMmA makes use of an adaptation which dynamically sets the algorithmic parameters in order to prevent stagnation and premature convergence. The numerical results demonstrate that the application of the proposed algorithm leads to very efficient medication schedules which quickly stimulate a strong immune response to HIV. Earlier termination of the medication schedule reduces the unpleasant side effects that strong antiretroviral therapy causes for the patient. A numerical comparison shows that the AMmA is more efficient than three popular metaheuristics. Finally, a statistical test based on the calculation of the tolerance interval confirms the superiority of the AMmA compared to the other methods for the problem under study.

  7. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 6 2010-10-01 2010-10-01 false Separator: Design specification. 162.050-21 Section 162... Separator: Design specification. (a) A separator must be designed to operate in each plane that forms an.... (c) Each separator component that is a moving part must be designed so that its movement...

  8. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 6 2011-10-01 2011-10-01 false Separator: Design specification. 162.050-21 Section 162... Separator: Design specification. (a) A separator must be designed to operate in each plane that forms an.... (c) Each separator component that is a moving part must be designed so that its movement...

  9. 46 CFR 162.050-21 - Separator: Design specification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 6 2014-10-01 2014-10-01 false Separator: Design specification. 162.050-21 Section 162... Separator: Design specification. (a) A separator must be designed to operate in each plane that forms an.... (c) Each separator component that is a moving part must be designed so that its movement...

  10. Design of OFDM radar pulses using genetic algorithm based techniques

    NASA Astrophysics Data System (ADS)

    Lellouch, Gabriel; Mishra, Amit Kumar; Inggs, Michael

    2016-08-01

    The merit of evolutionary algorithms (EA) to solve convex optimization problems is widely acknowledged. In this paper, a genetic algorithm (GA) optimization based waveform design framework is used to improve the features of radar pulses relying on the orthogonal frequency division multiplexing (OFDM) structure. Our optimization techniques focus on finding optimal phase code sequences for the OFDM signal. Several optimality criteria are used, since we consider two different radar processing solutions which call for either single- or multiple-objective optimizations. When minimization of the single objective, the so-called peak-to-mean envelope power ratio (PMEPR), is tackled, we compare our findings with existing methods and emphasize the merit of our approach. In the scope of the two-objective optimization, we first address PMEPR and the peak-to-sidelobe level ratio (PSLR) and show that our approach based on the non-dominated sorting genetic algorithm-II (NSGA-II) provides design solutions with noticeable improvements over random sets of phase codes. We then look at another case of interest where the objective functions are two measures of the sidelobe level, namely PSLR and the integrated-sidelobe level ratio (ISLR), and propose to modify the NSGA-II to include a constraint on the PMEPR instead. In the last part, we illustrate via a case study how our encoding solution makes it possible to minimize the single-objective PMEPR while enabling a target detection enhancement strategy when the SNR metric is chosen for the detection framework.
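
    The PMEPR objective that the GA minimizes can be computed directly from a phase code. A small sketch, assuming unit-amplitude subcarriers and one code entry per subcarrier (the normalisation and oversampling factor are assumptions, not the paper's exact setup):

```python
import cmath

def pmepr(phases, oversample=8):
    """Peak-to-mean envelope power ratio of an OFDM pulse with one
    phase code entry per subcarrier, sampled over one symbol period."""
    n = len(phases)
    m = n * oversample
    powers = []
    for k in range(m):
        t = k / m
        s = sum(cmath.exp(1j * (2 * cmath.pi * i * t + phases[i]))
                for i in range(n))
        powers.append(abs(s) ** 2)
    return max(powers) / (sum(powers) / len(powers))

# All-zero phases: subcarriers add coherently at t = 0, so PMEPR = N.
flat = pmepr([0.0] * 16)
# Quadratic (chirp-like, Newman-style) phases: a classical low-PMEPR reference.
newman = pmepr([cmath.pi * (i ** 2) / 16 for i in range(16)])
```

    A GA over the phase vector would use this function (or a multi-objective variant adding PSLR/ISLR) as its fitness.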

  11. Radiological containment selection, design, and specification guide

    SciTech Connect

    Brown, R.L.

    1994-11-01

    This document provides guidance to Tank Waste Remediation Systems personnel in determining what containment is appropriate for work activities, what containments are available, general applications of each, design criteria, and other information needed to make informed decisions concerning containment application.

  12. As-built design specification for MISMAP

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Cheng, D. E.; Tompkins, M. A. (Principal Investigator)

    1981-01-01

    The MISMAP program, which is part of the CLASFYT package, is described. The program is designed to compare classification values with ground truth values for a segment and produce a comparison map and summary table.

  13. Advanced Non-Linear Control Algorithms Applied to Design Highly Maneuverable Autonomous Underwater Vehicles (AUVs)

    DTIC Science & Technology

    2007-08-01

    Advanced non-linear control algorithms applied to design highly maneuverable Autonomous Underwater Vehicles (AUVs) Vladimir Djapic, Jay A. Farrell...hierarchical such that an "inner loop" non-linear controller (outputs the appropriate thrust values) is the same for all mission scenarios while a...library of "outer-loop" non-linear controllers is available to implement specific maneuvering scenarios. On top of the outer loop is the mission planner

  14. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  15. Protein design algorithms predict viable resistance to an experimental antifolate.

    PubMed

    Reeve, Stephanie M; Gainza, Pablo; Frey, Kathleen M; Georgiev, Ivelin; Donald, Bruce R; Anderson, Amy C

    2015-01-20

    Methods to accurately predict potential drug target mutations in response to early-stage leads could drive the design of more resilient first generation drug candidates. In this study, a structure-based protein design algorithm (K* in the OSPREY suite) was used to prospectively identify single-nucleotide polymorphisms that confer resistance to an experimental inhibitor effective against dihydrofolate reductase (DHFR) from Staphylococcus aureus. Four of the top-ranked mutations in DHFR were found to be catalytically competent and resistant to the inhibitor. Selection of resistant bacteria in vitro reveals that two of the predicted mutations arise in the background of a compensatory mutation. Using enzyme kinetics, microbiology, and crystal structures of the complexes, we determined the fitness of the mutant enzymes and strains, the structural basis of resistance, and the compensatory relationship of the mutations. To our knowledge, this work illustrates the first application of protein design algorithms to prospectively predict viable resistance mutations that arise in bacteria under antibiotic pressure.
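
    The K* score that OSPREY approximates is a ratio of conformational partition functions, used as a proxy for the binding constant. A minimal sketch of that quantity with toy energy values; in the real algorithm the ensembles and energies come from the protein design search and a force field, so everything numeric here is illustrative.

```python
import math

def boltzmann_z(energies, RT=0.6):
    """Boltzmann-weighted partition function over a conformational
    ensemble (energies in kcal/mol; RT ~ 0.6 kcal/mol near 300 K)."""
    return sum(math.exp(-e / RT) for e in energies)

def k_star(complex_E, protein_E, ligand_E, RT=0.6):
    """K* score: partition function of the bound complex divided by
    those of the free protein and free ligand."""
    return boltzmann_z(complex_E, RT) / (
        boltzmann_z(protein_E, RT) * boltzmann_z(ligand_E, RT))

# Toy ensembles: a mutation that raises the complex energies lowers K*,
# which is how candidate resistance mutations can be ranked against an inhibitor.
wild_type = k_star([-12.0, -11.5], [-1.0, -0.5], [0.0])
mutant    = k_star([-8.0, -7.5],  [-1.0, -0.5], [0.0])
```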

  16. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
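
    The iterative loop described above can be sketched in its simplest (aggressive space mapping, unit-Jacobian) form. The models here are one-dimensional toys: in the paper the fine model is an FDFD solve and the coarse model a transmission-line circuit, so the functions and the exact parameter extraction below are assumptions for illustration.

```python
import math

def space_mapping(fine, extract, coarse_opt, x0, iters=10, tol=1e-9):
    """Bare-bones aggressive space mapping with an identity mapping
    Jacobian: repeatedly extract the coarse-model parameter matching
    the fine response, then step to cancel the misalignment."""
    x = x0
    for _ in range(iters):
        z = extract(fine(x))      # parameter extraction
        step = z - coarse_opt     # misalignment of the mapped point
        if abs(step) < tol:
            break
        x -= step                 # quasi-Newton update, identity Jacobian
    return x

# Toy stand-ins: the fine response is the coarse response exp(x) with
# the design variable shifted by 0.3; the target response level is 1.0.
fine = lambda x: math.exp(x - 0.3)
coarse_opt = 0.0                    # coarse model exp(x) hits 1.0 at x = 0
extract = lambda y: math.log(y)     # inverts the coarse model exactly
x_star = space_mapping(fine, extract, coarse_opt, x0=0.0)
```

    Because only `fine` is assumed expensive, the loop needs very few fine-model evaluations, which is the source of the speedup the paper reports.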

  17. IMCS reflight certification requirements and design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The requirements for reflight certification are established. Software requirements encompass the software programs that are resident in the PCC, DEP, PDSS, EC, or any related GSE. A design approach for the reflight software packages is recommended. These designs will be of sufficient detail to permit the implementation of reflight software. The PDSS/IMC Reflight Certification system provides the tools and mechanisms for the user to perform the reflight certification test procedures, test data capture, test data display, and test data analysis. The system as defined will be structured to permit maximum automation of reflight certification procedures and test data analysis.

  18. Comparison of the specificity of implantable dual chamber defibrillator detection algorithms.

    PubMed

    Hintringer, Florian; Deibl, Martina; Berger, Thomas; Pachinger, Otmar; Roithinger, Franz Xaver

    2004-07-01

    The aim of the study was to compare the specificity of dual chamber ICD detection algorithms for correct classification of supraventricular tachyarrhythmias, using clinical studies of different sizes in order to detect an impact of sample size on specificity. Furthermore, the study sought to compare the specificities of detection algorithms calculated from clinical data with the specificity calculated from simulations of tachyarrhythmias. A survey was conducted of all available sources providing data regarding the specificity of five dual chamber ICDs. The specificity was correlated with the number of patients included, the number of episodes, and the number of supraventricular tachyarrhythmias recorded. The simulation was performed using tachyarrhythmias recorded in the electrophysiology laboratory. The number of patients included in the studies ranged from 78 to 1,029, the total number of episodes recorded ranged from 362 to 5,788, and the number of supraventricular tachyarrhythmias used for calculation of the specificity for correct detection of these arrhythmias ranged from 100 (Biotronik) to 1,662 (Medtronic). The specificity for correct detection of supraventricular tachyarrhythmias was 90% (Biotronik), 89% (ELA Medical), 89% (Guidant), 68% (Medtronic), and 76% (St. Jude Medical). There was an inverse correlation (r = -0.9, P = 0.037) between the specificity for correct classification of supraventricular tachyarrhythmias and the number of patients. The specificity for correct detection of supraventricular tachyarrhythmias calculated from the simulation, after correction for the clinical prevalence of the simulated tachyarrhythmias, was 95% (Biotronik), 99% (ELA Medical), 94% (Guidant), 93% (Medtronic), and 92% (St. Jude Medical). In conclusion, the specificity of ICD detection algorithms calculated from clinical studies or registries may depend on the number of patients studied. Therefore, a direct comparison between different detection algorithms

  19. Design principles and algorithms for automated air traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  20. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

    This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
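
    The core FCFS step described above can be sketched in a few lines: order aircraft by estimated time of arrival, then schedule each no earlier than its ETA and no closer than the required separation to the previous arrival. This is a single-runway sketch with one constant separation value, an assumption standing in for the TMA's aircraft-class-dependent separation matrix and runway allocation.

```python
def fcfs_schedule(etas, separation=90.0):
    """FCFS runway scheduling: returns scheduled times of arrival (STAs),
    one per aircraft, in the same order as the input ETAs (seconds)."""
    order = sorted(range(len(etas)), key=lambda i: etas[i])  # FCFS order by ETA
    stas = [0.0] * len(etas)
    prev = None
    for i in order:
        stas[i] = etas[i] if prev is None else max(etas[i], prev + separation)
        prev = stas[i]
    return stas

etas = [100.0, 130.0, 400.0, 145.0]
stas = fcfs_schedule(etas)
delays = [s - e for s, e in zip(stas, etas)]   # delay absorbed by each aircraft
```

    Delay allocation between high- and low-altitude airspace would then split each aircraft's delay across its route segments, which is the cost-minimization step the report derives.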

  1. Implementation of a combined algorithm designed to increase the reliability of information systems: simulation modeling

    NASA Astrophysics Data System (ADS)

    Popov, A.; Zolotarev, V.; Bychkov, S.

    2016-11-01

    This paper examines the results of experimental studies of a previously presented combined algorithm designed to increase the reliability of information systems. Data illustrating the organization and conduct of the studies are provided. As part of the study, the experimental data from simulation modeling were compared with data from the operation of a real information system. The hypothesis of the homogeneity of the logical structure of information systems was formulated, making it possible to reconfigure the presented algorithm, more specifically, to transform it into a model for the analysis and prediction of arbitrary information systems. The results presented can be used for further research in this direction. The ability to predict the functioning of information systems can be used for strategic and economic planning. The algorithm can also serve as a means of providing information security.

  2. Geometric design of mechanical linkages for contact specifications

    NASA Astrophysics Data System (ADS)

    Robson, Nina Patarinsky

    2008-10-01

    This dissertation focuses on the kinematic synthesis of mechanical linkages in order to guide an end-effector so that it maintains contact with specified objects in its workspace. Assuming the serial chain does not have full mobility in its workspace, the contact geometry is used to determine the dimensions of the serial chain. The approach to this problem is to use the relative curvature of the contact of the end-effector with one or more objects to define velocity and acceleration specifications for its movement. This provides kinematic constraints that are used to synthesize the dimensions of the serial chain. The mathematical formulation of the geometric design problem leads to systems of multivariable polynomial equations, which are solved exactly using sparse matrix resultants and polynomial homotopy methods. The results from this research yield planar RR and 4R linkages that match a specified contact geometry, and spatial TS, parallel RRS, and perpendicular RRS linkages that meet a required acceleration specification. A new strategy for robot recovery from actuator failures is demonstrated for the Mars Exploratory Rover Arm. In extending this work to spatial serial chains, a new method based on sparse matrix resultants was developed, which solves exact synthesis problems with acceleration constraints. Further, the research builds on the theoretical concepts of contact relationships for spatial movement. The connection between kinematic synthesis and contact problems, and its extension to spatial synthesis, is developed in this dissertation for the first time and constitutes a new contribution. The results, which rely upon the use of surface curvature effects to reduce the number of fixtures needed to immobilize an object, find applications in robot grasping and part-fixturing. The recovery strategy presented in this research is also a new concept. The recognition that it is possible to reconfigure a crippled robotic system to achieve mission critical tasks can guide

  3. Experimental designs for small randomised clinical trials: an algorithm for choice

    PubMed Central

    2013-01-01

    Background Small clinical trials are necessary when there are difficulties in recruiting enough patients for conventional frequentist statistical analyses to provide an appropriate answer. These trials are often necessary for the study of rare diseases as well as specific study populations, e.g. children. It has been estimated that there are between 6,000 and 8,000 rare diseases that cover a broad range of diseases and patients. In the European Union these diseases affect up to 30 million people, with about 50% of those affected being children. Therapies for treating these rare diseases need their efficacy and safety evaluated but, due to the small number of potential trial participants, a standard randomised controlled trial is often not feasible. There are a number of alternative trial designs to the usual parallel group design, each of which offers specific advantages, but they also have specific limitations. Thus the choice of the most appropriate design is not simple. Methods PubMed was searched to identify publications about the characteristics of different trial designs that can be used in randomised, comparative small clinical trials. In addition, the contents tables from 11 journals were hand-searched. An algorithm was developed using decision nodes based on the characteristics of the identified trial designs. Results We identified 75 publications that reported the characteristics of 12 randomised, comparative trial designs that can be used for the evaluation of therapies in orphan diseases. The main characteristics and the advantages and limitations of these designs were summarised and used to develop an algorithm that may be used to help select an appropriate design for a given clinical situation. We used examples from publications of given disease-treatment-outcome situations, in which the investigators had used a particular trial design, to illustrate the use of the algorithm for the identification of possible alternative designs. Conclusions The

  4. 46 CFR 162.050-33 - Bilge alarm: Design specification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 6 2010-10-01 2010-10-01 false Bilge alarm: Design specification. 162.050-33 Section....050-33 Bilge alarm: Design specification. (a) This section contains requirements that apply to bilge alarms. (b) Each bilge alarm must be designed to meet the requirements for an oil content meter in §...

  5. 46 CFR 162.050-33 - Bilge alarm: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 6 2011-10-01 2011-10-01 false Bilge alarm: Design specification. 162.050-33 Section....050-33 Bilge alarm: Design specification. (a) This section contains requirements that apply to bilge alarms. (b) Each bilge alarm must be designed to meet the requirements for an oil content meter in §...

  6. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 6 2011-10-01 2011-10-01 false Cargo monitor: Design specification. 162.050-25 Section....050-25 Cargo monitor: Design specification. (a) This section contains requirements that apply to cargo monitors. (b) Each monitor must be designed so that it is calibrated by a means that does not...

  7. Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2015-06-01

    Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. Experiments show the algorithm producing SNNs that effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that could use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design"; extra non-functional neurons were included that, while inefficient, did not hamper its proper functioning.

  8. Optimal robust motion controller design using multiobjective genetic algorithm.

    PubMed

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm-differential evolution.

  9. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using the Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is performed in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, while initial samples are selected according to an orthogonal array. Then global Pareto-optimal solutions are obtained and analysed. The results show that undesirable flow structures, such as the secondary flow on the meridian plane, are diminished or eliminated in the optimized pump.
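
    The ranking step at the heart of NSGA-II, which produces the Pareto fronts mentioned above, can be sketched compactly. Crowding distance, the genetic operators, and the paper's surrogate-assisted evaluation are omitted, and the objective values below are purely illustrative.

```python
def dominates(a, b):
    """Pareto dominance for minimisation: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts:
    front 0 is the non-dominated set, front 1 the non-dominated set
    of the remainder, and so on."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Two objectives to minimise (e.g. a head-loss measure and an
# efficiency deficit) for five candidate impeller designs:
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
fronts = non_dominated_sort(pts)
```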

  10. AFEII Analog Front End Board Design Specifications

    SciTech Connect

    Rubinov, Paul; /Fermilab

    2005-04-01

    This document describes the design of the 2nd iteration of the Analog Front End Board (AFEII), which has the function of receiving charge signals from the Central Fiber Tracker (CFT) and providing digital hit pattern and charge amplitude information from those charge signals. This second iteration is intended to address limitations of the current AFE (referred to as AFEI in this document). These limitations become increasingly deleterious to the performance of the Central Fiber Tracker as instantaneous luminosity increases. The limitations are inherent in the design of the key front end chips on the AFEI board (the SVXIIe and the SIFT) and the architecture of the board itself. The key limitations of the AFEI are: (1) SVX saturation; (2) Discriminator to analog readout cross talk; (3) Tick to tick pedestal variation; and (4) Channel to channel pedestal variation. The new version of the AFE board, AFEII, addresses these limitations by use of a new chip, the TriP-t and by architectural changes, while retaining the well understood and desirable features of the AFEI board.

  11. Algorithm for Designing Nanoscale Supramolecular Therapeutics with Increased Anticancer Efficacy.

    PubMed

    Kulkarni, Ashish; Pandey, Prithvi; Rao, Poornima; Mahmoud, Ayaat; Goldman, Aaron; Sabbisetti, Venkata; Parcha, Shashikanth; Natarajan, Siva Kumar; Chandrasekar, Vineethkrishna; Dinulescu, Daniela; Roy, Sudip; Sengupta, Shiladitya

    2016-09-27

    In the chemical world, evolution is mirrored in the origin of nanoscale supramolecular structures from molecular subunits. The complexity of function acquired in a supramolecular system over a molecular subunit can be harnessed in the treatment of cancer. However, the design of supramolecular nanostructures is hindered by a limited atomistic level understanding of interactions between building blocks. Here, we report the development of a computational algorithm, which we term Volvox after the first multicellular organism, that sequentially integrates quantum mechanical energy-state- and force-field-based models with large-scale all-atomistic explicit water molecular dynamics simulations to design stable nanoscale lipidic supramolecular structures. In one example, we demonstrate that Volvox enables the design of a nanoscale taxane supramolecular therapeutic. In another example, we demonstrate that Volvox can be extended to optimizing the ratio of excipients to form a stable nanoscale supramolecular therapeutic. The nanoscale taxane supramolecular therapeutic exerts greater antitumor efficacy than a clinically used taxane in vivo. Volvox can emerge as a powerful tool in the design of nanoscale supramolecular therapeutics for effective treatment of cancer.

  12. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  13. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm.
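
    The colliding bodies optimizer used above can be sketched in its basic form: candidate designs are bodies, the better half is treated as stationary and the worse half as moving, and a momentum exchange with a decaying coefficient of restitution generates the new designs. The update formulas follow the commonly published CBO formulation, not necessarily this paper's exact implementation, and the sphere objective is a toy stand-in for the seismic design problem.

```python
import random

def cbo(f, dim=4, n=20, iters=60, seed=3):
    """Basic colliding bodies optimization: masses are inversely
    proportional to fitness, each moving body collides with its
    stationary partner, and positions are regenerated from the
    post-collision velocities."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for it in range(iters):
        xs.sort(key=f)                            # better half becomes stationary
        eps = 1.0 - it / iters                    # coefficient of restitution -> 0
        masses = [1.0 / (f(x) + 1e-12) for x in xs]
        half = n // 2
        new = [list(x) for x in xs]
        for i in range(half):
            j = i + half                          # moving partner of stationary body i
            mi, mj = masses[i], masses[j]
            for d in range(dim):
                v = xs[i][d] - xs[j][d]           # moving body's pre-collision velocity
                vi = (mj + eps * mj) * v / (mi + mj)   # stationary body, after collision
                vj = (mj - eps * mi) * v / (mi + mj)   # moving body, after collision
                new[i][d] = xs[i][d] + rng.random() * vi
                new[j][d] = xs[i][d] + rng.random() * vj
        xs = new
    return min(xs, key=f)

sphere = lambda x: sum(v * v for v in x)   # toy objective, not the frame model
best = cbo(sphere)
```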

  14. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    NASA Technical Reports Server (NTRS)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO), and a contingency rendezvous in LLO with the ascent and descent stage in the case of an aborted lunar landing. It is therefore not difficult to see that each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper will describe these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile. Rather, the suite was designed from the start to be general enough that any type of trajectory profile can be flown (i.e., a coelliptic profile, a stable orbit rendezvous profile, an expedited LLO rendezvous profile, etc.), all using the same rendezvous suite of algorithms. Each of these profiles makes use of maneuver types which have been designed with the dual goals of robustness and performance. They are designed to converge quickly under dispersed conditions and to perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these

  15. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.
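The sequential quadratic programming approach singled out in the study can be illustrated with a minimal sketch. SciPy's SLSQP stands in here for the SQP codes named in the abstract (DNCONG, IDESIGN), and the two-bar sizing problem, its load, and its stress limit are hypothetical stand-ins, not the benchmark problems solved in CometBoards.

```python
# Toy structural sizing via SQP: minimize member weight subject to a
# stress limit in each of two bars. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

LOAD, LENGTH, DENSITY, STRESS_LIMIT = 1000.0, 1.0, 1.0, 250.0

def weight(a):                       # objective: total material volume
    return DENSITY * LENGTH * (a[0] + a[1])

def stress_margin(a):                # >= 0 means stress within the limit
    return STRESS_LIMIT - LOAD / np.asarray(a)

res = minimize(weight, x0=[10.0, 10.0],
               constraints=[{"type": "ineq", "fun": stress_margin}],
               bounds=[(1e-3, None)] * 2, method="SLSQP")

print(res.x)   # both areas driven to LOAD/STRESS_LIMIT = 4.0
```

At the optimum both stress constraints are active, the situation the abstract notes when counting active constraints across optimizers.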

  16. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  17. Advanced algorithms for radiographic material discrimination and inspection system design

    NASA Astrophysics Data System (ADS)

    Gilbert, Andrew J.; McDonald, Benjamin S.; Deinert, Mark R.

    2016-10-01

    X-ray and neutron radiography are powerful tools for non-invasively inspecting the interior of objects. However, current methods are limited in their ability to differentiate materials when multiple materials are present, especially within large and complex objects. Past work has demonstrated that the spectral shift that X-ray beams undergo in traversing an object can be used to detect and quantify nuclear materials. The technique uses a spectrally sensitive detector and an inverse algorithm that varies the composition of the object until the X-ray spectrum predicted by X-ray transport matches the one measured. Here we show that this approach can be adapted to multi-mode radiography, with energy integrating detectors, and that the Cramér-Rao lower bound can be used to choose an optimal set of inspection modes a priori. We consider multi-endpoint X-ray radiography alone, or in combination with neutron radiography using deuterium-deuterium (DD) or deuterium-tritium (DT) sources. We show that for an optimal mode choice, the algorithm can improve discrimination between high-Z materials, specifically between tungsten and plutonium, and estimate plutonium mass within a simulated nuclear material storage system to within 1%.

  18. Genetic algorithms for the design of looped irrigation water distribution networks

    NASA Astrophysics Data System (ADS)

Reca, Juan; Martínez, Juan

    2006-05-01

    A new computer model called Genetic Algorithm Pipe Network Optimization Model (GENOME) has been developed with the aim of optimizing the design of new looped irrigation water distribution networks. The model is based on a genetic algorithm method, although relevant modifications and improvements have been implemented to adapt the model to this specific problem. It makes use of the robust network solver EPANET. The model has been tested and validated by applying it to the least cost optimization of several benchmark networks reported in the literature. The results obtained with GENOME have been compared with those found in previous works, obtaining the same results as the best published in the literature to date. Once the model was validated, the optimization of a real complex irrigation network has been carried out to evaluate the potential of the genetic algorithm for the optimal design of large-scale networks. Although satisfactory results have been obtained, some adjustments would be desirable to improve the performance of genetic algorithms when the complexity of the network requires it.
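The genetic-algorithm core of a least-cost pipe-sizing model like GENOME can be sketched in a few dozen lines. This is a hedged illustration only: the diameter catalogue, unit costs, and the crude head-loss proxy below are invented, whereas GENOME evaluates hydraulics with the EPANET solver.

```python
# Toy GA for looped-network pipe sizing: pick a diameter for each of 5
# pipes from a catalogue to minimize cost, penalizing designs whose
# (hypothetical) head loss exceeds a budget. Not GENOME/EPANET.
import random
random.seed(1)

DIAMS = [0.1, 0.15, 0.2, 0.3]                       # m, catalogue
COST = {0.1: 10, 0.15: 18, 0.2: 30, 0.3: 55}        # $/m, invented
N_PIPES, HEAD_BUDGET = 5, 50.0

def head_loss(diams):            # crude Hazen-Williams-like proxy
    return sum(1.0 / d**4.87 for d in diams) * 0.01

def fitness(ind):                # cost plus infeasibility penalty
    penalty = max(0.0, head_loss(ind) - HEAD_BUDGET) * 1e3
    return sum(COST[d] for d in ind) + penalty

def evolve(pop_size=40, gens=60):
    pop = [[random.choice(DIAMS) for _ in range(N_PIPES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_PIPES)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # mutation
                child[random.randrange(N_PIPES)] = random.choice(DIAMS)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The penalty-on-infeasibility encoding is one of the standard adaptations the abstract alludes to when it says the GA was modified for this specific problem.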

  19. Controller design based on μ analysis and PSO algorithm.

    PubMed

    Lari, Ali; Khosravi, Alireza; Rajabi, Farshad

    2014-03-01

In this paper an evolutionary algorithm is employed to address the controller design problem based on μ analysis. Conventional solutions to the μ synthesis problem, such as the D-K iteration method, often lead to high-order, impractical controllers. In the proposed approach, a constrained optimization problem based on μ analysis is defined and then an evolutionary approach is employed to solve the optimization problem. The goal is to achieve a more practical controller with lower order. A benchmark system named the two-tank system is considered to evaluate the performance of the proposed approach. Simulation results show that the proposed controller performs more effectively than the high-order H(∞) controller and responds comparably to the high-order D-K iteration controller, the common solution to the μ synthesis problem.
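A bare-bones particle swarm optimizer of the kind used to tune low-order controller parameters can be sketched as follows. The μ-analysis cost is replaced here by a stand-in sphere objective, and the swarm size, inertia, and acceleration constants are conventional textbook values, not the paper's settings; in the paper's setting f(x) would evaluate the constrained robustness measure for controller gains x.

```python
# Minimal PSO: each particle tracks its personal best; the swarm
# tracks a global best; velocities blend inertia, cognitive pull,
# and social pull. The objective is a placeholder, not a mu cost.
import numpy as np
rng = np.random.default_rng(3)

def f(x):                          # stand-in for the mu-based cost
    return float(np.sum(x**2))

n, dim, iters = 30, 4, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([f(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([f(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f(gbest))                    # near zero after convergence
```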

  20. Sampling-based algorithms for analysis and design of hybrid and embedded systems

    NASA Astrophysics Data System (ADS)

    Bhatia, Amit

This dissertation considers the problem of safety analysis of hybrid and embedded systems using sampling-based incremental search algorithms. The safety specifications are a set of conditions that the states (or the trajectories) of the system must satisfy for the system to be considered safe. The safety analysis problem is known to be undecidable for dynamical systems. Most of the existing approaches for analyzing the safety specifications of a dynamical system are liable to give inconclusive results in general. This is because each of these approaches can either only construct a safety certificate for a safe system, or a feasible counterexample for an unsafe system. Sampling-based incremental search algorithms have been very successful for motion planning problems in robotics and the counterexample generation problem for dynamical systems. In this dissertation, we propose a novel approach that uses sampling-based incremental search algorithms to search for feasible counterexamples to safety and uses the sampled trajectories to construct a safety certificate in case no counterexample is found. We do so by introducing a notion of completeness for such algorithms that we call resolution completeness. A sampling-based algorithm is called resolution-complete for safety analysis of a given system if, for any given resolution of controls, it is guaranteed to terminate, producing either a feasible counterexample to safety or a certificate that guarantees safe behavior of the system at the given resolution. We propose a variety of sampling-based resolution-complete algorithms for safety analysis of hybrid and embedded systems. The algorithms construct feasible trajectories at increasing levels of resolution of the controls and use structural properties of the system to make reachability claims for states in the neighborhood of the constructed trajectories. Conditions guaranteeing completeness of the proposed algorithms are derived for the case of

  1. Gateway design specification for fiber optic local area networks

    NASA Technical Reports Server (NTRS)

    1985-01-01

    This is a Design Specification for a gateway to interconnect fiber optic local area networks (LAN's). The internetworking protocols for a gateway device that will interconnect multiple local area networks are defined. This specification serves as input for preparation of detailed design specifications for the hardware and software of a gateway device. General characteristics to be incorporated in the gateway such as node address mapping, packet fragmentation, and gateway routing features are described.

  2. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. A LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
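The LMS-based time-delay estimation the abstract describes can be illustrated on synthetic data: an adaptive FIR filter is trained to map one sensor's signal onto the other's, and the index of the dominant tap estimates the delay. This is a minimal sketch, not the thesis's LMSTDE lead-lag model, and the signals, tap count, and step size below are invented.

```python
# Minimal LMS time-delay estimation between two sensor channels.
# Sensor 2 is a 5-sample-delayed, noisy copy of sensor 1; after
# adaptation the largest filter tap sits at the delay index.
import numpy as np
rng = np.random.default_rng(0)

true_delay, n_taps, mu = 5, 16, 0.01
x = rng.standard_normal(4000)               # sensor 1
d = np.roll(x, true_delay)                  # sensor 2: delayed copy
d += 0.05 * rng.standard_normal(x.size)     # measurement noise

w = np.zeros(n_taps)
for n in range(n_taps, x.size):
    u = x[n - n_taps + 1 : n + 1][::-1]     # most recent sample first
    e = d[n] - w @ u                        # error against sensor 2
    w += 2 * mu * e * u                     # LMS weight update

est_delay = int(np.argmax(np.abs(w)))
print(est_delay)                            # -> 5
```

With four sensors, pairwise delays estimated this way feed the bearing computation, which is the role the GCC/Roth processor plays in the thesis.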

  3. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

4. UXO Engineering Design. Technical Specification and Conceptual Design

    SciTech Connect

    Beche, J-F.; Doolittle, L.; Greer, J.; Lafever, R.; Radding, Z.; Ratti, A.; Yaver, H.; Zimmermann, S.

    2005-04-23

The design and fabrication of the UXO detector has numerous challenges and is an important component of the success of this study. This section describes the overall engineering approach, as well as some of the technical details that brought us to the present design. In general, an array of sensor coils measures the signal generated by the UXO object in response to a stimulation provided by the driver coil. The information related to the location, shape and properties of the object is derived from the analysis of the measured data. Each sensor coil is instrumented with a waveform digitizer operating at a nominal digitization rate of 100 kSamples per second. The sensor coils record both the large transient pulse of the driver coil and the UXO object response pulse. The latter is smaller in amplitude and must be extracted from the large transient signal. The resolution required is 16 bits over a dynamic range of at least 140 dB. The useful signal bandwidth of the application extends from DC to 40 kHz. Low distortion in each component is crucial in order to maintain excellent linearity over the full dynamic range and to minimize the calibration procedure. The electronics must be made as compact as possible so that its metallic parts contribute a minimal signature to the response. Also, because of a field-portability requirement, the power consumption of the instrument must be kept as low as possible. The theory and results of numerical and experimental studies that led to the proof-of-principle multitransmitter-multireceiver Active ElectroMagnetic (AEM) system, which can not only accurately detect but also characterize and discriminate UXO targets, are summarized in LBNL report-53962: "Detection and Classification of Buried Metallic Objects, UX-1225".

  5. An advancing front Delaunay triangulation algorithm designed for robustness

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1992-01-01

    A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.
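The advancing-front point placement is the paper's contribution; the Delaunay triangulation step itself can be illustrated with an off-the-shelf routine. The sketch below triangulates boundary points of a unit square plus interior points using SciPy; the point set is invented and no advancing-front logic is implemented.

```python
# Delaunay triangulation of a 2-D point set: fixed boundary points on
# a unit square plus random interior points. In an advancing-front
# scheme, interior points would instead be placed where the front
# demands a new, well-shaped element.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
boundary = np.array([[0, 0], [1, 0], [1, 1], [0, 1],
                     [0.5, 0], [1, 0.5], [0.5, 1], [0, 0.5]])
interior = rng.uniform(0.2, 0.8, size=(20, 2))
pts = np.vstack([boundary, interior])

tri = Delaunay(pts)
print(tri.simplices.shape)   # (n_triangles, 3): vertex indices per triangle
```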

  6. Transitioning from conceptual design to construction performance specification

    NASA Astrophysics Data System (ADS)

    Jeffers, Paul; Warner, Mark; Craig, Simon; Hubbard, Robert; Marshall, Heather

    2012-09-01

On successful completion of a conceptual design review by a funding agency or customer, there is a transition phase before construction contracts can be placed. The nature of this transition phase depends on the Project's approach to construction and the particular subsystem being considered. There are generically two approaches: project retention of design authority and issuance of build-to-print contracts, or issuance of subsystem performance specifications with controlled interfaces. This paper relates to the latter, where a proof of concept (conceptual or reference design) is translated into performance-based sub-system specifications for competitive tender. This translation is not a straightforward process and there are a number of different issues to consider. This paper deals primarily with the telescope mount and enclosure subsystems. The main subjects considered in this paper are: • Typical status of design at Conceptual Design Review compared with the desired status of Specifications and Interface Control Documents at Request for Quotation. • Options for capture and tracking of system requirements flow down from science / operating requirements and sub-system requirements, and functional requirements derived from reference design. • Requirements that may come specifically from the contracting approach. • Methods for effective use of reference design work without compromising a performance based specification. • Management of project team's expectations relating to design. • Effects on cost estimates from reference design to actual. This paper is based on experience and lessons learned through this process on both the VISTA and the ATST projects.

  7. Target Impact Detection Algorithm Using Computer-aided Design (CAD) Model Geometry

    DTIC Science & Technology

    2014-09-01

Technical Report ARMET-TR-13024, "Target Impact Detection Algorithm Using Computer-Aided Design (CAD) Model Geometry." This report documents a method and algorithm to export geometry from a three-dimensional, computer-aided design (CAD) model in a format that can be

  8. Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    NASA Technical Reports Server (NTRS)

    Keller, Richard M. (Editor); Barstow, David; Lowry, Michael R.; Tong, Christopher H.

    1992-01-01

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across the particular software design domain include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interface.

  9. SEPAC flight software detailed design specifications, volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

The detailed design specifications (as built) for the SEPAC Flight Software are defined. The design includes a description of the total software system and of each individual module within the system. The design specifications describe the decomposition of the software system into its major components. The system structure is expressed in the following forms: the control-flow hierarchy of the system, the data-flow structure of the system, the task hierarchy, the memory structure, and the software to hardware configuration mapping. The component design description includes details on the following elements: register conventions, module (subroutine) invocation, module functions, interrupt servicing, data definitions, and database structure.

  10. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  11. An implementable algorithm for the optimal design centering, tolerancing, and tuning problem

    SciTech Connect

    Polak, E.

    1982-05-01

    An implementable master algorithm for solving optimal design centering, tolerancing, and tuning problems is presented. This master algorithm decomposes the original nondifferentiable optimization problem into a sequence of ordinary nonlinear programming problems. The master algorithm generates sequences with accumulation points that are feasible and satisfy a new optimality condition, which is shown to be stronger than the one previously used for these problems.

  12. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  13. Advanced algorithms for radiographic material discrimination and inspection system design

    SciTech Connect

    Gilbert, Andrew J.; McDonald, Benjamin S.; Deinert, Mark R.

    2016-10-01

X-ray and neutron radiography are powerful tools for non-invasively inspecting the interior of objects. Materials can be discriminated by noting how the radiographic signal changes with variations in the input spectrum or inspection mode. However, current methods are limited in their ability to differentiate when multiple materials are present, especially within large and complex objects. With X-ray radiography, the inability to distinguish materials of a similar atomic number is especially problematic. To overcome these critical limitations, we augmented our existing inverse problem framework with two important expansions: 1) adapting the previous methodology for use with multi-modal radiography and energy-integrating detectors, and 2) applying the Cramér-Rao lower bound to select an optimal set of inspection modes for a given application a priori. Adding these expanded capabilities to our algorithmic framework with adaptive regularization, we observed improved discrimination between high-Z materials, specifically plutonium and tungsten. The combined system can estimate plutonium mass within our simulated system to within 1%. Three types of inspection modes were modeled: multi-endpoint X-ray radiography alone; in combination with neutron radiography using deuterium-deuterium (DD); or in combination with neutron radiography using deuterium-tritium (DT) sources.
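The a-priori mode selection via the Cramér-Rao lower bound can be sketched with a toy linear measurement model: for each candidate set of inspection modes, compute the Fisher information and keep the set with the smallest variance bound on the unknown material thicknesses. The attenuation coefficients, mode names, and noise level below are invented for illustration, not the paper's physics.

```python
# CRLB-based mode selection sketch. Measurement model per mode m:
#   y_m = mu_A(m)*tA + mu_B(m)*tB + Gaussian noise
# so the Jacobian row for mode m is (mu_A, mu_B) and the Fisher
# information is J^T J / sigma^2; the CRLB is its inverse.
import numpy as np

# per-mode attenuation coefficients for the two unknown thicknesses
# (material A, material B) -- illustrative numbers only
MU = {"xray_6MV": (0.50, 0.48),    # similar mu -> poor discrimination
      "xray_9MV": (0.40, 0.36),
      "neutron_DD": (0.10, 0.90)}  # dissimilar mu -> good discrimination

SIGMA = 0.01   # noise std-dev on each measurement (invented)

def crlb(modes):
    J = np.array([MU[m] for m in modes])
    fisher = J.T @ J / SIGMA**2
    return float(np.trace(np.linalg.inv(fisher)))  # sum of variance bounds

for modes in [("xray_6MV", "xray_9MV"), ("xray_6MV", "neutron_DD")]:
    print(modes, crlb(modes))
```

The two X-ray modes have nearly parallel Jacobian rows, so their Fisher matrix is nearly singular and the bound is large; adding the neutron mode conditions the problem, which mirrors the paper's finding that a combined X-ray/neutron mode set discriminates high-Z materials better.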

  14. A Learning Design Ontology Based on the IMS Specification

    ERIC Educational Resources Information Center

    Amorim, Ricardo R.; Lama, Manuel; Sanchez, Eduardo; Riera, Adolfo; Vila, Xose A.

    2006-01-01

    In this paper, we present an ontology to represent the semantics of the IMS Learning Design (IMS LD) specification, a meta-language used to describe the main elements of the learning design process. The motivation of this work relies on the expressiveness limitations found on the current XML-Schema implementation of the IMS LD conceptual model. To…

  15. Research on Knowledge Based Programming and Algorithm Design.

    DTIC Science & Technology

    1981-08-01

"Prime finding" (including the Sieve of Eratosthenes and linear-time prime finding); this research is described in sections 6, 7, 8, and 9. ... algorithm and several variants on prime finding, including the Sieve of Eratosthenes and a more sophisticated linear-time algorithm. In these additional

  16. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution in a multiphase, multiproduct reverse supply chain that addresses defects returned to original manufacturers, and develops hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. A case study of a multiphase, multiproduct reverse supply chain network demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057

  17. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution in a multiphase, multiproduct reverse supply chain that addresses defects returned to original manufacturers, and develops hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. A case study of a multiphase, multiproduct reverse supply chain network demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods.

  18. The potential of genetic algorithms for conceptual design of rotor systems

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Wells, Valana L.; Laananen, David H.

    1993-01-01

    The capabilities of genetic algorithms as a non-calculus based, global search method make them potentially useful in the conceptual design of rotor systems. Coupling reasonably simple analysis tools to the genetic algorithm was accomplished, and the resulting program was used to generate designs for rotor systems to match requirements similar to those of both an existing helicopter and a proposed helicopter design. This provides a comparison with the existing design and also provides insight into the potential of genetic algorithms in design of new rotors.

  19. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C

    2015-03-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization.

  20. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  1. Introduction to Psychology and Leadership. Design Specifications Document Including Specifications for Product and Course Design System Management and Evaluation Procedures.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The design specifications for the United States Naval Academy leadership course developed by Westinghouse Learning Corporation are presented in this report, covering course system design, management, and evaluation. EM 010 418 through EM 010 447 and EM 010 451 through EM 010 512 are related documents, with the final report appearing under EM 010…

  2. Designing Domain-Specific HUMS Architectures: An Automated Approach

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Agarwal, Neha; Kumar, Pramod; Sundaram, Parthiban

    2004-01-01

The HUMS automation system automates the design of HUMS architectures. The automated design process involves selection of solutions from a large space of designs as well as pure synthesis of designs; the objective is to efficiently search for or synthesize designs (or parts of designs) in the database and to integrate them to form the entire system design. The automation system adopts two approaches to produce the designs: (a) a bottom-up approach and (b) a top-down approach. Both approaches are endowed with a suite of qualitative and quantitative techniques that enable (a) the selection of matching component instances, (b) the determination of design parameters, (c) the evaluation of candidate designs at the component and system levels, (d) the performance of cost-benefit analyses, and (e) the performance of trade-off analyses. In short, the automation system attempts to capitalize on knowledge developed from years of experience in engineering, system design, and operation of HUMS systems in order to economically produce optimal, domain-specific designs.

  3. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction methods, and design algorithms for the evaluation of IUE high dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
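The Voigt profile mentioned above is the convolution of a Gaussian core with Lorentzian wings. A common computational shortcut is the pseudo-Voigt approximation, a weighted sum of the two shapes; the sketch below illustrates that idea only, with the mixing parameter `eta` and the widths chosen arbitrarily (they are not the IUE/SWP mask values):

```python
import math

def gaussian(x, x0, sigma):
    """Normalized Gaussian line profile (good for the core)."""
    return math.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lorentzian(x, x0, gamma):
    """Normalized Lorentzian line profile, gamma = HWHM (good for the wings)."""
    return gamma / (math.pi * ((x - x0) ** 2 + gamma ** 2))

def pseudo_voigt(x, x0, sigma, gamma, eta):
    """Pseudo-Voigt: eta-weighted mix of Lorentzian wings and Gaussian core,
    a cheap stand-in for the true Gaussian*Lorentzian convolution."""
    return eta * lorentzian(x, x0, gamma) + (1 - eta) * gaussian(x, x0, sigma)
```

Because the Lorentzian decays only as 1/x², the mixed profile keeps heavier wings than a pure Gaussian of the same core width, which is the behavior the abstract attributes to the observed line shapes.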

  4. Design principles of regulatory networks: searching for the molecular algorithms of the cell.

    PubMed

    Lim, Wendell A; Lee, Connie M; Tang, Chao

    2013-01-24

    A challenge in biology is to understand how complex molecular networks in the cell execute sophisticated regulatory functions. Here we explore the idea that there are common and general principles that link network structures to biological functions, principles that constrain the design solutions that evolution can converge upon for accomplishing a given cellular task. We describe approaches for classifying networks based on abstract architectures and functions, rather than on the specific molecular components of the networks. For any common regulatory task, can we define the space of all possible molecular solutions? Such inverse approaches might ultimately allow the assembly of a design table of core molecular algorithms that could serve as a guide for building synthetic networks and modulating disease networks.

  5. Design Principles of Regulatory Networks: Searching for the Molecular Algorithms of the Cell

    PubMed Central

    Lim, Wendell A.; Lee, Connie M.; Tang, Chao

    2013-01-01

    A challenge in biology is to understand how complex molecular networks in the cell execute sophisticated regulatory functions. Here we explore the idea that there are common and general principles that link network structures to biological functions, principles that constrain the design solutions that evolution can converge upon for accomplishing a given cellular task. We describe approaches for classifying networks based on abstract architectures and functions, rather than on the specific molecular components of the networks. For any common regulatory task, can we define the space of all possible molecular solutions? Such inverse approaches might ultimately allow the assembly of a design table of core molecular algorithms that could serve as a guide for building synthetic networks and modulating disease networks. PMID:23352241

  6. Design of asynchronous phase detection algorithms optimized for wide frequency response.

    PubMed

    Crespo, Daniel; Quiroga, Juan Antonio; Gomez-Pedrero, Jose Antonio

    2006-06-10

    In many fringe pattern processing applications the local phase has to be obtained from a sinusoidal irradiance signal with unknown local frequency. This process is called asynchronous phase demodulation. Existing algorithms for asynchronous phase detection, or asynchronous algorithms, have been designed to yield no algebraic error in the recovered value of the phase for any signal frequency. However, each asynchronous algorithm has a characteristic frequency response curve. Existing asynchronous algorithms present a range of frequencies with low response, reaching zero for particular values of the signal frequency. For real noisy signals, low response implies a low signal-to-noise ratio in the recovered phase and therefore unreliable results. We present a new Fourier-based methodology for designing asynchronous algorithms with any user-defined frequency response curve and known limit of algebraic error. We show how asynchronous algorithms designed with this method can have better properties for real conditions of noise and signal frequency variation.
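The paper's Fourier-based design methodology is not reproduced here, but the classic Carré algorithm is a concrete example of asynchronous demodulation: it recovers the local phase from four samples taken with an unknown (but constant) phase step. A minimal sketch, assuming the usual sample convention of phases phi ± delta/2 and phi ± 3·delta/2:

```python
import math

def carre_phase(i1, i2, i3, i4):
    """Carré algorithm: recover local phase phi from four irradiance
    samples with an unknown constant phase step delta.
    Assumes samples at phi - 3d/2, phi - d/2, phi + d/2, phi + 3d/2."""
    d_inner = i2 - i3            # proportional to sin(phi) * sin(delta/2)
    d_outer = i1 - i4            # proportional to sin(phi) * sin(3*delta/2)
    num_sq = (3 * d_inner - d_outer) * (d_inner + d_outer)
    # sign of (d_inner + d_outer) carries the sign of sin(phi)
    num = math.copysign(math.sqrt(max(num_sq, 0.0)), d_inner + d_outer)
    den = (i2 + i3) - (i1 + i4)  # proportional to cos(phi)
    return math.atan2(num, den)

# Synthetic fringe signal with an arbitrary (unknown to the algorithm) step
a, b, delta, phi = 2.0, 1.0, 1.0, 0.7
frames = [a + b * math.cos(phi + k * delta / 2) for k in (-3, -1, 1, 3)]
```

For a noiseless signal the recovered value equals `phi` for any step in (0, pi), which is exactly the frequency-independence ("no algebraic error") property the abstract discusses; the frequency response under noise is what distinguishes one asynchronous algorithm from another.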

  7. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent if reviewing fine anatomical structures such as ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered as acceptably aligned.

  8. Design methodology for optimal hardware implementation of wavelet transform domain algorithms

    NASA Astrophysics Data System (ADS)

    Johnson-Bey, Charles; Mickens, Lisa P.

    2005-05-01

    The work presented in this paper lays the foundation for the development of an end-to-end system design methodology for implementing wavelet domain image/video processing algorithms in hardware using Xilinx field programmable gate arrays (FPGAs). With the integration of the Xilinx System Generator toolbox, this methodology will allow algorithm developers to design and implement their code using the familiar MATLAB/Simulink development environment. By using this methodology, algorithm developers will not be required to become proficient in the intricacies of hardware design, thus reducing the design cycle and time-to-market.

  9. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks †

    PubMed Central

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model WSAN topology as a dynamic graph and transform PCR to a corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.

  10. Fiber-optic probe design and optical property recovery algorithm for optical biopsy of brain tissue.

    PubMed

    Cappon, Derek J; Farrell, Thomas J; Fang, Qiyin; Hayward, Joseph E

    2013-10-01

    Optical biopsy techniques offer a minimally invasive, real-time alternative to traditional biopsy and pathology during tumor resection surgery. Diffuse reflectance spectroscopy (DRS) is a commonly used technique in optical biopsy. Optical property recovery from spatially resolved DRS data allows quantification of the scattering and absorption properties of tissue. Monte Carlo simulation methods were used to evaluate a unique fiber-optic probe design for a DRS instrument to be used specifically for optical biopsy of the brain. The probe diameter was kept to a minimum to allow usage in small surgical cavities at least 1 cm in diameter. Simulations showed that the close proximity of fibers to the edge of the probe resulted in boundary effects due to reflection of photons from the surrounding air-tissue interface. A new algorithm for rapid optical property recovery was developed that accounts for this reflection and therefore overcomes these effects. The parameters of the algorithm were adjusted for use over the wide range of optical properties encountered in brain tissue, and its precision was evaluated by subjecting it to random noise. This algorithm can be adapted to work with any probe geometry to allow optical property recovery in small surgical cavities.

  11. Design specifications for manufacturability of MCM-C multichip modules

    SciTech Connect

    Allen, C.; Blazek, R.; Desch, J.; Elarton, J.; Kautz, D.; Markley, D.; Morgenstern, H.; Stewart, R.; Warner, L.

    1995-06-01

    The scope of this document is to establish design guidelines for electronic circuitry packaged as multichip modules of the ceramic substrate variety, although many of these guidelines are applicable to other types of multichip modules. The guidelines begin with prerequisite information which must be developed between customer and designer of the multichip module. The core of the guidelines focuses on the many considerations that must be addressed during the multichip module design. The guidelines conclude with the resulting deliverables from the design which satisfy customer requirements and/or support the multichip module fabrication and testing processes. Considerable supporting information, checklists, and design constraints are captured in specific appendices and used as reference information in the main body text. Finally some real examples of multichip module design are presented.

  12. The design and implementation of MPI master-slave parallel genetic algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Shuping; Cheng, Yanliu

    2013-03-01

In this paper, an MPI master-slave parallel genetic algorithm is implemented by analyzing the basic genetic algorithm and the parallel MPI programming model, and by building a Linux cluster. The algorithm is tested on a maximum-value problem (Rosenbrock's function), and the factors influencing the master-slave parallel genetic algorithm are derived from analysis of the test data. The experimental data show that a balanced hardware configuration and software design optimization can improve the performance of the master-slave parallel genetic algorithm in a complex computing environment.
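In the master-slave pattern the master runs the genetic loop and farms the (expensive) fitness evaluations out to slave processes. A minimal serial sketch of such a loop on the Rosenbrock function is shown below; the population size, mutation scale, and elite fraction are illustrative, and the plain `sorted`/evaluation step marks the point that an MPI version (e.g. with mpi4py) would replace with a scatter/gather of the population:

```python
import random

def rosenbrock(p):
    """2-D Rosenbrock function; global minimum 0 at (1, 1)."""
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def evolve(pop_size=40, generations=60, seed=0):
    """Elitist GA sketch. The fitness evaluations inside sorted() are the
    part a master would distribute to slave processes under MPI."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=rosenbrock)   # master gathers fitness values
        elite = scored[: pop_size // 4]        # keep the best quarter
        pop = list(elite)
        while len(pop) < pop_size:             # refill with mutated elite copies
            x, y = rng.choice(elite)
            pop.append((x + rng.gauss(0, 0.1), y + rng.gauss(0, 0.1)))
    return min(pop, key=rosenbrock)
```

Because the elites are carried over unchanged, the best fitness in the population is non-increasing across generations, which makes the speedup of parallel evaluation the main design question rather than convergence.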

  13. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  14. Force limit specifications vs. design limit loads in vibration testing

    NASA Technical Reports Server (NTRS)

    Chang, K. Y.

    2000-01-01

    The purpose of the work presented herein is to discuss the results of force limit notching during vibration testing with respect to the traditional limit load design criteria. By using a single-degree-of-freedom (SDOF) system approach, this work shows that with an appropriate force specification the notched response due to force limiting will result in loads comparable with the structural design limit criteria.

  15. Using a Genetic Algorithm to Design Nuclear Electric Spacecraft

    NASA Technical Reports Server (NTRS)

    Pannell, William P.

    2003-01-01

The basic approach to designing nuclear electric spacecraft is to generate a group of candidate designs, evaluate how "fit" the designs are, and carry the best designs forward to the next generation. Some designs are eliminated, while others are randomly modified and carried forward.

  16. Utility of gene-specific algorithms for predicting pathogenicity of uncertain gene variants

    PubMed Central

    Lyon, Elaine; Williams, Marc S; Narus, Scott P; Facelli, Julio C; Mitchell, Joyce A

    2011-01-01

    The rapid advance of gene sequencing technologies has produced an unprecedented rate of discovery of genome variation in humans. A growing number of authoritative clinical repositories archive gene variants and disease phenotypes, yet there are currently many more gene variants that lack clear annotation or disease association. To date, there has been very limited coverage of gene-specific predictors in the literature. Here the evaluation is presented of “gene-specific” predictor models based on a naïve Bayesian classifier for 20 gene–disease datasets, containing 3986 variants with clinically characterized patient conditions. The utility of gene-specific prediction is then compared with “all-gene” generalized prediction and also with existing popular predictors. Gene-specific computational prediction models derived from clinically curated gene variant disease datasets often outperform established generalized algorithms for novel and uncertain gene variants. PMID:22037892
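The gene-specific approach trains one classifier per gene rather than pooling all variants into a single "all-gene" model. The toy sketch below illustrates why that can matter, using a minimal Bernoulli naive Bayes written from scratch; the gene names, the single binary feature, and the data are invented for illustration and bear no relation to the study's 20 curated datasets:

```python
import math

class BernoulliNB:
    """Minimal Bernoulli naive Bayes with Laplace smoothing."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        n_feat = len(X[0])
        self.p = {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            # Laplace-smoothed P(feature_j = 1 | class c)
            self.p[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                         for j in range(n_feat)]
        return self

    def predict(self, x):
        def log_post(c):
            s = math.log(self.prior[c])
            for j, v in enumerate(x):
                s += math.log(self.p[c][j] if v else 1 - self.p[c][j])
            return s
        return max(self.classes, key=log_post)

# Toy data: the same feature implies opposite labels in the two genes,
# so a single pooled ("all-gene") model could not fit both at once.
data = {
    "GENE_A": ([[1], [1], [0], [0]], ["path", "path", "benign", "benign"]),
    "GENE_B": ([[1], [1], [0], [0]], ["benign", "benign", "path", "path"]),
}
models = {gene: BernoulliNB().fit(X, y) for gene, (X, y) in data.items()}
```

Training per gene lets each model learn its own feature-to-pathogenicity mapping, which is the intuition behind gene-specific predictors outperforming generalized ones on such data.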

  17. Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dinc, Ali

    2016-09-01

In this study, an in-house code was developed for the optimization of selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) by employing an elitist genetic algorithm. First, preliminary sizing of the UAV and its turboprop engine was performed by the code for a given mission profile. Secondly, single- and multi-objective optimizations were carried out for selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In single-objective optimization, as the first case, UAV loiter time was improved by 17.5% over the baseline within the given constraints on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% over the baseline. In the multi-objective case, where the previous two objectives are considered together, loiter time and specific power were increased by 14.2% and 9.7% over the baseline, respectively, for the same constraints.

  18. Specification, Design, and Analysis of Advanced HUMS Architectures

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    2004-01-01

    During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas. They are: (a) To improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) To improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) To evaluate the software architecture. 2) We have defined a new architectural language called HADL or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using the currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) Selection of solutions from a large space of designs; (b) Synthesis of designs. However, the automation process is not an absolute Artificial Intelligence (AI) approach though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. 
Since searching is adopted as the main technique, the challenges involved are: (a) To minimize the effort in searching the database where a very large number of possibilities exist; (b) To develop representations that could conveniently allow us to depict design knowledge evolved over many years; (c) To capture the required information that aid the

  19. Thermoluminescence curves simulation using genetic algorithm with factorial design

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-05-01

The evolutionary approach is an effective optimization tool for numeric analysis of thermoluminescence (TL) processes to assess the microparameters of kinetic models and to determine their effects on the shape of TL peaks. In this paper, a procedure for tuning a genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows choosing the intrinsic mechanisms of the evolutionary operators that provide the most efficient algorithm performance. The proposed method is tested on the “one trap-one recombination center” (OTOR) model as an example, and its advantages for the approximation of experimental TL curves are shown.

  20. NASA software specification and evaluation system design, part 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

A survey and analysis of the existing methods, tools, and techniques employed in the development of software are presented, along with recommendations for the construction of reliable software. Functional designs for the software specification language and the database verifier are presented.

  1. Specifications for the Design of the School Site.

    ERIC Educational Resources Information Center

    Brubaker, C. William

    1986-01-01

The school site design is now seen as part of community planning. Recreational, cultural, social, and educational facilities for adults are included in educational specifications. Three illustrations and two photographs demonstrate attractive school sites. Site area requirements for a new high school in Santa Fe, New Mexico, are summarized. (MLF)

  2. A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns.

    PubMed

    Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim

    2015-01-01

The aim of this study was to present a new training algorithm using artificial neural networks called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO) applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force which are used as input information for the neural networks in gender-specific gait classification. The classification performance between MOBJ-LASSO (97.4%) and the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than the MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.

  3. DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...

  4. Efficient multi-value connected component labeling algorithm and its ASIC design

    NASA Astrophysics Data System (ADS)

    Sang, Hongshi; Zhang, Jing; Zhang, Tianxu

    2007-12-01

An efficient connected component labeling algorithm for multi-value images is proposed in this paper. The algorithm is simple and regular, making it well suited to hardware design. A one-dimensional array is used to store equivalence pairs. The record organization of the equivalence table makes it easy to find the minimum equivalent label and reduces the time spent processing the equivalence table. A pipelined architecture of the algorithm is described to enhance system performance.
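The ASIC design itself is not reproduced here, but the underlying technique can be sketched in software: a classic two-pass labeling where a one-dimensional parent array stores the label equivalences (union-find), and "multi-value" means pixels connect only when they share the same value. This is a generic illustration of the idea, with 4-connectivity assumed:

```python
def label_components(img):
    """Two-pass connected-component labeling for a multi-value image.
    Pixels are 4-connected only when they share the same value; a 1-D
    parent array records label equivalences (union-find style)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                # parent[k] = equivalent label of k; index 0 unused
    def find(k):
        while parent[k] != k:
            k = parent[k]
        return k
    next_label = 1
    for i in range(h):
        for j in range(w):
            up = labels[i - 1][j] if i and img[i - 1][j] == img[i][j] else 0
            left = labels[i][j - 1] if j and img[i][j - 1] == img[i][j] else 0
            if not up and not left:            # new component starts here
                parent.append(next_label)
                labels[i][j] = next_label
                next_label += 1
            else:
                labels[i][j] = min(l for l in (up, left) if l)
                if up and left and find(up) != find(left):
                    # record the equivalence; keep the smaller root
                    parent[max(find(up), find(left))] = min(find(up), find(left))
    # second pass: replace each provisional label by its root
    for i in range(h):
        for j in range(w):
            labels[i][j] = find(labels[i][j])
    return labels
```

The single raster scan plus a small equivalence array is what makes this scheme pipeline-friendly: each pixel needs only its up and left neighbors, so the labeler can stream the image row by row.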

  5. A design guide and specification for small explosive containment structures

    SciTech Connect

    Marchand, K.A.; Cox, P.A.; Polcyn, M.A.

    1994-12-01

The design of structural containments for testing small explosive devices requires the designer to consider the various aspects of the explosive loading, i.e., shock and gas or quasistatic pressure. Additionally, if the explosive charge has the potential of producing damaging fragments, provisions must be made to arrest the fragments. This may require that the explosive be packed in a fragment attenuating material, which also will affect the loads predicted for containment response. Material also may be added just to attenuate shock, in the absence of fragments. Three charge weights are used in the design. The actual charge is used to determine a design fragment. Blast loads are determined for a "design charge", defined as 125% of the operational charge in the explosive device. No yielding is permitted at the design charge weight. Blast loads are also determined for an over-charge, defined as 200% of the operational charge in the explosive device. Yielding, but no failure, is permitted at this over-charge. This guide emphasizes the calculation of loads and fragments for which the containment must be designed. The designer has the option of using simplified or complex design-analysis methods. Examples in the guide use readily available single degree-of-freedom (sdof) methods, plus static methods for equivalent dynamic loads. These are the common methods for blast resistant design. Some discussion of more complex methods is included. Generally, the designer who chooses more complex methods must be fully knowledgeable in their use and limitations. Finally, newly fabricated containments initially must be proof tested to 125% of the operational load and then inspected at regular intervals. This specification provides guidance for design, proof testing, and inspection of small explosive containment structures.

  6. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14 vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of mean square error by subject and activity suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842
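A branched algorithm of this general kind routes each measurement epoch to whichever sensor regression is most informative: the accelerometer branch during clear movement, the heart-rate branch when HR is elevated without much movement. The sketch below shows the branching structure only; every threshold and regression coefficient is an invented placeholder, not the study's calibration:

```python
def predict_paee(hr, acc,
                 acc_threshold=0.5, hr_threshold=100,
                 acc_coef=(2.0, 1.5), hr_coef=(0.05, -3.0)):
    """Branched PAEE estimate for one 1-minute epoch (illustrative only).
    hr: heart rate in bpm; acc: accelerometer counts (arbitrary units)."""
    if acc > acc_threshold:           # clear movement: trust the ACC regression
        return acc_coef[0] * acc + acc_coef[1]
    if hr > hr_threshold:             # elevated HR with little movement
        return hr_coef[0] * hr + hr_coef[1]
    return 1.0                        # resting branch: flat floor estimate
```

Optimizing such a model "post hoc" means tuning the thresholds and branch regressions against criterion PAEE, which is where the between-subject versus between-activity error trade-off described above arises.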

  7. An evaluation of Z-transform algorithms for identifying subject-specific abnormalities in neuroimaging data.

    PubMed

    Mayer, Andrew R; Dodd, Andrew B; Ling, Josef M; Wertz, Christopher J; Shaff, Nicholas A; Bedrick, Edward J; Viamonte, Carlo

    2017-03-20

    The need for algorithms that capture subject-specific abnormalities (SSA) in neuroimaging data is increasingly recognized across many neuropsychiatric disorders. However, the effects of initial distributional properties (e.g., normal versus non-normally distributed data), sample size, and typical preprocessing steps (spatial normalization, blurring kernel and minimal cluster requirements) on SSA remain poorly understood. The current study evaluated the performance of several commonly used z-transform algorithms [leave-one-out (LOO); independent sample (IDS); Enhanced Z-score Microstructural Assessment of Pathology (EZ-MAP); distribution-corrected z-scores (DisCo-Z); and robust z-scores (ROB-Z)] for identifying SSA using simulated and diffusion tensor imaging data from healthy controls (N = 50). Results indicated that all methods (LOO, IDS, EZ-MAP and DisCo-Z) with the exception of the ROB-Z eliminated spurious differences that are present across artificially created groups following a standard z-transform. However, LOO and IDS consistently overestimated the true number of extrema (i.e., SSA) across all sample sizes and distributions. The EZ-MAP and DisCo-Z algorithms more accurately estimated extrema across most distributions and sample sizes, with the exception of skewed distributions. DTI results indicated that registration algorithm (linear versus non-linear) and blurring kernel size differentially affected the number of extrema in positive versus negative tails. Increasing the blurring kernel size increased the number of extrema, although this effect was much more prominent when a minimum cluster volume was applied to the data. In summary, current results highlight the need to statistically compare the frequency of SSA in control samples or to develop appropriate confidence intervals for patient data.
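The simplest of the compared methods, the leave-one-out (LOO) z-transform, can be sketched directly: each subject's value is standardized against the mean and SD of the remaining sample, and extrema beyond a threshold are counted as subject-specific abnormalities. This is a generic illustration of the LOO idea, not the EZ-MAP or DisCo-Z corrections:

```python
import math

def loo_z_scores(values):
    """Leave-one-out z-scores: each value is standardized against the
    mean and sample SD (ddof=1) of all the *other* values."""
    out = []
    for i, v in enumerate(values):
        rest = values[:i] + values[i + 1:]
        m = sum(rest) / len(rest)
        var = sum((r - m) ** 2 for r in rest) / (len(rest) - 1)
        out.append((v - m) / math.sqrt(var))
    return out

def count_extrema(z_scores, threshold=3.0):
    """Count subject-specific abnormalities with |z| at or beyond threshold."""
    return sum(1 for z in z_scores if abs(z) >= threshold)
```

Note how a single outlier inflates its own LOO score enormously (it never contaminates its reference mean/SD), which is one route to the overestimation of extrema the study reports for LOO at small sample sizes.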

  8. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  9. A novel algorithm of maximin Latin hypercube design using successive local enumeration

    NASA Astrophysics Data System (ADS)

    Zhu, Huaguang; Liu, Li; Long, Teng; Peng, Lei

    2012-05-01

    The design of computer experiments (DoCE) is a key technique in the field of metamodel-based design optimization. Space-filling and projective properties are desired features in DoCE. In this article, a novel algorithm of maximin Latin hypercube design (LHD) using successive local enumeration (SLE) is proposed for generating arbitrary m points in n-dimensional space. Testing results compared with lhsdesign function, binary encoded genetic algorithm (BinGA), permutation encoded genetic algorithm (PermGA) and translational propagation algorithm (TPLHD) indicate that SLE is effective to generate sampling points with good space-filling and projective properties. The accuracies of metamodels built with the sampling points produced by lhsdesign function and SLE are compared to illustrate the preferable performance of SLE. Through the comparative study on efficiency with BinGA, PermGA, and TPLHD, as a novel algorithm of LHD sampling techniques, SLE has good space-filling property and acceptable efficiency.
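The SLE algorithm itself is not reproduced here, but the two desired properties can be made concrete with a simple baseline of the kind SLE is compared against: generate many random Latin hypercube designs (the Latin structure guarantees the projective property) and keep the one with the largest minimum pairwise distance (the maximin space-filling criterion), much like lhsdesign's 'maximin' option. All sizes and counts below are illustrative:

```python
import math
import random
from itertools import combinations

def random_lhd(m, n, rng):
    """One random Latin hypercube design: m points in n dimensions; each
    column is a permutation of cell midpoints, so every 1-D projection
    hits every cell exactly once (the projective property)."""
    cols = []
    for _ in range(n):
        perm = list(range(m))
        rng.shuffle(perm)
        cols.append([(p + 0.5) / m for p in perm])
    return [tuple(col[i] for col in cols) for i in range(m)]

def min_pairwise_dist(points):
    """Maximin criterion: the smallest distance between any two points."""
    return min(math.dist(a, b) for a, b in combinations(points, 2))

def maximin_lhd(m, n, tries=200, seed=0):
    """Best-of-many random search for a maximin LHD (a simple baseline,
    not the successive-local-enumeration algorithm of the paper)."""
    rng = random.Random(seed)
    return max((random_lhd(m, n, rng) for _ in range(tries)),
               key=min_pairwise_dist)
```

Random search scales poorly as m and n grow, which is precisely the gap that structured constructions such as SLE or translational propagation aim to close while preserving both properties.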

  10. A Generic Method for Design of Oligomer-Specific Antibodies

    PubMed Central

    Brännström, Kristoffer; Lindhagen-Persson, Malin; Gharibyan, Anna L.; Iakovleva, Irina; Vestling, Monika; Sellin, Mikael E.; Brännström, Thomas; Morozova-Roche, Ludmilla; Forsgren, Lars; Olofsson, Anders

    2014-01-01

    Antibodies that preferentially and specifically target pathological oligomeric protein and peptide assemblies, as opposed to their monomeric and amyloid counterparts, provide therapeutic and diagnostic opportunities for protein misfolding diseases. Unfortunately, the molecular properties associated with oligomer-specific antibodies are not well understood, and this limits targeted design and development. We present here a generic method that enables the design and optimisation of oligomer-specific antibodies. The method takes a two-step approach where discrimination between oligomers and fibrils is first accomplished through identification of cryptic epitopes exclusively buried within the structure of the fibrillar form. The second step discriminates between monomers and oligomers based on differences in avidity. We show here that a simple divalent mode of interaction, as within e.g. the IgG isotype, can increase the binding strength of the antibody up to 1500 times compared to its monovalent counterpart. We expose how the ability to bind oligomers is affected by the monovalent affinity and the turnover rate of the binding and, importantly, also how oligomer specificity is only valid within a specific concentration range. We provide an example of the method by creating and characterising a spectrum of different monoclonal antibodies against both the Aβ peptide and α-synuclein that are associated with Alzheimer's and Parkinson's diseases, respectively. The approach is however generic, does not require identification of oligomer-specific architectures, and is, in essence, applicable to all polypeptides that form oligomeric and fibrillar assemblies. PMID:24618582

  11. Design of Protein Multi-specificity Using an Independent Sequence Search Reduces the Barrier to Low Energy Sequences

    PubMed Central

    Sevy, Alexander M.; Jacobs, Tim M.; Crowe, James E.; Meiler, Jens

    2015-01-01

    Computational protein design has found great success in engineering proteins for thermodynamic stability, binding specificity, or enzymatic activity in a ‘single state’ design (SSD) paradigm. Multi-specificity design (MSD), on the other hand, involves considering the stability of multiple protein states simultaneously. We have developed a novel MSD algorithm, which we refer to as REstrained CONvergence in multi-specificity design (RECON). The algorithm allows each state to adopt its own sequence throughout the design process rather than enforcing a single sequence on all states. Convergence to a single sequence is encouraged through an incrementally increasing convergence restraint for corresponding positions. Compared to MSD algorithms that enforce (constrain) an identical sequence on all states the energy landscape is simplified, which accelerates the search drastically. As a result, RECON can readily be used in simulations with a flexible protein backbone. We have benchmarked RECON on two design tasks. First, we designed antibodies derived from a common germline gene against their diverse targets to assess recovery of the germline, polyspecific sequence. Second, we design “promiscuous”, polyspecific proteins against all binding partners and measure recovery of the native sequence. We show that RECON is able to efficiently recover native-like, biologically relevant sequences in this diverse set of protein complexes. PMID:26147100

  12. Design of Protein Multi-specificity Using an Independent Sequence Search Reduces the Barrier to Low Energy Sequences.

    PubMed

    Sevy, Alexander M; Jacobs, Tim M; Crowe, James E; Meiler, Jens

    2015-07-01

    Computational protein design has found great success in engineering proteins for thermodynamic stability, binding specificity, or enzymatic activity in a 'single state' design (SSD) paradigm. Multi-specificity design (MSD), on the other hand, involves considering the stability of multiple protein states simultaneously. We have developed a novel MSD algorithm, which we refer to as REstrained CONvergence in multi-specificity design (RECON). The algorithm allows each state to adopt its own sequence throughout the design process rather than enforcing a single sequence on all states. Convergence to a single sequence is encouraged through an incrementally increasing convergence restraint for corresponding positions. Compared to MSD algorithms that enforce (constrain) an identical sequence on all states the energy landscape is simplified, which accelerates the search drastically. As a result, RECON can readily be used in simulations with a flexible protein backbone. We have benchmarked RECON on two design tasks. First, we designed antibodies derived from a common germline gene against their diverse targets to assess recovery of the germline, polyspecific sequence. Second, we design "promiscuous", polyspecific proteins against all binding partners and measure recovery of the native sequence. We show that RECON is able to efficiently recover native-like, biologically relevant sequences in this diverse set of protein complexes.
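    The restrained-convergence idea can be illustrated with a toy Monte Carlo search (a sketch, not the authors' implementation): each state keeps its own sequence, and a disagreement penalty whose weight `lam` grows each round pulls the states toward a single consensus. The energy functions `e1`/`e2`, the four-letter alphabet, and all parameters below are made-up stand-ins.

```python
import random

def recon_sketch(state_energies, length, alphabet="ACDE", rounds=6, steps=200, seed=1):
    """Toy RECON-style multi-specificity search (hypothetical stub energies).

    Each state keeps its own sequence; a convergence restraint that grows
    each round penalises disagreement at corresponding positions, driving
    the states toward one consensus sequence."""
    rng = random.Random(seed)
    n_states = len(state_energies)
    seqs = [[rng.choice(alphabet) for _ in range(length)] for _ in range(n_states)]

    def restraint(pos):
        # number of state pairs that disagree at this position
        letters = [s[pos] for s in seqs]
        return sum(letters[i] != letters[j]
                   for i in range(n_states) for j in range(i + 1, n_states))

    for rnd in range(rounds):
        lam = 0.5 * rnd          # incrementally increasing restraint weight
        for _ in range(steps):
            st = rng.randrange(n_states)
            pos = rng.randrange(length)
            old = seqs[st][pos]
            new = rng.choice(alphabet)
            before = state_energies[st](seqs[st]) + lam * restraint(pos)
            seqs[st][pos] = new
            after = state_energies[st](seqs[st]) + lam * restraint(pos)
            if after > before:   # greedy: revert moves that raise the score
                seqs[st][pos] = old
    return ["".join(s) for s in seqs]

# Two hypothetical states that prefer different residues everywhere:
e1 = lambda s: sum(c != "A" for c in s)   # state 1 prefers all-A
e2 = lambda s: sum(c != "C" for c in s)   # state 2 prefers all-C
print(recon_sketch([e1, e2], length=8))
```

    With `lam = 0` the states optimise independently; as `lam` ramps, the restraint trades single-state energy for agreement, which is the barrier-lowering effect the abstract describes.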

  13. Use of Algorithm of Changes for Optimal Design of Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.

    2010-05-01

    For economic reasons, the optimal design of heat exchangers is required. Heat exchanger design is usually an iterative process in which the design conditions, equipment geometries, and the heat transfer and friction factor correlations are all involved. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost, so the process is cumbersome and the result often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] to heat exchanger design, with results that outperformed the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of shell-and-tube heat exchangers [3]. This new method, based on the I Ching, was developed originally by the authors: the hexagram operations of the I Ching are generalized to the binary-string case, and an iterative procedure that imitates I Ching inference is defined. Following [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (optimized) variables, and the cost of the heat exchanger was taken as the objective function. The case study shows that the algorithm of changes is comparable to the GA: both methods can find the optimal solution in a short time. However, since it does not interchange information between binary strings, the algorithm of changes has an advantage over the GA in parallel computation.

  14. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculation methods. For this purpose, we investigated the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimates. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case-specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be
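    The recommended margin recipes combine a relative term and a fixed term; a one-line helper (illustrative only, using the percentages quoted in the abstract) makes the arithmetic explicit:

```python
def range_margin_mm(range_mm, pct, fixed_mm=1.2):
    """Water-equivalent range margin: pct percent of the nominal range plus a fixed term."""
    return pct / 100.0 * range_mm + fixed_mm

# Liver/prostate recipe (2.8% + 1.2 mm) applied to a 100 mm field:
print(round(range_margin_mm(100, 2.8), 1))  # -> 4.0
```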

  15. Design Genetic Algorithm Optimization Education Software Based Fuzzy Controller for a Tricopter Fly Path Planning

    ERIC Educational Resources Information Center

    Tran, Huu-Khoa; Chiou, Juing -Shian; Peng, Shou-Tao

    2016-01-01

    In this paper, a Genetic Algorithm Optimization (GAO) education software based Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is designed. The generated flight trajectories integrate the Scaling Factor (SF) fuzzy controller gains optimized by the GAO algorithm. The…

  16. Antenna Design Using the Efficient Global Optimization (EGO) Algorithm

    DTIC Science & Technology

    2011-05-20

    small antennas in a parasitic super directive array configuration. (b) A comparison of the driven super directive gain achievable with these...we discuss antenna design optimization using EGO. The first antenna design is a parasitic super directive array where we compare EGO with a classic...In Section 4 (RESULTS AND DISCUSSION) we present design optimizations for parasitic, super directive arrays; wideband antenna design; and the

  17. Comparing State-of-the-Art Evolutionary Multi-Objective Algorithms for Long-Term Groundwater Monitoring Design

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Kollat, J. B.

    2005-12-01

    This study demonstrates the effectiveness of a modified version of Deb's Non-Dominated Sorted Genetic Algorithm II (NSGAII), which the authors have named the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (Epsilon-NSGAII), at solving a four objective long-term groundwater monitoring (LTM) design test case. The Epsilon-NSGAII incorporates prior theoretical competent evolutionary algorithm (EA) design concepts and epsilon-dominance archiving to improve the original NSGAII's efficiency, reliability, and ease-of-use. This algorithm eliminates much of the traditional trial-and-error parameterization associated with evolutionary multi-objective optimization (EMO) through epsilon-dominance archiving, dynamic population sizing, and automatic termination. The effectiveness and reliability of the new algorithm is compared to the original NSGAII as well as two other benchmark multi-objective evolutionary algorithms (MOEAs), the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (Epsilon-MOEA) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). These MOEAs have been selected because they have been demonstrated to be highly effective at solving numerous multi-objective problems. The results presented in this study indicate superior performance of the Epsilon-NSGAII in terms of the hypervolume indicator, unary Epsilon-indicator, and first-order empirical attainment function metrics. In addition, the runtime metric results indicate that the diversity and convergence dynamics of the Epsilon-NSGAII are competitive to superior relative to the SPEA2, with both algorithms greatly outperforming the NSGAII and Epsilon-MOEA in terms of these metrics. The improvements in performance of the Epsilon-NSGAII over its parent algorithm the NSGAII demonstrate that the application of Epsilon-dominance archiving, dynamic population sizing with archive injection, and automatic termination greatly improve algorithm efficiency and reliability. In addition, the usability of
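    Epsilon-dominance archiving, the key mechanism credited above, can be sketched as follows for a minimisation problem: objective vectors are snapped onto a grid of epsilon-boxes, at most one point is kept per box, and boxes dominated by another occupied box are discarded. This is a simplified illustration, not the Epsilon-NSGAII code; the tie-break within a box (smaller objective sum) is an assumption.

```python
def eps_box(obj, eps):
    # index of the epsilon-box containing an objective vector (minimisation)
    return tuple(int(o // e) for o, e in zip(obj, eps))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_archive(points, eps):
    """Keep at most one point per epsilon-box; boxes dominated by another
    occupied box are discarded (coarse epsilon-dominance archive)."""
    boxes = {}
    for p in points:
        b = eps_box(p, eps)
        keep = True
        for ob in list(boxes):
            if dominates(ob, b):          # an occupied box dominates this one
                keep = False
                break
            if dominates(b, ob):          # this box dominates an occupied one
                del boxes[ob]
        if keep and (b not in boxes or sum(p) < sum(boxes[b])):
            boxes[b] = p                  # simple within-box tie-break
    return sorted(boxes.values())

pts = [(0.1, 0.9), (0.12, 0.88), (0.5, 0.5), (0.9, 0.1), (0.95, 0.95)]
print(eps_archive(pts, eps=(0.2, 0.2)))   # -> [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
```

    The archive size is bounded by the box grid, which is what removes the population-sizing trial and error the abstract mentions.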

  18. INCORPORATING ENVIRONMENTAL AND ECONOMIC CONSIDERATIONS INTO PROCESS DESIGN: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...

  19. Specific issues of the design for the elderly

    NASA Astrophysics Data System (ADS)

    Sebesi, S. B.; Groza, H. L.; Ianoşi, A.; Dimitrova, A.; Mândru, D.

    2016-08-01

    Current demographic studies show that the number of elderly people is increasing constantly. Considering their motor, sensorial and cognitive constraints and restrictions, a new field of Assistive Technology is developing, focussed on the design and development of a wide range of devices, apparatus, equipment and systems dedicated to their independent and safe life. In this paper, a systematisation of existing gero-technical systems is proposed, emphasising today's trends in this field. The specific issues of designing this kind of product are identified and analysed. Two approaches of the authors are finally presented: wearable suits for aging and disability simulation, and tele-monitoring of the elderly.

  20. Space tug thermal control. [design criteria and specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    It was determined that the space tug will require the capability to perform its mission within a broad range of thermal environments, with currently planned mission durations of up to seven days. An investigation was therefore conducted to define a thermal design for the forward and intertank compartments and the fuel cell heat rejection system that satisfies tug requirements for low inclination geosynchronous deploy and retrieve missions. Passive concepts were demonstrated analytically for both the forward and intertank compartments, and a worst-case external heating environment was determined for use during the study. The thermal control system specifications and designs which resulted from the research are shown.

  1. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell has been designed and tested to deliver high capacity at a C/1.5 discharge rate. Its specific energy yield of 60.6 wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 AH the highest capacity yet made at a discharge rate this high in the 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters, performance, and future test plans are described.

  2. An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1999-01-01

    The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.

  3. Computational model design specification for Phase 1 of the Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.

    1991-07-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions from nuclear operations at Hanford since their inception in 1944. The purpose of this report is to outline the basic algorithm and necessary computer calculations to be used to calculate radiation doses to specific and hypothetical individuals in the vicinity of Hanford. The system design requirements, those things that must be accomplished, are defined. The system design specifications, the techniques by which those requirements are met, are outlined. Included are the basic equations, logic diagrams, and preliminary definitions of the nature of each input distribution. 4 refs., 10 figs., 9 tabs.

  4. An effective algorithm for the generation of patient-specific Purkinje networks in computational electrocardiology

    NASA Astrophysics Data System (ADS)

    Palamara, Simone; Vergara, Christian; Faggiano, Elena; Nobile, Fabio

    2015-02-01

    The Purkinje network is responsible for the fast and coordinated distribution of the electrical impulse in the ventricle that triggers its contraction. Therefore, it is necessary to model its presence to obtain an accurate patient-specific model of the ventricular electrical activation. In this paper, we present an efficient algorithm for the generation of a patient-specific Purkinje network, driven by measures of the electrical activation acquired on the endocardium. The proposed method provides a correction of an initial network, generated by means of a fractal law, and it is based on the solution of Eikonal problems both in the muscle and in the Purkinje network. We present several numerical results both in an ideal geometry with synthetic data and in a real geometry with patient-specific clinical measures. These results highlight an improvement of the accuracy provided by the patient-specific Purkinje network with respect to the initial one. In particular, a cross-validation test shows an accuracy increase of 19% when only the 3% of the total points are used to generate the network, whereas an increment of 44% is observed when a random noise equal to 20% of the maximum value of the clinical data is added to the measures.

  5. Application of heuristic optimization techniques and algorithm tuning to multilayered sorptive barrier design.

    PubMed

    Matott, L Shawn; Bartelt-Hunt, Shannon L; Rabideau, Alan J; Fowler, K R

    2006-10-15

    Although heuristic optimization techniques are increasingly applied in environmental engineering applications, algorithm selection and configuration are often approached in an ad hoc fashion. In this study, the design of a multilayer sorptive barrier system served as a benchmark problem for evaluating several algorithm-tuning procedures, as applied to three global optimization techniques (genetic algorithms, simulated annealing, and particle swarm optimization). Each design problem was configured as a combinatorial optimization in which sorptive materials were selected for inclusion in a landfill liner to minimize the transport of three common organic contaminants. Relative to multilayer sorptive barrier design, study results indicate (i) the binary-coded genetic algorithm is highly efficient and requires minimal tuning, (ii) constraint violations must be carefully integrated to avoid poor algorithm convergence, and (iii) search algorithm performance is strongly influenced by the physical-chemical properties of the organic contaminants of concern. More generally, the results suggest that formal algorithm tuning, which has not been widely applied to environmental engineering optimization, can significantly improve algorithm performance and provide insight into the physical processes that control environmental systems.
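    As an illustration of one of the tuned heuristics, a generic simulated-annealing loop over a combinatorial layer/material assignment might look like the sketch below; the `toy` cost function is a made-up stand-in for the contaminant-transport simulation, and `t0`, `cooling`, and `steps` are exactly the kind of tuning parameters the study examines.

```python
import math
import random

def anneal_layers(n_layers, materials, cost, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Simulated annealing over a combinatorial layer/material assignment.

    `cost` maps a tuple of material indices to a scalar to be minimised."""
    rng = random.Random(seed)
    cur = tuple(rng.randrange(materials) for _ in range(n_layers))
    cur_c = cost(cur)
    best, best_c, t = cur, cur_c, t0
    for _ in range(steps):
        i = rng.randrange(n_layers)
        cand = cur[:i] + (rng.randrange(materials),) + cur[i + 1:]
        c = cost(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if c < cur_c or rng.random() < math.exp((cur_c - c) / max(t, 1e-9)):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= cooling
    return best, best_c

# Hypothetical separable cost: material index 2 is best for every layer.
toy = lambda s: sum((m - 2) ** 2 for m in s)
print(anneal_layers(4, 5, toy))
```

    Rerunning the search while sweeping `t0` and `cooling` is a minimal form of the algorithm tuning the study formalises.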

  6. Design and FPGA implementation of real-time automatic image enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Dong, GuoWei; Hou, ZuoXun; Tang, Qi; Pan, Zheng; Li, Xin

    2016-11-01

    In order to improve image processing quality and boost the processing rate, this paper proposes a real-time automatic image enhancement algorithm. It is based on the histogram equalization algorithm and the piecewise linear enhancement algorithm, and it calculates the relationship between the histogram and the piecewise linear function by analyzing the histogram distribution for adaptive image enhancement. Furthermore, the corresponding FPGA processing modules are designed to implement the methods. In particular, high-performance parallel pipelined technology and the inner parallel processing ability of the modules are exploited to ensure the real-time processing capability of the complete system. Simulations and experiments show that the algorithm, together with its FPGA hardware implementation, achieves low hardware cost, high real-time performance, and good processing performance in different sceneries. The algorithm can effectively improve image quality and has wide prospects in the image processing field.
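    Global histogram equalisation, one of the two building blocks named above, can be sketched in a few lines (a plain software version; the paper's contribution is the adaptive combination and its pipelined FPGA implementation):

```python
def equalize(img, levels=256):
    """Global histogram equalisation on a 2-D list of grey levels."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:                                # cumulative histogram
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c)           # first non-zero CDF value
    n = len(flat)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A low-contrast 2x3 patch is stretched over the full grey range:
print(equalize([[100, 101, 102], [100, 103, 103]]))  # -> [[0, 64, 128], [0, 255, 255]]
```

    The per-pixel work is a single lookup through `lut`, which is what makes the method amenable to a pipelined hardware datapath.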

  7. The Design and Analysis of Efficient Learning Algorithms

    DTIC Science & Technology

    1991-01-01

    example, a car buzzer buzzes if the key is in the ignition and the door is open, or if the motor is on and the seat belt is unfastened. Let K, D, M... belt is fastened. Then the buzzer buzzes if and only if the Boolean formula (K AND D) OR (M AND NOT(S)) evaluates to true. Here, AND, OR and NOT are...early childhood development. Wilson [87] studies so-called genetic algorithms for learning by "animats" in unfamiliar environments. Kuipers and Byun

  8. Precise Specification of Design Pattern Structure and Behaviour

    NASA Astrophysics Data System (ADS)

    Sterritt, Ashley; Clarke, Siobhán; Cahill, Vinny

    Applying design patterns while developing a software system can improve its non-functional properties, such as extensibility and loose coupling. Precise specification of structure and behaviour communicates the invariants imposed by a pattern on a conforming implementation and enables formal software verification. Many existing design-pattern specification languages (DPSLs) focus on class structure alone, while those that do address behaviour suffer from a lack of expressiveness and/or imprecise semantics. In particular, in a review of existing work, three invariant categories were found to be inexpressible in state-of-the-art DPSLs: dependency, object state and data-structure. This paper presents Alas: a precise specification language that supports design-pattern descriptions including these invariant categories. The language is based on UML Class and Sequence diagrams with modified syntax and semantics. In this paper, the meaning of the presented invariants is formalized and relevant ambiguities in the UML Standard are clarified. We have evaluated Alas by specifying the widely-used Gang of Four pattern catalog and identified patterns that benefitted from the added expressiveness and semantics of Alas.

  9. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described which extends the key-principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that pays respect to the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach.
    Highlights:
    • We describe a new approach to the segmentation of FIB-SEM images of porous media.
    • The first and last occurrences of structures are detected by analysing the z-profiles.
    • The algorithm is validated by comparing it to a manual segmentation.
    • The new approach shows significantly fewer artifacts than a thresholding approach.
    • A structural analysis also shows improved results for the obtained microstructure.

  10. Dye laser amplifier including a specifically designed diffuser assembly

    DOEpatents

    Davin, James; Johnston, James P.

    1992-01-01

    A large (high flow rate) dye laser amplifier is disclosed herein, in which a continuously replenished supply of dye is excited by a first light beam, specifically a copper vapor laser beam, in order to amplify the intensity of a second, different light beam, specifically a dye beam, passing through the dye. This amplifier includes a dye cell defining a dye chamber through which a continuous stream of dye is caused to pass at a relatively high flow rate, and a specifically designed diffuser assembly for slowing down the flow of dye while, at the same time, assuring that the dye stream flows through the diffuser assembly in a stable manner.

  11. Navigation Constellation Design Using a Multi-Objective Genetic Algorithm

    DTIC Science & Technology

    2015-03-26

    used include Walker constellation parameters, orbital elements, and transmit power. The results show that the constellation design tool produces...10 2.1.1 Orbit Types. .................................................................................................. 2-11 2.1.2 Astrodynamics...Constellation Design Problem ........................................................ 1-2 Figure 2-1: Classical Orbital Elements

  12. A Pareto Optimal Design Analysis of Magnetic Thrust Bearings Using Multi-Objective Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Rao, Jagu S.; Tiwari, R.

    2015-03-01

    A Pareto optimal design analysis is carried out on the design of magnetic thrust bearings using multi-objective genetic algorithms. Two configurations of bearings have been considered with the minimization of power loss and weight of the bearing as objectives for performance comparisons. A multi-objective evolutionary algorithm is utilized to generate Pareto frontiers at different operating loads. As the load increases, the Pareto frontier reduces to a single point at a peak load for both configurations. Pareto optimal design analysis is used to study characteristics of design variables and other parameters. Three distinct operating load zones have been observed.

  13. Optimisation of the design of shell and double concentric tubes heat exchanger using the Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Baadache, Khireddine; Bougriou, Chérif

    2015-10-01

    This paper presents the use of a Genetic Algorithm in the sizing of a shell and double concentric tube heat exchanger, where the objective function is the total cost, i.e., the sum of the capital cost of the device and the operating cost. Techno-economic methods based on optimisation of heat exchanger sizing yield a device that satisfies the technical specification at the lowest possible operating and investment costs. The logarithmic mean temperature difference method was used for the calculation of the heat exchange area. The new heat exchanger is more profitable and more economical than the old heat exchanger: the total cost decreased by about 13.16 %, which represents 7,250.8 euros of the lump sum. The design modifications and the use of the Genetic Algorithm for the sizing also improve the compactness of the heat exchanger; the study showed that the heat transfer surface area per unit volume can be increased up to 340 m2/m3.
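    The logarithmic mean temperature difference sizing step mentioned above reduces to Q = U · A · LMTD; a minimal sketch with illustrative numbers (not taken from the paper):

```python
import math

def lmtd(dt1, dt2):
    """Log-mean temperature difference between the two exchanger ends."""
    if abs(dt1 - dt2) < 1e-12:
        return dt1                       # limit case: equal end differences
    return (dt1 - dt2) / math.log(dt1 / dt2)

def required_area(q_w, u, dt1, dt2):
    """Heat-transfer area from Q = U * A * LMTD (SI units)."""
    return q_w / (u * lmtd(dt1, dt2))

# 50 kW duty, U = 500 W/m^2.K, end temperature differences 40 K and 20 K:
print(round(required_area(50e3, 500, 40, 20), 2))  # -> 3.47
```

    An optimiser such as the GA then searches the geometry that delivers this area at minimum total cost.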

  14. Design of Clinical Support Systems Using Integrated Genetic Algorithm and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Fu; Huang, Yung-Fa; Jiang, Xiaoyi; Hsu, Yuan-Nian; Lin, Hsuan-Hung

    Clinical decision support systems (CDSSs) provide knowledge and specific information to clinicians, enhancing diagnostic efficiency and improving healthcare quality. An appropriate CDSS can greatly elevate patient safety, improve healthcare quality, and increase cost-effectiveness. The support vector machine (SVM) is believed to be superior to traditional statistical and neural network classifiers; however, it is critical to determine a suitable combination of SVM parameters for good classification performance. A genetic algorithm (GA) can find an optimal solution within an acceptable time and is faster than a greedy algorithm with an exhaustive search strategy. By taking advantage of the GA's ability to quickly select salient features and adjust SVM parameters, a method using an integrated GA and SVM (IGS), which differs from the traditional method of using the GA for feature selection and the SVM for classification, was used to design CDSSs for prediction of successful ventilation weaning, diagnosis of patients with severe obstructive sleep apnea, and discrimination of different cell types from Pap smears. The results show that IGS is better than methods using the SVM alone or a linear discriminator.

  15. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  16. Optimal design of optical reference signals by use of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Saez-Landete, José; Salcedo-Sanz, Sancho; Rosa-Zurera, Manuel; Alonso, José; Bernabeu, Eusebio

    2005-10-01

    A new technique for the generation of optical reference signals with optimal properties is presented. In grating measurement systems a reference signal is needed to achieve an absolute measurement of the position. The optical signal is the autocorrelation of two codes with binary transmittance. For a long time, the design of this type of code has required great computational effort, which limits the size of the code to ~30 elements. Recently, the application of the dividing rectangles (DIRECT) algorithm has allowed the automatic design of codes up to 100 elements. Because of the binary nature of the problem and the parallel processing of the genetic algorithms, these algorithms are efficient tools for obtaining codes with particular autocorrelation properties. We design optimum zero reference codes with arbitrary length by means of a genetic algorithm enhanced with a restricted search operator.
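    The quality of a zero reference code is governed by the autocorrelation of its binary transmittance; a short sketch of the objective a GA would maximise (the 12-element code below is hypothetical, not from the paper):

```python
def autocorrelation(code):
    """Aperiodic autocorrelation of a binary (0/1 transmittance) code."""
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k)) for k in range(n)]

def sidelobe_ratio(code):
    """Central peak over highest side lobe; larger means a sharper zero signal."""
    ac = autocorrelation(code)
    return ac[0] / max(ac[1:])

# A hypothetical 12-element code; a GA search would maximise this ratio
# over the space of codes with a fixed number of transparent slits.
code = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
print(autocorrelation(code), sidelobe_ratio(code))
```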

  17. A hybrid algorithm for transonic airfoil and wing design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Smith, Leigh A.

    1987-01-01

    The present method for the design of transonic airfoils and wings employs a predictor/corrector approach in which an analysis code calculates the flowfield for an initial geometry, then modifies it on the basis of the difference between calculated and target pressures. This allows the design method to be straightforwardly coupled with any existing analysis code, as presently undertaken with several two- and three-dimensional potential flow codes. The results obtained indicate that the method is robust and accurate, even in the cases of airfoils with strongly supercritical flow and shocks. The design codes are noted to require computational resources typical of current pure-inverse methods.
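    The predictor/corrector idea can be sketched as a fixed-point loop in which the geometry is nudged in proportion to the pressure mismatch; the identity `analyze` stand-in below replaces the real flow solver, and the proportional update rule is a simplification of the method's actual correction.

```python
def design_iterate(geometry, target_p, analyze, relax=0.5, tol=1e-6, max_iter=200):
    """Predictor/corrector loop: analyse the current geometry, then nudge it
    in proportion to the difference between target and computed pressures."""
    for _ in range(max_iter):
        p = analyze(geometry)
        residual = max(abs(t - c) for t, c in zip(target_p, p))
        if residual < tol:
            break
        geometry = [g + relax * (t - c) for g, t, c in zip(geometry, target_p, p)]
    return geometry

# Hypothetical linear "analysis code": pressure equals the geometry ordinate,
# so the loop simply relaxes the geometry onto the target pressures.
analyze = lambda g: list(g)
print([round(x, 3) for x in design_iterate([0.0, 0.0], [0.2, -0.1], analyze)])  # -> [0.2, -0.1]
```

    Because the loop treats `analyze` as a black box, any existing analysis code can be slotted in, which is the coupling property the abstract emphasises.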

  18. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  19. New Meta Algorithms for Engineering Design Using Surrogate Functions

    DTIC Science & Technology

    2005-04-01

    Dennis, Jr., and Parviz Moin, Optimal aeroacoustic shape design using the surrogate management framework, Optimization and Engineering, 5(2):101-122, 2004. ... Alison L. Marsden, Meng Wang, J. E. Dennis, Jr., and Parviz Moin, Suppression of vortex-shedding noise via derivative-free shape opti... solution. Contact: Dr. Dominique Orban, (514) 340-4711 ext. 5967. ... Trailing edge design: We have a collaboration with Parviz Moin's turbulent flow group in

  20. Clairvoyant fusion: a new methodology for designing robust detection algorithms

    NASA Astrophysics Data System (ADS)

    Schaum, Alan

    2016-10-01

    Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently, many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed, and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete parameter problems.
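For the discrete-parameter case, the fused rule reduces to simple logic over the clairvoyants' decisions. A minimal sketch, assuming a toy Gaussian mean-shift model with an unknown sign (an illustration of the idea, not one of Schaum's derivations):

```python
import math

def likelihood_ratio(x, mu, sigma=1.0):
    """LR at x for H1: N(mu, sigma^2) versus H0: N(0, sigma^2)."""
    return math.exp((2.0 * mu * x - mu * mu) / (2.0 * sigma * sigma))

def clairvoyant_fusion(x, mus, threshold):
    """OR-fuse the clairvoyants: detect when any one's LR clears the threshold.
    Equivalently, compare the maximum of the likelihood ratios to the threshold."""
    return max(likelihood_ratio(x, mu) for mu in mus) > threshold

# Composite hypothesis: the mean shift is +3 or -3, sign unknown.
detected = clairvoyant_fusion(3.1, (3.0, -3.0), threshold=5.0)
```

Each clairvoyant is the optimal detector for one candidate parameter value; the OR rule fires on a strong deviation in either direction while rejecting observations near the null.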

  1. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Irwin, Ryan W.; Tinker, Michael L.

    2005-01-01

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.
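The penalty-function idea mentioned above can be illustrated with a minimal sketch: infeasible designs are not discarded but have a scaled measure of constraint violation added to their fitness, so the GA can still traverse infeasible regions of the design space. The toy objective, constraint, and scaling ratio below are assumptions for illustration, not the NEP subsystem models.

```python
def penalized_fitness(objective, constraints, x, scale=10.0):
    """Penalty-function fitness for a minimizing GA: the raw objective plus a
    scaled sum of constraint violations (each g(x) <= 0 when feasible).
    The scaling ratio trades exploration of infeasible designs against
    convergence to the feasible region."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + scale * violation

# Hypothetical one-variable design: minimize mass x^2 subject to x >= 2.
mass = lambda x: x * x
constraints = [lambda x: 2.0 - x]   # g(x) = 2 - x <= 0
```

Raising the scaling ratio drives the population toward feasibility faster at the cost of exploration, which is the efficiency tradeoff the abstract investigates.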

  2. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    SciTech Connect

    Irwin, Ryan W.; Tinker, Michael L.

    2005-02-06

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.

  3. Modular Integrated Stackable Layers (MISL) 1.1 Design Specification. Design Guideline Document

    NASA Technical Reports Server (NTRS)

    Yim, Hester J.

    2012-01-01

    This document establishes the design guideline of the Modular Instrumentation Data Acquisition (MI-DAQ) system, utilizing several designs available in EV. The MI-DAQ provides options to customers depending on their system requirements, i.e., a 28V interface power supply, a low-power battery-operated system, a low-power microcontroller, a higher-performance microcontroller, a USB interface, an Ethernet interface, wireless communication, various sensor interfaces, etc. Depending on the customer's requirements, each functional board can be stacked up, from a power supply at the bottom of the stack to user interfaces at higher levels. The stack-up of boards is accomplished by predefined and standardized power bus and data bus connections, which are included in this document along with other physical and electrical guidelines. This guideline also provides information for new design options. This specification is the product of a collaboration between NASA/JSC/EV and Texas A&M University. The goal of the collaboration is to open-source the specification and allow outside entities to design, build, and market modules that are compatible with the specification. NASA has designed and is using numerous modules that are compatible with this specification. A limited number of these modules will also be released as open-source designs to support the collaboration. The released designs are listed in the Applicable Documents.

  4. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 °C capacity of 113.9 Ah is the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling, and its successful recovery by means of additional activation procedures.

  5. Advanced Wet Tantalum Capacitors: Design, Specifications and Performance

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2017-01-01

    Insertion of new types of commercial, high volumetric efficiency wet tantalum capacitors in space systems requires reassessment of the existing quality assurance approaches that were developed for capacitors manufactured to MIL-PRF-39006 requirements. A specific feature of wet electrolytic capacitors is that leakage currents flowing through the electrolyte can cause gas generation, resulting in a build-up of internal gas pressure and rupture of the case. The risk associated with excessive leakage currents and increased pressure is greater for high-value advanced wet tantalum capacitors, but it has not yet been properly evaluated. This presentation gives a review of the specifics of the design, performance, and potential reliability risks associated with advanced wet tantalum capacitors. Problems related to setting adequate requirements for DPA, leakage currents, hermeticity, stability at low and high temperatures, ripple currents for parts operating in vacuum, and random vibration testing are discussed. Recommendations for screening and qualification to reduce the risk of failures are suggested.

  6. Advanced Wet Tantalum Capacitors: Design, Specifications and Performance

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2016-01-01

    Insertion of new types of commercial, high volumetric efficiency wet tantalum capacitors in space systems requires reassessment of the existing quality assurance approaches that were developed for capacitors manufactured to MIL-PRF-39006 requirements. A specific feature of wet electrolytic capacitors is that leakage currents flowing through the electrolyte can cause gas generation, resulting in a build-up of internal gas pressure and rupture of the case. The risk associated with excessive leakage currents and increased pressure is greater for high-value advanced wet tantalum capacitors, but it has not yet been properly evaluated. This presentation gives a review of the specifics of the design, performance, and potential reliability risks associated with advanced wet tantalum capacitors. Problems related to setting adequate requirements for DPA, leakage currents, hermeticity, stability at low and high temperatures, ripple currents for parts operating in vacuum, and random vibration testing are discussed. Recommendations for screening and qualification to reduce the risk of failures are suggested.

  7. Towards the Design of a Patient-Specific Virtual Tumour

    PubMed Central

    Caraguel, Flavien; Lesart, Anne-Cécile; Estève, François; van der Sanden, Boudewijn

    2016-01-01

    The design of a patient-specific virtual tumour is an important step towards Personalized Medicine. However, this requires capturing the description of many key events of tumour development, including angiogenesis, matrix remodelling, hypoxia, and cell-state heterogeneity, all of which influence the tumour growth kinetics and the degree of tumour invasiveness. To that end, an integrated hybrid and multiscale approach has been developed, based on data acquired on a preclinical mouse model, as a proof of concept. Fluorescence imaging is exploited to build case-specific virtual tumours. Numerical simulations show that the virtual tumour matches the characteristics and spatiotemporal evolution of its real counterpart. We achieved this by combining image analysis and physiological modelling to accurately describe the evolution of different tumour cases over a month. The development of such models is essential, since a dedicated virtual tumour would be the perfect tool to identify the optimum therapeutic strategies that would make Personalized Medicine truly achievable. PMID:28096895

  8. Designing pressure garments capable of exerting specific pressures on limbs.

    PubMed

    Macintyre, Lisa

    2007-08-01

    Pressure garments have been used prophylactically and to treat hypertrophic scars, resulting from serious burns, since the early 1970s. They are custom-made from elastic fabrics by commercial producers and hospital staff. However, no clear scientifically established method has ever been published for their design and manufacture. Previous work [2] identified the most commonly used fabrics and construction methods for the production of pressure garments by hospital staff in UK burn units. These methods were evaluated by measuring pressures delivered to both cylinder models and to human limbs using I-scan pressure sensors. A new calibration method was developed for the I-scan system to enable measurement of low interface pressures to an accuracy of +/-2.5 mmHg. The effects of cylinder/limb circumference and pressure garment design on the pressures exerted were established. These measurements confirm the limitations of current pressure garment construction methods used in UK hospitals. A new method for designing pressure garments that will exert specific known pressures is proposed and evaluated for human thighs. Evaluation of the proposed design method is ongoing for other body parts.

  9. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  10. Design and Evaluation of Tumor-Specific Dendrimer Epigenetic Therapeutics.

    PubMed

    Zong, Hong; Shah, Dhavan; Selwa, Katherine; Tsuchida, Ryan E; Rattan, Rahul; Mohan, Jay; Stein, Adam B; Otis, James B; Goonewardena, Sascha N

    2015-06-01

    Histone deacetylase inhibitors (HDACi) are promising therapeutics for cancer. HDACi alter the epigenetic state of tumors and provide a unique approach to treat cancer. Although studies with HDACi have shown promise in some cancers, variable efficacy and off-target effects have limited their use. To overcome some of the challenges of traditional HDACi, we sought to use a tumor-specific dendrimer scaffold to deliver HDACi directly to cancer cells. Here we report the design and evaluation of tumor-specific dendrimer-HDACi conjugates. The HDACi was conjugated to the dendrimer using an ester linkage through its hydroxamic acid group, inactivating the HDACi until it is released from the dendrimer. Using a cancer cell model, we demonstrate the functionality of the tumor-specific dendrimer-HDACi conjugates. Furthermore, we demonstrate that unlike traditional HDACi, dendrimer-HDACi conjugates do not affect tumor-associated macrophages, a recently recognized mechanism through which drug resistance emerges. We anticipate that this new class of cell-specific epigenetic therapeutics will have tremendous potential in the treatment of cancer.

  11. Aerodynamics Design and Genetic Algorithms for Optimization of Airship Bodies

    NASA Astrophysics Data System (ADS)

    Nejati, Vahid; Matsuuchi, Kazuo

    A special and effective aerodynamics calculation method has been applied to the flow field around a body of revolution to find the drag coefficient over a wide range of Reynolds numbers. The body profile is described by a first-order continuous axial singularity distribution. The solution of the direct problem then gives the radius and inviscid velocity distribution. Viscous effects are considered by means of an integral boundary-layer procedure, and the transition location is determined with a forced-transition criterion. By avoiding profiles that result in separation of the boundary layer, the drag can be calculated at the end of the body using Young's formula. In this study, a powerful optimization procedure known as Genetic Algorithms (GA) is used for the first time in the shape optimization of airship hulls. GA is an artificial intelligence technique for searching large spaces that strikes a remarkable balance between exploration and exploitation of the search space. The method reached the minimum of the objective function along a better path, and minimized the drag coefficient faster for different Reynolds number regimes. It was found that GA is a powerful method for such multi-dimensional, multi-modal, and nonlinear objective functions.

  12. Design and Implementation of an On-Chip Patient-Specific Closed-Loop Seizure Onset and Termination Detection System.

    PubMed

    Zhang, Chen; Bin Altaf, Muhammad Awais; Yoo, Jerald

    2016-07-01

    This paper presents the design of an area- and energy-efficient closed-loop machine-learning-based patient-specific seizure onset and termination detection algorithm, and its on-chip hardware implementation. Application- and scenario-based tradeoffs are compared and reviewed for the seizure detection and suppression algorithm and system, which comprises electroencephalography (EEG) data acquisition, feature extraction, classification, and stimulation. The support vector machine achieves a good tradeoff among power, area, patient specificity, latency, and classification accuracy for long-term monitoring of patients with limited training seizure patterns. Design challenges of EEG data acquisition in a multichannel wearable environment for a patch-type sensor are also discussed in detail. A dual-detector architecture incorporates two area-efficient linear support vector machine classifiers along with a weight-and-average algorithm to target high sensitivity and good specificity at once. On-chip implementation issues for patient-specific transcranial electrical stimulation are also discussed. The system design is verified using the CHB-MIT EEG database [1] with comprehensive measurement criteria, achieving high sensitivity and specificity of 95.1% and 96.2%, respectively, with a small latency of 1 s. It also achieves seizure onset and termination detection delays of 2.98 and 3.82 s, respectively, with a seizure length estimation error of 4.07 s.
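The dual-detector idea, two area-efficient linear SVMs whose decision values are weight-and-averaged, can be sketched as follows. The weights, biases, and fusion coefficient are illustrative placeholders, not the chip's trained parameters.

```python
def linear_svm_score(w, b, x):
    """Decision value of a trained linear SVM: w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def dual_detector(x, det_a, det_b, alpha=0.5):
    """Weight-and-average fusion of two linear classifiers (e.g. one tuned
    toward sensitivity, the other toward specificity): flag a seizure when
    the weighted average of the two decision values is positive."""
    (wa, ba), (wb, bb) = det_a, det_b
    score = alpha * linear_svm_score(wa, ba, x) \
        + (1.0 - alpha) * linear_svm_score(wb, bb, x)
    return score > 0.0

# Toy two-feature detectors with made-up weights and biases.
det_sens = ([1.0, 0.0], -0.5)
det_spec = ([0.0, 1.0], -0.5)
```

Linear classifiers keep the per-sample cost to one dot product each, which is what makes this architecture attractive for area- and energy-constrained on-chip use.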

  13. A Computer Environment for Beginners' Learning of Sorting Algorithms: Design and Pilot Evaluation

    ERIC Educational Resources Information Center

    Kordaki, M.; Miatidis, M.; Kapsampelis, G.

    2008-01-01

    This paper presents the design, features and pilot evaluation study of a web-based environment--the SORTING environment--for the learning of sorting algorithms by secondary level education students. The design of this environment is based on modeling methodology, taking into account modern constructivist and social theories of learning while at…

  14. Application of Modified Flower Pollination Algorithm on Mechanical Engineering Design Problem

    NASA Astrophysics Data System (ADS)

    Kok Meng, Ong; Pauline, Ong; Chee Kiong, Sia; Wahab, Hanani Abdul; Jafferi, Noormaziah

    2017-01-01

    The aim of optimization is to obtain the best solution, and thereby achieve the objective of the problem, without evaluating all possible solutions. In this study, an improved flower pollination algorithm, namely the Modified Flower Pollination Algorithm (MFPA), is developed. Comprising elements of chaos theory, frog-leaping local search, and an adaptive inertia weight, the performance of the MFPA is evaluated by optimizing five benchmark mechanical engineering design problems: tubular column design, speed reducer, gear train, tension/compression spring design, and pressure vessel. The obtained results are listed and compared with the results of other state-of-the-art algorithms. The assessment shows that the MFPA gives promising results in finding the optimal design for all considered mechanical engineering problems.
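Two of the MFPA's ingredients lend themselves to a short sketch: a chaotic logistic map, commonly used to spread initial candidates over the search range, and a decreasing inertia weight. The linear schedule below is an illustrative assumption; the paper's exact adaptation rule may differ.

```python
def logistic_map_sequence(x0=0.7, r=4.0, n=10):
    """Chaotic logistic-map sequence on [0, 1], a common way to spread
    initial candidates over the search range instead of uniform sampling."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def adaptive_inertia(w_max, w_min, t, t_max):
    """Linearly decreasing inertia weight: explore early, exploit late."""
    return w_max - (w_max - w_min) * t / t_max
```

At r = 4 the logistic map is fully chaotic, so successive values cover the unit interval without clustering, while the shrinking inertia weight gradually shifts the search from exploration to local refinement.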

  15. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  16. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
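The NDSM clustering step can be sketched with a tiny GA: each chromosome assigns a cluster label to every Kansei adjective, and fitness rewards dense intra-cluster link weights. The density-style fitness and the four-adjective example matrix are illustrative assumptions, not the paper's exact formulation.

```python
import random

def cluster_density(ndsm, labels):
    """Mean NDSM link weight over all same-cluster adjective pairs."""
    n = len(ndsm)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if labels[i] == labels[j]]
    if not pairs:
        return 0.0
    return sum(ndsm[i][j] for i, j in pairs) / len(pairs)

def ga_cluster(ndsm, k=2, pop_size=40, generations=100, seed=7):
    rng = random.Random(seed)
    n = len(ndsm)
    pop = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda lab: cluster_density(ndsm, lab), reverse=True)
        survivors = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                 # one-point crossover
            child[rng.randrange(n)] = rng.randrange(k)  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda lab: cluster_density(ndsm, lab))

# Four-point-scale link weights for four adjectives forming two clear groups.
W = [[0, 3, 0, 0],
     [3, 0, 0, 0],
     [0, 0, 0, 3],
     [0, 0, 3, 0]]
labels = ga_cluster(W)
```

Using mean link weight rather than a raw sum keeps the GA from collapsing everything into a single cluster, since large clusters dilute the density with zero-weight pairs.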

  17. On Polymorphic Circuits and Their Design Using Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper introduces the concept of polymorphic electronics (polytronics), referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead, the change comes from modifications in the characteristics of devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD, respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality; thus polytronics may find uses in intelligence/security applications.

  18. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm.

    PubMed

    Chang, Wei-Der

    2015-01-01

    This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity-updating formula in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with a modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of recursive all-pass digital filters.
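A plain PSO skeleton shows where such a velocity modification plugs in. The abstract does not give the MPSO's exact formula, so the extra factor `k` below is a placeholder damping term, not the published adjustment, and the sphere objective stands in for the filter's phase-error function.

```python
import random

def mpso(objective, dim=2, particles=20, iters=100, seed=3,
         w=0.7, c1=1.5, c2=1.5, k=0.9):
    """PSO skeleton with a placeholder extra factor `k` applied to the
    velocity update; the paper's actual MPSO adjustment differs."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    P = [x[:] for x in X]                 # personal bests
    g = min(P, key=objective)[:]          # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = k * (w * V[i][d]
                               + c1 * r1 * (P[i][d] - X[i][d])
                               + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if objective(X[i]) < objective(P[i]):
                P[i] = X[i][:]
                if objective(P[i]) < objective(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)   # stand-in for a phase-error measure
best = mpso(sphere)
```

In the paper's setting the particle would be the vector of all-pass filter coefficients and the objective a measure of deviation from the desired phase response.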

  19. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    Chang, Wei-Der

    2015-01-01

    This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity-updating formula in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with a modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of recursive all-pass digital filters. PMID:26366168

  20. Co-design of software and hardware to implement remote sensing algorithms

    SciTech Connect

    Theiler, J. P.; Frigo, J.; Gokhale, M.; Szymanski, J. J.

    2001-01-01

    Both for offline searches through large data archives and for onboard computation at the sensor head, there is a growing need for ever-more rapid processing of remote sensing data. For many algorithms of use in remote sensing, the bulk of the processing takes place in an 'inner loop' with a large number of simple operations. For these algorithms, dramatic speedups can often be obtained with specialized hardware. The difficulty and expense of digital design continues to limit applicability of this approach, but the development of new design tools is making this approach more feasible, and some notable successes have been reported. On the other hand, it is often the case that processing can also be accelerated by adopting a more sophisticated algorithm design. Unfortunately, a more sophisticated algorithm is much harder to implement in hardware, so these approaches are often at odds with each other. With careful planning, however, it is sometimes possible to combine software and hardware design in such a way that each complements the other, and the final implementation achieves speedup that would not have been possible with a hardware-only or a software-only solution. We will in particular discuss the co-design of software and hardware to achieve substantial speedup of algorithms for multispectral image segmentation and for endmember identification.

  1. Optimum design of antennas using metamaterials with the efficient global optimization (EGO) algorithm

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Teresa H.; Derov, John S.

    2010-04-01

    EGO is an evolutionary, data-adaptive algorithm which can be useful for optimization problems with expensive cost functions. Many antenna design problems qualify since complex computational electromagnetics (CEM) simulations can take significant resources. This makes evolutionary algorithms such as genetic algorithms (GA) or particle swarm optimization (PSO) problematic since iterations of large populations are required. In this paper we discuss multiparameter optimization of a wideband, single-element antenna over a metamaterial ground plane and the interfacing of EGO (optimization) with a full-wave CEM simulation (cost function evaluation).
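EGO's data-adaptive behaviour comes from choosing the next expensive CEM run by maximizing expected improvement (EI) under a surrogate model. A minimal sketch of the EI criterion, assuming the surrogate supplies a Gaussian posterior mean and standard deviation at each candidate:

```python
import math

def expected_improvement(mu, sigma, best):
    """EI of a candidate whose surrogate posterior is N(mu, sigma^2), given
    the best (lowest) cost evaluated so far; EGO runs the expensive
    simulation only at the EI-maximizing candidate."""
    if sigma <= 0.0:
        return max(0.0, best - mu)
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu) * cdf + sigma * pdf

# Candidate A: good predicted mean, low uncertainty.
# Candidate B: worse predicted mean, high uncertainty.
ei_a = expected_improvement(mu=0.9, sigma=0.1, best=1.0)
ei_b = expected_improvement(mu=1.2, sigma=0.8, best=1.0)
```

Here candidate B scores higher despite its worse mean, which is how EGO balances exploration against exploitation with a handful of cost-function evaluations instead of the large populations a GA or PSO would need.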

  2. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
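A Hockney-style model characterizes a transfer by a startup latency plus a bandwidth term, which is enough to expose when an algorithm is latency-bound. A minimal sketch with illustrative (not measured) parameters:

```python
def transfer_time(n_bytes, latency_s, bandwidth_Bps):
    """Linear communication-cost model: T(n) = latency + n / bandwidth."""
    return latency_s + n_bytes / bandwidth_Bps

def half_performance_length(latency_s, bandwidth_Bps):
    """Message size at which half the asymptotic bandwidth is achieved:
    the point where startup latency equals the transmission time."""
    return latency_s * bandwidth_Bps

# Illustrative, not measured: 1 microsecond latency, 10 GB/s link.
n_half = half_performance_length(1e-6, 10e9)
```

Messages much shorter than `n_half` pay mostly latency, so an algorithm that sends many small messages performs far below the link's nominal bandwidth; this is the kind of tradeoff the cost models in the paper quantify.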

  3. Developing Benthic Class Specific, Chlorophyll-a Retrieving Algorithms for Optically-Shallow Water Using SeaWiFS

    PubMed Central

    Blakey, Tara; Melesse, Assefa; Sukop, Michael C.; Tachiev, Georgio; Whitman, Dean; Miralles-Wilhelm, Fernando

    2016-01-01

    This study evaluated the ability to improve Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) chl-a retrieval from optically shallow coastal waters by applying algorithms specific to the pixels’ benthic class. The form of the Ocean Color (OC) algorithm was assumed for this study. The operational atmospheric correction producing Level 2 SeaWiFS data was retained since the focus of this study was on establishing the benefit from the alternative specification of the bio-optical algorithm. Benthic class was determined through satellite image-based classification methods. Accuracy of the chl-a algorithms evaluated was determined through comparison with coincident in situ measurements of chl-a. The regionally-tuned models that were allowed to vary by benthic class produced more accurate estimates of chl-a than the single, unified regionally-tuned model. Mean absolute percent difference was approximately 70% for the regionally-tuned, benthic class-specific algorithms. Evaluation of the residuals indicated the potential for further improvement to chl-a estimation through finer characterization of benthic environments. Atmospheric correction procedures specialized to coastal environments were recognized as areas for future improvement as these procedures would improve both classification and algorithm tuning. PMID:27775626
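The OC form assumed in the study is a polynomial in the log of a blue-to-green band ratio; class-specific tuning amounts to refitting the polynomial coefficients per benthic class. The coefficients and reflectance values below are illustrative assumptions, not the operational SeaWiFS values or the paper's fitted ones.

```python
import math

def oc_chla(Rrs_blue, Rrs_green, coeffs):
    """OC-style band-ratio retrieval: chl-a = 10 ** P(R), where P is a
    polynomial in R = log10(max blue Rrs / green Rrs) with tuned
    coefficients a0..a4. Tuning `coeffs` per benthic class is the step
    the study evaluates."""
    R = math.log10(max(Rrs_blue) / Rrs_green)
    return 10.0 ** sum(a * R ** i for i, a in enumerate(coeffs))

sand_coeffs = (0.33, -3.0, 2.0, -1.5, -0.5)   # hypothetical class-specific fit
chl = oc_chla((0.004, 0.005, 0.0045), 0.003, sand_coeffs)
```

With a negative first-order coefficient, a larger blue-to-green ratio (clearer water) yields a lower chl-a estimate, matching the qualitative behaviour of band-ratio algorithms.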

  4. An integrated in silico approach to design specific inhibitors targeting human poly(a)-specific ribonuclease.

    PubMed

    Vlachakis, Dimitrios; Pavlopoulou, Athanasia; Tsiliki, Georgia; Komiotis, Dimitri; Stathopoulos, Constantinos; Balatsos, Nikolaos A A; Kossida, Sophia

    2012-01-01

    Poly(A)-specific ribonuclease (PARN) is an exoribonuclease/deadenylase that degrades 3'-end poly(A) tails in almost all eukaryotic organisms. Much of the biochemical and structural information on PARN comes from the human enzyme. However, the existence of PARN all along the eukaryotic evolutionary ladder requires further and thorough investigation. Although the complete structure of the full-length human PARN, as well as several aspects of the catalytic mechanism, still remain elusive, many previous studies indicate that PARN can be used as a potent and promising anti-cancer target. In the present study, we attempt to complement the existing structural information on PARN with in-depth bioinformatics analyses, in order to get a hologram of the molecular evolution of PARN's active site. In an effort to draw an outline that allows specific drug design targeting PARN, a highly specific platform was designed for the development of selective modulators focusing on the unique structural and catalytic features of the enzyme. Extensive phylogenetic analysis based on all the publicly available genomes indicated a broad distribution for PARN across eukaryotic species and revealed structurally important amino acids that could be assigned as potentially strong contributors to the regulation of the catalytic mechanism of PARN. Based on the above, we propose a comprehensive in silico model for PARN's catalytic mechanism and, moreover, we developed a 3D pharmacophore model, which was subsequently used to introduce the DNP-poly(A) amphipathic substrate analog as a potential inhibitor of PARN. Indeed, biochemical analysis revealed that DNP-poly(A) inhibits PARN competitively. Our approach provides an efficient integrated platform for the rational design of pharmacophore models as well as novel modulators of PARN with therapeutic potential.

  5. Field programmable gate array based parallel strapdown algorithm design for strapdown inertial navigation systems.

    PubMed

    Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua

    2011-01-01

    A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out with a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental-angle samples and accelerometer incremental-velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental-angle and accelerometer incremental-velocity samples, improving system accuracy. To implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. The results show that the parallel strapdown algorithm on the FPGA platform greatly decreases the execution time, meeting the real-time and high-precision requirements of the system in a highly dynamic environment, relative to an existing implementation on a DSP platform.
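
    The coning compensation the abstract refers to corrects the attitude increment for the non-commutativity of rotations when the angular-rate vector changes direction between gyro samples. A textbook single-speed sketch of the idea (not the paper's generalized FPGA algorithm): accumulate the incremental angles and add a cross-product correction term.

```python
import numpy as np

def attitude_increment_with_coning(dtheta_samples):
    """Sum gyro incremental angles over an update interval and add the
    coning correction 0.5 * (running sum) x (next increment).
    A textbook single-speed form, illustrative only."""
    total = np.zeros(3)
    coning = np.zeros(3)
    for dth in dtheta_samples:
        coning += 0.5 * np.cross(total, dth)
        total += dth
    return total + coning

# Pure single-axis rotation: all increments are parallel, so every cross
# product vanishes and the coning correction is zero.
samples = [np.array([1e-3, 0.0, 0.0])] * 4
phi = attitude_increment_with_coning(samples)
```

    When the increments are not parallel (true coning motion), the cross-product terms are nonzero and the correction grows with the coning frequency, which is why the compensation's updating rate matters.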

  6. Cloning a neutral protease of Clostridium histolyticum, determining its substrate specificity, and designing a specific substrate.

    PubMed

    Maeda, Hiroshi; Nakagawa, Kanako; Murayama, Kazutaka; Goto, Masafumi; Watanabe, Kimiko; Takeuchi, Michio; Yamagata, Youhei

    2015-12-01

    Islet transplantation is a prospective treatment for restoring normoglycemia in patients with type 1 diabetes. Islet isolation from pancreases by decomposition with proteolytic enzymes is necessary for transplantation. Two collagenases, collagenase class I (ColG) and collagenase class II (ColH), from Clostridium histolyticum have been used for islet isolation. Neutral proteases have been added to the collagenases for human islet isolation. A neutral protease from C. histolyticum (NP) and thermolysin from Bacillus thermoproteolyticus have been used for this purpose. Thermolysin is an extensively studied enzyme, but NP is not well known. We therefore cloned the gene encoding NP and constructed a Bacillus subtilis overexpression strain. The expressed enzyme was purified, and its substrate specificity was examined. We observed that the substrate specificity of NP was higher than that of thermolysin, and that the protein digestion activities of NP, as determined by colorimetric methods, were lower than those of thermolysin. It seems that decomposition using NP does not negatively affect islets during islet preparation from pancreases. Furthermore, we designed a novel substrate that allows the measurement of NP activity specifically in the enzyme mixture for islet preparation and in the culture broth of C. histolyticum. The activity of NP can also be monitored during islet isolation. We hope that the purified enzyme and this specific substrate contribute to the optimization of islet isolation from pancreases and lead to the success of islet transplantation and the improvement of the quality of life (QOL) for diabetic patients.

  7. The GLAS Science Algorithm Software (GSAS) Detailed Design Document Version 6. Volume 16

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document describes the detailed design of the GLAS Science Algorithm Software (GSAS). The GSAS is used to create the ICESat GLAS standard data products, which are distributed by the National Snow and Ice Data Center (NSIDC). The document contains descriptions, flow charts, data flow diagrams, and structure charts for each major component of the GSAS. The purpose of this document is to present the detailed design of the GSAS. It is intended as a reference source to assist the maintenance programmer in making changes that fix or enhance the documented software.

  8. The Design of Flux-Corrected Transport (FCT) Algorithms for Structured Grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. This chapter confines itself to the design of FCT algorithms for structured grids, using a finite volume formalism, for this is the area with which the present author is most familiar. The reader will find excellent material on the design of FCT algorithms for unstructured grids, using both finite volume and finite element formalisms, in the chapters by Professors Löhner, Baum, Kuzmin, Turek, and Möller in the present volume.
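
    The three components can be seen working together in a minimal 1D advection step: an upwind low-order flux, a Lax-Wendroff high-order flux, and a Zalesak-style limiter that weights the antidiffusive fluxes so no new extrema appear. A sketch under those specific choices, not an optimized or non-clipping production limiter:

```python
import numpy as np

def fct_advect(u, nu):
    """One periodic step of 1D constant-speed advection (Courant number
    0 < nu < 1) using a minimal Zalesak-style FCT limiter."""
    up1 = np.roll(u, -1)                                # u[i+1]
    # Fluxes at interface i+1/2 (stored at index i), pre-scaled by dt/dx:
    f_low = nu * u                                      # upwind (monotone)
    f_high = nu * (u + 0.5 * (1.0 - nu) * (up1 - u))    # Lax-Wendroff
    a = f_high - f_low                                  # antidiffusive fluxes
    # Low-order transported solution:
    u_td = u - (f_low - np.roll(f_low, 1))
    # Local bounds from old and transported solutions and their neighbours:
    stack = [np.roll(u, 1), u, up1, np.roll(u_td, 1), u_td, np.roll(u_td, -1)]
    u_max, u_min = np.maximum.reduce(stack), np.minimum.reduce(stack)
    a_in = np.maximum(np.roll(a, 1), 0.0) - np.minimum(a, 0.0)
    a_out = np.maximum(a, 0.0) - np.minimum(np.roll(a, 1), 0.0)
    eps = 1e-15
    r_plus = np.minimum(1.0, (u_max - u_td) / (a_in + eps))
    r_minus = np.minimum(1.0, (u_td - u_min) / (a_out + eps))
    # Limiter weight at interface i+1/2:
    c = np.where(a >= 0.0,
                 np.minimum(np.roll(r_plus, -1), r_minus),
                 np.minimum(r_plus, np.roll(r_minus, -1)))
    a_lim = c * a
    return u_td - (a_lim - np.roll(a_lim, 1))

# A square pulse stays within [0, 1]: the limiter creates no new extrema.
u0 = np.zeros(64)
u0[20:30] = 1.0
u = u0.copy()
for _ in range(50):
    u = fct_advect(u, 0.5)
```

    Because the update is in flux form, the total of u is conserved exactly, while the limiter keeps every cell inside the local bounds built from the old and transported solutions.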

  9. A drug-specific nanocarrier design for efficient anticancer therapy

    PubMed Central

    Shi, Changying; Guo, Dandan; Xiao, Kai; Wang, Xu; Wang, Lili; Luo, Juntao

    2015-01-01

    The drug-loading properties of nanocarriers depend on the chemical structures and properties of their building blocks. Here, we customize telodendrimers (linear-dendritic copolymer) to design a nanocarrier with improved in vivo drug delivery characteristics. We do a virtual screen of a library of small molecules to identify the optimal building blocks for precise telodendrimer synthesis using peptide chemistry. With rationally designed telodendrimer architectures, we then optimize the drug binding affinity of a nanocarrier by introducing an optimal drug-binding molecule (DBM) without sacrificing the stability of the nanocarrier. To validate the computational predictions, we synthesize a series of nanocarriers and evaluate systematically for doxorubicin delivery. Rhein-containing nanocarriers have sustained drug release, prolonged circulation, increased tolerated dose, reduced toxicity, effective tumor targeting and superior anticancer effects owing to favourable doxorubicin-binding affinity and improved nanoparticle stability. This study demonstrates the feasibility and versatility of the de novo design of telodendrimer nanocarriers for specific drug molecules, which is a promising approach to transform nanocarrier development for drug delivery. PMID:26158623

  10. A drug-specific nanocarrier design for efficient anticancer therapy

    NASA Astrophysics Data System (ADS)

    Shi, Changying; Guo, Dandan; Xiao, Kai; Wang, Xu; Wang, Lili; Luo, Juntao

    2015-07-01

    The drug-loading properties of nanocarriers depend on the chemical structures and properties of their building blocks. Here we customize telodendrimers (linear dendritic copolymer) to design a nanocarrier with improved in vivo drug delivery characteristics. We do a virtual screen of a library of small molecules to identify the optimal building blocks for precise telodendrimer synthesis using peptide chemistry. With rationally designed telodendrimer architectures, we then optimize the drug-binding affinity of a nanocarrier by introducing an optimal drug-binding molecule (DBM) without sacrificing the stability of the nanocarrier. To validate the computational predictions, we synthesize a series of nanocarriers and evaluate systematically for doxorubicin delivery. Rhein-containing nanocarriers have sustained drug release, prolonged circulation, increased tolerated dose, reduced toxicity, effective tumour targeting and superior anticancer effects owing to favourable doxorubicin-binding affinity and improved nanoparticle stability. This study demonstrates the feasibility and versatility of the de novo design of telodendrimer nanocarriers for specific drug molecules, which is a promising approach to transform nanocarrier development for drug delivery.

  11. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
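
    The classic instance of ABFT is checksum-augmented matrix multiplication (the Huang-Abraham scheme): a column-checksum row is appended to one operand and a row-checksum column to the other, and the product then carries invariants that expose a transient error. A minimal sketch of the idea:

```python
import numpy as np

def with_checksums(A, B):
    """ABFT matrix multiply: append a column-checksum row to A and a
    row-checksum column to B; the augmented product carries checksums
    that let a single erroneous element be detected and located."""
    Ac = np.vstack([A, A.sum(axis=0)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    return Ac @ Br

def checks_pass(C_full):
    """Verify both checksum invariants of the augmented product."""
    row_ok = np.allclose(C_full[:-1, :].sum(axis=0), C_full[-1, :])
    col_ok = np.allclose(C_full[:, :-1].sum(axis=1), C_full[:, -1])
    return row_ok and col_ok

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 5)), rng.normal(size=(5, 3))
C_full = with_checksums(A, B)
ok_before = checks_pass(C_full)

C_full[1, 2] += 10.0          # inject a transient fault
ok_after = checks_pass(C_full)
```

    A single faulty element breaks exactly one row check and one column check, so their intersection locates (and can correct) the error; the matrix-based model in the abstract generalizes this kind of construction.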

  12. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

    PubMed Central

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-01-01

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
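
    The baseline being accelerated is fuzzy K-means (fuzzy c-means), which alternates membership and centroid updates. A minimal unaccelerated sketch (the initialization and parameters are illustrative; the paper's variants cut iterations and use fast nearest-neighbour search):

```python
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, iters=25):
    """Baseline fuzzy K-means for codebook design: alternate fuzzy
    membership updates and weighted centroid updates."""
    centers = X[:: max(1, len(X) // k)][:k].astype(float)  # simple spread init
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        U = (1.0 / d2) ** (1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)        # memberships, rows sum to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U

# Two well-separated blobs; the two codebook vectors land near the blob means.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
centers, U = fuzzy_kmeans(X, k=2)
```

    The dominant cost per iteration is the distance matrix between all vectors and all codewords, which is why efficient nearest-neighbour search is one of the acceleration levers the paper exploits.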

  13. Design and experimental evaluation of flexible manipulator control algorithms

    SciTech Connect

    Kwon, D.S.; Hwang, D.H.; Babcock, S.M.; Kress, R.L.; Lew, J.Y.; Evans, M.S.

    1995-04-01

    Within the Environmental Restoration and Waste Management Program of the US Department of Energy, the remediation of single-shell radioactive waste storage tanks is one of the areas that challenge state-of-the-art equipment and methods. The use of long-reach manipulators is being seriously considered for this task. Because of high payload capacity and high length-to-cross-section ratio requirements, these long-reach manipulator systems are expected to use hydraulic actuators and to exhibit significant structural flexibility. The controller has been designed to compensate for the hydraulic actuator dynamics by using a load-compensated velocity feedforward loop and to increase the bandwidth by using an inner pressure feedback loop. Shaping filter techniques have been applied as feedforward controllers to avoid structural vibrations during operation. Various types of shaping filter methods have been investigated. Among them, a new approach, referred to as a "feedforward simulation filter," that uses embedded simulation has been presented.
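
    Among the classic shaping-filter families investigated, the two-impulse Zero-Vibration (ZV) shaper is the simplest: convolving the command with two timed impulses cancels the residual vibration of one flexible mode. A sketch of that standard design (the paper's "feedforward simulation filter" is a different, embedded-simulation approach):

```python
import math

def zv_shaper(omega_n, zeta):
    """Two-impulse Zero-Vibration input shaper for a mode with natural
    frequency omega_n (rad/s) and damping ratio zeta. Returns impulse
    amplitudes (summing to 1) and times."""
    omega_d = omega_n * math.sqrt(1.0 - zeta ** 2)        # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    amps = [1.0 / (1.0 + K), K / (1.0 + K)]
    times = [0.0, math.pi / omega_d]                      # half damped period
    return amps, times

# A 1 Hz mode with 5% damping: second impulse lands ~0.5 s after the first.
amps, times = zv_shaper(omega_n=2.0 * math.pi, zeta=0.05)
```

    The amplitudes sum to one so the shaped command reaches the same setpoint, and the second impulse, placed half a damped period later, excites vibration exactly out of phase with the first.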

  14. Evolutionary algorithm for the neutrino factory front end design

    SciTech Connect

    Poklonskiy, Alexey A.; Neuffer, David; /Fermilab

    2009-01-01

    The Neutrino Factory is an important tool in the long-term neutrino physics program. Substantial international effort is being put into designing this facility in order to achieve the desired performance within the allotted budget. This accelerator is a secondary-beam machine: neutrinos are produced by the decay of muons. Muons, in turn, are produced by the decay of pions, which are created when a beam of accelerated protons strikes a target. Due to the physics of this process, the pion beam coming from the target needs extra conditioning in order to be accelerated effectively afterwards. The subsystem of the Neutrino Factory that performs this conditioning is called the Front End; its main performance characteristic is the number of muons produced.

  15. Nuclear Electric Vehicle Optimization Toolset (NEVOT): Integrated System Design Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg

    2003-01-01

    The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
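
    The GA mechanics the abstract describes (fitness-based survival, elimination of low-fitness designs, combination and mutation of survivors) can be sketched generically. Everything below is illustrative: the encoding and toy fitness stand in for NEVOT's actual subsystem models and mission fitness metrics.

```python
import random

def ga_minimal(fitness, init_pop, generations=100, p_mut=0.1, seed=0):
    """Minimal genetic algorithm: keep the fittest half each generation
    (low-fitness designs are eliminated), refill the population with
    crossed-over, mutated children of the survivors."""
    rng = random.Random(seed)
    pop = [list(ind) for ind in init_pop]
    n = len(pop)
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: n // 2]
        children = []
        while len(children) < n - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]                  # one-point crossover
            child = [g + rng.gauss(0.0, 1.0) if rng.random() < p_mut else g
                     for g in child]                   # gene-wise mutation
            children.append(child)
        pop = parents + children                       # survivors persist
    return max(pop, key=fitness)

# Toy fitness: maximize -(x-3)^2 - (y+1)^2 over two-gene "designs".
f = lambda ind: -(ind[0] - 3.0) ** 2 - (ind[1] + 1.0) ** 2
rng0 = random.Random(42)
pop0 = [[rng0.uniform(-10, 10), rng0.uniform(-10, 10)] for _ in range(30)]
best = ga_minimal(f, pop0)
```

    Because the fittest half survives unchanged, the best fitness never decreases across generations, mirroring the survival-probability mechanism described in the abstract.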

  16. EMILiO: a fast algorithm for genome-scale strain design.

    PubMed

    Yang, Laurence; Cluett, William R; Mahadevan, Radhakrishnan

    2011-05-01

    Systems-level design of cell metabolism is becoming increasingly important for renewable production of fuels, chemicals, and drugs. Computational models are improving in the accuracy and scope of predictions, but are also growing in complexity. Consequently, efficient and scalable algorithms are increasingly important for strain design. Previous algorithms helped to consolidate the utility of computational modeling in this field. To meet intensifying demands for high-performance strains, both the number and variety of genetic manipulations involved in strain construction are increasing. Existing algorithms have experienced combinatorial increases in computational complexity when applied toward the design of such complex strains. Here, we present EMILiO, a new algorithm that increases the scope of strain design to include reactions with individually optimized fluxes. Unlike existing approaches that would experience an explosion in complexity to solve this problem, we efficiently generated numerous alternate strain designs producing succinate, l-glutamate and l-serine. This was enabled by successive linear programming, a technique new to the area of computational strain design.

  17. NFLUX PRE: Validation of New Specific Humidity, Surface Air Temperature, and Wind Speed Algorithms for Ascending/Descending Directions and Clear or Cloudy Conditions

    DTIC Science & Technology

    2015-06-18

    Validation of new specific humidity, surface air temperature, and wind speed satellite retrieval algorithms for ascending/descending directions and clear or cloudy conditions. In addition to data from the Special Sensor Microwave Imager/Sounder (SSMIS) and the Advanced Microwave Sounding

  18. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  19. An overview of field specific designs of microbial EOR

    SciTech Connect

    Robertson, E.P.; Bala, G.A.; Fox, S.L.; Jackson, J.D.; Thomas, C.P.

    1995-12-01

    The selection and design of a microbial enhanced oil recovery (MEOR) process for application in a specific field involves geological, reservoir, and biological characterization. Microbially mediated oil recovery mechanisms (biogenic gas, biopolymers, and biosurfactants) are defined by the types of microorganisms used. The engineering and biological character of a given reservoir must be understood to correctly select a microbial system to enhance oil recovery. The objective of this paper is to discuss the methods used to evaluate three fields with distinct characteristics and production problems for the applicability of MEOR technology. Reservoir characteristics and laboratory results indicated that MEOR would not be applicable in two of the three fields considered. The development of a microbial oil recovery process for the third field appeared promising. Development of a bacterial consortium capable of producing the desired metabolites was initiated and field isolates were characterized.

  20. Novel Designs for Application Specific MEMS Pressure Sensors

    PubMed Central

    Fragiacomo, Giulio; Reck, Kasper; Lorenzen, Lasse; Thomsen, Erik V.

    2010-01-01

    In the framework of developing innovative microfabricated pressure sensors, we present three designs based on different readout principles, each tailored for a specific application. A touch-mode capacitive pressure sensor with high sensitivity (14 pF/bar), low temperature dependence and a high capacitive output signal (more than 100 pF) is described. An optical pressure sensor intrinsically immune to electromagnetic interference, with a large pressure range (0–350 bar) and a sensitivity of 1 pm/bar, is presented. Finally, a resonating wireless pressure sensor that requires no power source, with a sensitivity of 650 kHz/mmHg, is described. These sensors are discussed in relation to their respective applications in harsh environments, distributed systems and medical settings. In many respects, commercially available sensors, the vast majority of which are piezoresistive, are not suited to the applications proposed. PMID:22163425

  1. An overview of field-specific designs of microbial EOR

    SciTech Connect

    Robertson, E.P.; Bala, G.A.; Fox, S.L.; Jackson, J.D.; Thomas, C.P.

    1995-12-31

    The selection and design of an MEOR process for application in a specific field involves geological, reservoir, and biological characterization. Microbially mediated oil recovery mechanisms (biogenic gas, biopolymers, and biosurfactants) are defined by the types of microorganisms used. The engineering and biological character of a given reservoir must be understood to correctly select a microbial system to enhance oil recovery. This paper discusses the methods used to evaluate three fields with distinct characteristics and production problems for the applicability of MEOR technology. Reservoir characteristics and laboratory results indicated that MEOR would not be applicable in two of the three fields considered. The development of a microbial oil recovery process for the third field appeared promising. Development of a bacterial consortium capable of producing the desired metabolites was initiated, and field isolates were characterized.

  2. Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft

    PubMed Central

    Ning, Xin; Yuan, Jianping; Yue, Xiaokui

    2016-01-01

    A fractionated spacecraft is an innovative application of a distributed space system. To fully understand the impact of various uncertainties on its development, launch and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability and economy of the different ways of dividing the modules of the different configurations of fractionated spacecraft. We systematically describe the concept, analyze the evaluation and optimal design methods of recent years, and propose the stochastic mission-cycle cost for comprehensive evaluation. We also establish models of the costs, such as module development, launch and deployment, and of the impacts of their respective uncertainties. Finally, we carry out Monte Carlo simulations of the complete mission-cycle costs of various configurations of fractionated spacecraft under various uncertainties, and give and compare the probability density distributions and statistical characteristics of the stochastic mission-cycle cost under the two strategies of timed and untimed module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that it can comprehensively evaluate the adaptability of a fractionated spacecraft under different technical and mission conditions. PMID:26964755
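
    A Monte Carlo evaluation of a stochastic mission-cycle cost can be sketched generically: draw each module's development, launch and failure-driven replacement costs from assumed distributions and aggregate over many runs. All distributions and parameters below are illustrative placeholders, not the paper's models:

```python
import random

def mission_cycle_cost(n_modules, n_runs=20000, seed=0):
    """Monte Carlo sketch of a stochastic mission-cycle cost: per module,
    a lognormal development cost, a Gaussian launch cost, and a 10%
    chance of an in-orbit failure that triggers a replacement launch.
    Illustrative distributions only."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_runs):
        total = 0.0
        for _ in range(n_modules):
            total += rng.lognormvariate(3.0, 0.3)   # development cost
            total += rng.gauss(10.0, 1.0)           # launch cost
            if rng.random() < 0.1:                  # in-orbit failure
                total += rng.gauss(12.0, 2.0)       # replacement launch
        costs.append(total)
    return sum(costs) / n_runs, costs

mean4, samples = mission_cycle_cost(n_modules=4)
```

    From the `costs` samples one can also estimate the probability density and tail statistics that the paper compares across configurations and replacement strategies.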

  3. Directional design of optical lens based on metallic nano-slits by Yang-Gu algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Qiaofen; Zhang, Yan

    2010-11-01

    Optical lenses based on metallic nano-slits that focus light in different styles are designed directionally with the Yang-Gu (YG) algorithm. Both the relative phase and the amplitude of the emitted light, scattered by surface plasmons in a single subwavelength slit and modulated by the width of the slit or the thickness of the lens, are considered in the design process. A form of the YG algorithm that accounts for both phase and amplitude changes is derived. Two kinds of nanolenses are designed with this numerical method: one with a single focal spot, and another with two focal spots in one focal plane. Finite-difference time-domain (FDTD) calculations show that the behavior of the designed lenses agrees well with the preassigned goals. This method may be useful for designing subwavelength optical devices that can be integrated with other optical and optoelectronic elements.

  4. DITDOS: A set of design specifications for distributed data inventories

    NASA Technical Reports Server (NTRS)

    King, T. A.; Walker, R. J.; Joy, S. P.

    1995-01-01

    The analysis of space science data often requires researchers to work with many different types of data. For instance, correlative analysis can require data from multiple instruments on a single spacecraft, multiple spacecraft, and ground-based data. Typically, data from each source are available in a different format and have been written on a different type of computer, so much effort must be spent reading the data and converting them to the computer and format that the researchers use in their analysis. The large and ever-growing amount of data and the large investment by the scientific community in software that requires a specific data format make using standard data formats impractical. A format-independent approach to accessing and analyzing disparate data is key to being able to deliver data to a diverse community in a timely fashion. The system in use at the Planetary Plasma Interactions (PPI) node of the NASA Planetary Data System (PDS) is based on the object-oriented Distributed Inventory Tracking and Data Ordering Specification (DITDOS), which describes data inventories in a storage-independent way. The specifications have been designed to make it possible to build DITDOS-compliant inventories that can exist on portable media such as CD-ROMs. The portable media can be moved within a system, or from system to system, and still be used without modification. Several applications have been developed to work with DITDOS-compliant data holdings. One is a windows-based client/server application, which helps guide the user in the selection of data. A user can select a data base, then a data set, then a specific data file, and then either order the data and receive it immediately if it is online or request that it be brought online if it is not. A user can also view data by any of the supported methods. DITDOS makes it possible to use already existing applications for data-specific actions, and this is done whenever possible. Another application is a stand

  5. Should dialysis modalities be designed to remove specific uremic toxins?

    PubMed

    Baurmeister, Ulrich; Vienken, Joerg; Ward, Richard A

    2009-01-01

    The definition of optimal dialysis therapy remains elusive. Randomized clinical trials have neither supported using urea as a surrogate marker for uremic toxicity nor provided clear-cut evidence in favor of larger solutes. Thus, where to focus resources in the development of new membranes and therapies remains unclear. Three basic questions remain unanswered: (i) what solute(s) should be used as a marker for optimal dialysis; (ii) should dialytic therapies be designed to remove a specific solute; and (iii) how can current therapies be modified to provide better control of uremic toxicity? Identification of a single, well-defined uremic toxin appears to be unlikely as new analytical tools reveal an increasingly complex uremic milieu. As a result, it is probable that membranes and therapies should be designed for the nonspecific removal of a wide variety of solutes retained in uremia. Removal of the widest range of solutes can best be achieved using existing therapies that incorporate convection in conjunction with longer treatment times and more frequent treatments. Membranes capable of removing solutes over an expanded effective molecular size range can already be fabricated; however, their use will require novel approaches to conserve proteins, such as albumin.

  6. Origins Space Telescope: Telescope Design and Instrument Specifications

    NASA Astrophysics Data System (ADS)

    Meixner, Margaret; Carter, Ruth; Leisawitz, David; Dipirro, Mike; Flores, Anel; Staguhn, Johannes; Kellog, James; Roellig, Thomas L.; Melnick, Gary J.; Bradford, Charles; Wright, Edward L.; Zmuidzinas, Jonas; Origins Space Telescope Study Team

    2017-01-01

    The Origins Space Telescope (OST) is the mission concept for the Far-Infrared Surveyor, one of the four science and technology definition studies of NASA Headquarters for the 2020 Astronomy and Astrophysics Decadal survey. The renaming of the mission reflects Origins science goals that will discover and characterize the most distant galaxies, nearby galaxies and the Milky Way, exoplanets, and the outer reaches of our Solar system. This poster will show the preliminary telescope design that will be a large aperture (>8 m in diameter), cryogenically cooled telescope. We will also present the specifications for the spectrographs and imagers over a potential wavelength range of ~10 microns to 1 millimeter. We look forward to community input into this mission definition over the coming year as we work on the concept design for the mission. Origins will enable flagship-quality general observing programs led by the astronomical community in the 2030s. We welcome you to contact the Science and Technology Definition Team (STDT) with your science needs and ideas by emailing us at firsurveyor_info@lists.ipac.caltech.edu.

  7. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic

    PubMed Central

    Li, Ning; Martínez, José-Fernán; Díaz, Vicente Hernández

    2015-01-01

    Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters’ dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion. To obtain a balanced solution, a parameter whose dispersion is large is given a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle the multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt effectively to dynamic changes in the network conditions and topology. PMID:26266412
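
    The dispersion-based dynamic weighting can be sketched directly: each cross-layer parameter's weight is inversely related to the dispersion (here, variance) of its values across candidate relays. The normalization below is an illustrative choice, not necessarily BCFL's exact rule:

```python
def dispersion_weights(params):
    """Give each cross-layer parameter a weight inversely related to the
    dispersion of its values, so widely scattered parameters count less;
    weights are normalized to sum to 1."""
    def dispersion(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    inv = {name: 1.0 / (dispersion(vals) + 1e-9)
           for name, vals in params.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}

# Residual energy varies little across candidate relays, hop delay a lot,
# so energy dominates the weighted decision:
w = dispersion_weights({
    "energy": [0.80, 0.82, 0.81, 0.79],
    "delay": [5.0, 40.0, 12.0, 33.0],
})
```

    In the full algorithm these weights feed the fuzzy inference system that scores each candidate next-hop relay.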

  8. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.

    PubMed

    Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente

    2015-08-10

    Recently, cross-layer design of wireless sensor network communication protocols has become increasingly important and popular. Considering the disadvantages of traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion: to obtain a balanced solution, a parameter with large dispersion receives a small weight, and vice versa. To compare it with traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and achieves the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt effectively to dynamic changes in network conditions and topology.

  9. Application-specific coarse-grained reconfigurable array: architecture and design methodology

    NASA Astrophysics Data System (ADS)

    Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu

    2015-06-01

    Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. Numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploiting different levels of parallelism. However, an efficiency gap remains between the CGRA and the application-specific integrated circuit (ASIC). Some application domains, such as software-defined radios (SDRs), require flexibility even as performance demands increase, so more effective CGRA architectures are needed. Customising a CGRA for its target application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented, along with a mapping algorithm based on ant colony optimisation. Experimental results on the SDR target domain show that, compared with other ordinary and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for the given applications.

  10. Validation of space/ground antenna control algorithms using a computer-aided design tool

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1995-01-01

    The validation of the algorithms for controlling the space-to-ground antenna subsystem for Space Station Alpha is an important step in assuring reliable communications. These algorithms have been developed and tested using a simulation environment based on a computer-aided design tool that can provide a time-based execution framework with variable environmental parameters. Our work this summer has involved the exploration of this environment and the documentation of the procedures used to validate these algorithms. We have installed a variety of tools in a laboratory of the Tracking and Communications division for reproducing the simulation experiments carried out on these algorithms to verify that they do meet their requirements for controlling the antenna systems. In this report, we describe the processes used in these simulations and our work in validating the tests used.

  11. The design and results of an algorithm for intelligent ground vehicles

    NASA Astrophysics Data System (ADS)

    Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.

    2010-01-01

    This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team is composed of undergraduate computer science, engineering technology, and marketing students, and one robotics faculty advisor. The team has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.

  12. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements, and an important element named the pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most existing RNA sequence design algorithms ignore this important structural element, which limits their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm, comparing the quality characteristics of the RNA sequences designed by Enzymer with the results obtained from the state-of-the-art tools MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  13. Genetic algorithms in conceptual design of a light-weight, low-noise, tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Wells, Valana L.

    1996-01-01

    This report outlines research accomplishments in the area of using genetic algorithms (GA) for the design and optimization of rotorcraft. It discusses the genetic algorithm as a search and optimization tool, outlines a procedure for using the GA in the conceptual design of helicopters, and applies the GA method to the acoustic design of rotors.

  14. [In Silico Drug Design Using an Evolutionary Algorithm and Compound Database].

    PubMed

    Kawai, Kentaro; Takahashi, Yoshimasa

    2016-01-01

    Computational drug design plays an important role in the discovery of new drugs. Recently, we proposed an algorithm for designing new drug-like molecules utilizing the structure of a known active molecule. To design molecules, three types of fragments (ring, linker, and side-chain fragments) were defined as building blocks, and a fragment library was prepared from molecules listed in G protein-coupled receptor (GPCR)-SARfari database. An evolutionary algorithm which executes evolutionary operations, such as crossover, mutation, and selection, was implemented to evolve the molecules. As a case study, some GPCRs were selected for computational experiments in which we tried to design ligands from simple seed fragments using the Tanimoto coefficient as a fitness function. The results showed that the algorithm could be used successfully to design new molecules with structural similarity, scaffold variety, and chemical validity. In addition, a docking study revealed that these designed molecules also exhibited shape complementarity with the binding site of the target protein. Therefore, this is expected to become a powerful tool for designing new drug-like molecules in drug discovery projects.
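The Tanimoto coefficient used above as the fitness function has a compact definition on binary fingerprints. A minimal sketch follows; the fingerprints below are invented for illustration, whereas a real system would derive them from molecular structure:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient T = |A ∩ B| / |A ∪ B| between two binary
    fingerprints, represented as sets of 'on' bit positions."""
    a, b = set(fp_a), set(fp_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# known active molecule vs. a candidate produced by the evolutionary loop
active    = {1, 4, 7, 9, 12}
candidate = {1, 4, 7, 13}
fitness = tanimoto(active, candidate)  # 3 shared bits / 6 distinct bits = 0.5
```

A candidate scoring close to 1 is structurally similar to the seed active molecule; the evolutionary operations (crossover, mutation, selection) would then favour such candidates.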

  15. Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven

    2010-05-01

    Population growth and the progressive extension of urbanization across Iran cause an increasing demand for primary needs, of which water is the most vital. Supplying it requires the design and construction of water distribution networks, which impose enormous costs on the country's budget. Any reduction in these costs broadens society's access to water at least cost, so municipal investments need to maximize benefits or minimize expenditures. Achieving this makes the engineering design dependent on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of Mahabad City's (northwest Iran) water distribution network. By designing two models and comparing the resulting costs, the abilities of the GA were determined: the GA-based model could find optimum pipe diameters that reduce the design costs of the network. Results show that water distribution network design using a genetic algorithm could reduce project costs by at least 7% in comparison to the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.

  16. A Survey of Mathematical Optimization Models and Algorithms for Designing and Extending Irrigation and Wastewater Networks

    NASA Astrophysics Data System (ADS)

    Mandl, Christoph E.

    1981-08-01

    This paper presents a state-of-the-art survey of network models and algorithms that can be used as planning tools in irrigation and wastewater systems. It is shown that the problem of designing or extending such systems basically leads to the same type of mathematical optimization model. The difficulty in solving this model lies mainly in the properties of the objective function. Trying to minimize construction and/or operating costs of a system typically results in a concave cost (objective) function due to economies of scale. A number of ways to attack such models are discussed and compared, including linear programming, integer programming, and specially designed exact and heuristic algorithms. The usefulness of each approach is evaluated in terms of the validity of the model, the computational complexity of the algorithm, the properties of the solution, the availability of software, and the capability for sensitivity analysis.

  17. Iterative Fourier transform algorithm with regularization for the optimal design of diffractive optical elements.

    PubMed

    Kim, Hwi; Yang, Byungchoon; Lee, Byoungho

    2004-12-01

    There is a trade-off between uniformity and diffraction efficiency in the design of diffractive optical elements. It is caused by the inherent ill-posedness of the design problem itself. For the optimal design, the optimum trade-off needs to be obtained. The trade-off between uniformity and diffraction efficiency in the design of diffractive optical elements is theoretically investigated based on the Tikhonov regularization theory. A novel scheme of an iterative Fourier transform algorithm with regularization to obtain the optimum trade-off is proposed.
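The basic iterative Fourier transform loop for a phase-only element alternates between the element plane and the output plane, enforcing a constraint in each. The sketch below uses a simple amplitude-relaxation parameter as a stand-in for the paper's Tikhonov regularization; the target pattern, iteration count, and relaxation scheme are all illustrative assumptions:

```python
import numpy as np

def ifta_phase_only(target_amp, iterations=200, beta=0.9):
    """Iterative Fourier transform algorithm for a phase-only
    diffractive element.  beta=1 enforces the target amplitude
    exactly (best uniformity); beta<1 relaxes the constraint,
    trading uniformity for diffraction efficiency."""
    rng = np.random.default_rng(0)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(iterations):
        out = np.fft.fft2(field)
        # relax the amplitude constraint in the output plane
        amp = beta * target_amp + (1 - beta) * np.abs(out)
        out = amp * np.exp(1j * np.angle(out))
        back = np.fft.ifft2(out)
        # phase-only constraint in the element plane
        field = np.exp(1j * np.angle(back))
    return np.angle(field)

# toy 16x16 target: a bright square in a dark field (invented)
target = np.zeros((16, 16))
target[4:12, 4:12] = 1.0
phase = ifta_phase_only(target)
```

Sweeping beta and recording the resulting uniformity and efficiency traces out the trade-off curve the paper analyses; the regularized formulation selects the optimum point on it.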

  18. Design of binary diffractive microlenses with subwavelength structures using the genetic algorithm.

    PubMed

    Shirakawa, Tatsuya; Ishikawa, Kenichi L; Suzuki, Shuichi; Yamada, Yasufumi; Takahashi, Hiroyuki

    2010-04-12

    We present a method to design binary diffractive microlenses with subwavelength structures, based on the finite-difference time-domain method and the genetic algorithm, also accounting for limitations on feature size and aspect ratio imposed by fabrication. The focusing efficiency of the microlens designed by this method is close to that of the convex lens and much higher than that of the binary Fresnel lens designed by a previous method. Although the optimized structure qualitatively resembles a binary Fresnel lens, it is hard to derive quantitatively from the convex Fresnel lens directly. The design of a microlens with reduced chromatic aberration is also presented.

  19. Multidisciplinary Design, Analysis, and Optimization Tool Development using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2008-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center to automate the analysis and design process by leveraging existing tools such as NASTRAN, ZAERO and CFD codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This paper describes current approaches, recent results, and challenges for MDAO as demonstrated by our experience with the Ikhana fire pod design.

  20. Application of an evolutionary algorithm in the optimal design of micro-sensor.

    PubMed

    Lu, Qibing; Wang, Pan; Guo, Sihai; Sheng, Buyun; Liu, Xingxing; Fan, Zhun

    2015-01-01

    This paper introduces an automatic bond graph design method based on genetic programming for the evolutionary design of a micro-resonator. First, the system-level behavioral model, which is based on genetic programming and bond graphs, is discussed. Then, the geometry parameters of the components are automatically optimized using a genetic algorithm with constraints. To illustrate this approach, a typical device, a micro-resonator, is designed as a biomedical example. This paper provides a new idea for the automatic optimal design of biomedical sensors by evolutionary computation.

  1. The Impact of Critical Thinking and Logico-Mathematical Intelligence on Algorithmic Design Skills

    ERIC Educational Resources Information Center

    Korkmaz, Ozgen

    2012-01-01

    The present study aims to reveal the impact of students' critical thinking and logico-mathematical intelligence levels on their algorithm design skills. This research was a descriptive study carried out by survey methods. The sample consisted of 45 first-year educational faculty undergraduate students. The data was collected by…

  2. A hybrid of genetic algorithm and particle swarm optimization for recurrent network design.

    PubMed

    Juang, Chia-Feng

    2004-04-01

    An evolutionary recurrent network which automates the design of recurrent neural/fuzzy networks using a new evolutionary learning algorithm is proposed in this paper. This new evolutionary learning algorithm is based on a hybrid of genetic algorithm (GA) and particle swarm optimization (PSO), and is thus called HGAPSO. In HGAPSO, individuals in a new generation are created not only by crossover and mutation operations as in GA, but also by PSO. The concept of elite strategy is adopted in HGAPSO, where the upper half of the best-performing individuals in a population are regarded as elites. However, instead of being reproduced directly to the next generation, these elites are first enhanced. The group constituted by the elites is regarded as a swarm, and each elite corresponds to a particle within it. In this regard, the elites are enhanced by PSO, an operation which mimics the maturing phenomenon in nature. These enhanced elites constitute half of the population in the new generation, whereas the other half is generated by performing crossover and mutation operations on these enhanced elites. HGAPSO is applied to recurrent neural/fuzzy network design as follows. For recurrent neural network design, a fully connected recurrent neural network is designed and applied to a temporal sequence production problem. For recurrent fuzzy network design, a Takagi-Sugeno-Kang-type recurrent fuzzy network is designed and applied to dynamic plant control. The performance of HGAPSO is compared to both GA and PSO in these recurrent network design problems, demonstrating its superiority.
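The generation structure described above (elites enhanced by a PSO step, the other half bred from those enhanced elites) can be sketched on a toy real-valued minimization problem. This is a simplified stand-in, not the paper's network encoding: the PSO step here uses only the global best and the current leader rather than full personal bests, and all parameter values are illustrative:

```python
import random

def hgapso(fitness, dim=5, pop_size=20, generations=50,
           w=0.7, c1=1.4, c2=1.4, sigma=0.1):
    """HGAPSO-style loop: each generation, the better half of the
    population ('elites') takes one PSO velocity step, then the other
    half is rebuilt by crossover + Gaussian mutation of those
    enhanced elites."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    vel = [[0.0] * dim for _ in range(pop_size)]
    best = min(pop, key=fitness)[:]
    for _ in range(generations):
        order = sorted(range(pop_size), key=lambda i: fitness(pop[i]))
        pop = [pop[i] for i in order]
        vel = [vel[i] for i in order]
        elites, evel = pop[: pop_size // 2], vel[: pop_size // 2]
        for x, v in zip(elites, evel):            # PSO enhancement of elites
            for d in range(dim):
                v[d] = (w * v[d]
                        + c1 * random.random() * (best[d] - x[d])
                        + c2 * random.random() * (elites[0][d] - x[d]))
                x[d] += v[d]
            if fitness(x) < fitness(best):
                best = x[:]
        children = []                             # GA half from enhanced elites
        while len(children) < pop_size - len(elites):
            a, b = random.sample(elites, 2)
            cut = random.randrange(1, dim)        # one-point crossover
            children.append([g + random.gauss(0, sigma)
                             for g in a[:cut] + b[cut:]])
        pop = elites + children
        vel = evel + [[0.0] * dim for _ in children]
    return best

sphere = lambda x: sum(g * g for g in x)          # toy objective
solution = hgapso(sphere)
```

The key design choice mirrored here is that the GA half breeds from the PSO-enhanced elites, not from the raw population, so the exploration of crossover and mutation starts from already-matured individuals.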

  3. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    PubMed

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that rely on manual sorting and mechanical automatic grading. To design an optimal algorithm, a regression formula and R² value were investigated by performing a regression analysis of each of total length, body width, thickness, view area, and actual volume against abalone weight. The R² value between the actual volume and abalone weight was 0.999, showing a relatively high correlation. As a result, to easily estimate the actual volumes of abalones based on computer vision, the volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula was derived to estimate the volumes of abalones through linear regression analysis between the calculated and actual volumes. The final automatic abalone grading algorithm is designed using the abalone volume estimation regression formula derived from the test results, and the regression formula between actual volumes and abalone weights. For abalones weighing from 16.51 to 128.01 g, cross-validation of the algorithm indicates root mean square and worst-case prediction errors of 2.8 g and ±8 g, respectively.
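The half-oblate ellipsoid assumption gives a closed-form volume from the three length measurements, which a linear regression then maps to weight. A sketch follows; the slope and intercept below are invented placeholders, not the paper's fitted coefficients:

```python
import math

def half_oblate_volume(length, width, thickness):
    """Half of an ellipsoid with semi-axes length/2 and width/2 and
    height equal to the measured thickness:
    V = (1/2) * (4/3) * pi * (L/2) * (W/2) * T."""
    return 0.5 * (4.0 / 3.0) * math.pi * (length / 2.0) * (width / 2.0) * thickness

def estimate_weight(volume, slope, intercept):
    """Linear regression from estimated volume to weight; slope and
    intercept would come from calibration against abalones of known
    weight (the values used below are illustrative only)."""
    return slope * volume + intercept

vol = half_oblate_volume(length=90.0, width=60.0, thickness=25.0)   # mm -> mm^3
weight = estimate_weight(vol, slope=1.05e-3, intercept=2.0)         # grams
```

In the full pipeline, length, width, and view area come from the 2D image and thickness from a separate measurement or a second regression; grading then bins the estimated weight.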

  4. Revisiting software specification and design for large astronomy projects

    NASA Astrophysics Data System (ADS)

    Wiant, Scott; Berukoff, Steven

    2016-07-01

    The separation of science and engineering in the delivery of software systems overlooks the true nature of the problem being solved and the organization that will solve it. A systems engineering approach that manages the requirements flow between these two groups as between a customer and contractor has been used with varying degrees of success by well-known entities such as the U.S. Department of Defense. However, treating science as the customer and engineering as the contractor fosters unfavorable consequences that can be avoided and opportunities that are missed. For example, the "problem" being solved is only partially specified through the requirements generation process, since that process focuses on detailed specification guiding the parties to a technical solution. Equally important is the portion of the problem that will be solved through the definition of processes and the staff interacting through them. This interchange between people and processes is often underrepresented and underappreciated. By concentrating on the full problem and collaborating on a strategy for its solution, a science-implementing organization can realize the benefits of driving towards common goals (not just requirements) and a cohesive solution to the entire problem. The initial phase of any project, when well executed, is often the most difficult yet most critical, and thus it is essential to employ a methodology that reinforces collaboration and leverages the full suite of capabilities within the team. This paper describes an integrated approach to specifying the needs induced by a problem and the design of its solution.

  5. Measurement of Spray Drift with a Specifically Designed Lidar System.

    PubMed

    Gregorio, Eduard; Torrent, Xavier; Planas de Martí, Santiago; Solanelles, Francesc; Sanz, Ricardo; Rocadenbosch, Francesc; Masip, Joan; Ribes-Dasi, Manel; Rosell-Polo, Joan R

    2016-04-08

    Field measurements of spray drift are usually carried out by passive collectors and tracers. However, these methods are labour- and time-intensive and only provide point- and time-integrated measurements. Unlike these methods, the light detection and ranging (lidar) technique allows real-time measurements, obtaining information with temporal and spatial resolution. Recently, the authors have developed the first eye-safe lidar system specifically designed for spray drift monitoring. This prototype is based on a 1534 nm erbium-doped glass laser and an 80 mm diameter telescope, has scanning capability, and is easily transportable. This paper presents the results of the first experimental campaign carried out with this instrument. High coefficients of determination (R² > 0.85) were observed by comparing lidar measurements of the spray drift with those obtained by horizontal collectors. Furthermore, the lidar system allowed an assessment of the drift reduction potential (DRP) when comparing low-drift nozzles with standard ones, resulting in a DRP of 57% (preliminary result) for the tested nozzles. The lidar system was also used for monitoring the evolution of the spray flux over the canopy and to generate 2-D images of these plumes. The developed instrument is an advantageous alternative to passive collectors and opens the possibility of new methods for field measurement of spray drift.

  6. Measurement of Spray Drift with a Specifically Designed Lidar System

    PubMed Central

    Gregorio, Eduard; Torrent, Xavier; Planas de Martí, Santiago; Solanelles, Francesc; Sanz, Ricardo; Rocadenbosch, Francesc; Masip, Joan; Ribes-Dasi, Manel; Rosell-Polo, Joan R.

    2016-01-01

    Field measurements of spray drift are usually carried out by passive collectors and tracers. However, these methods are labour- and time-intensive and only provide point- and time-integrated measurements. Unlike these methods, the light detection and ranging (lidar) technique allows real-time measurements, obtaining information with temporal and spatial resolution. Recently, the authors have developed the first eye-safe lidar system specifically designed for spray drift monitoring. This prototype is based on a 1534 nm erbium-doped glass laser and an 80 mm diameter telescope, has scanning capability, and is easily transportable. This paper presents the results of the first experimental campaign carried out with this instrument. High coefficients of determination (R2 > 0.85) were observed by comparing lidar measurements of the spray drift with those obtained by horizontal collectors. Furthermore, the lidar system allowed an assessment of the drift reduction potential (DRP) when comparing low-drift nozzles with standard ones, resulting in a DRP of 57% (preliminary result) for the tested nozzles. The lidar system was also used for monitoring the evolution of the spray flux over the canopy and to generate 2-D images of these plumes. The developed instrument is an advantageous alternative to passive collectors and opens the possibility of new methods for field measurement of spray drift. PMID:27070613

  7. Novel meta-surface design synthesis via nature-inspired optimization algorithms

    NASA Astrophysics Data System (ADS)

    Bayraktar, Zikri

    Heuristic numerical optimization algorithms have been gaining interest over the years as the computational power of digital computers increases at an unprecedented rate. While mature techniques such as the Genetic Algorithm find ever more application areas, researchers also devise new algorithms by observing the highly tuned processes found in nature. In this dissertation, the well-known Genetic Algorithm (GA) will be utilized to tackle various novel electromagnetic optimization problems, along with a parallel implementation of the Clonal Selection Algorithm (CLONALG) and the newly introduced Wind Driven Optimization (WDO) technique. The utility of the CLONALG parallelization and the efficiency of the WDO will be illustrated by applying them to multi-dimensional and multi-modal electromagnetics problems such as antenna design and metamaterial surface synthesis. One of the metamaterial application areas is the design synthesis of 90-degree rotationally symmetric, ultra-small unit cell artificial magnetic conducting (AMC) surfaces. AMCs are composite metallo-dielectric structures designed to behave as perfect magnetic conductors (PMCs) over a certain frequency range, exhibiting a reflection coefficient magnitude of unity with a phase angle of zero degrees at the center of the band. The proposed designs consist of ultra-small frequency selective surface (FSS) unit cells that are tightly packed and highly intertwined, yet achieve remarkable AMC band performance and field of view when compared to current state-of-the-art AMCs. In addition, planar double-sided AMC (DSAMC) structures are introduced and optimized as AMC ground planes for low-profile antennas in composite platforms and as separator slabs for vertical antenna applications. The proposed designs do not possess complete metallic ground planes, which makes them ideal for composite and multi-antenna systems. The versatility of the DSAMC slabs is also illustrated

  8. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
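The ordering problem the genetic algorithm solves can be pictured as minimizing feedback couplings: an output of a later process that feeds an earlier one forces another pass through the subcycle. A toy permutation GA is sketched below (swap mutation only, invented process names; this is not DeMAID's actual operator set or cost model, which also weighs cost and time):

```python
import random

def feedback_cost(order, couplings):
    """Cost of an ordering = number of couplings that point backwards
    (a later process feeding an earlier one), each of which forces
    iteration in the design cycle."""
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for src, dst in couplings if pos[src] > pos[dst])

def ga_order(processes, couplings, pop_size=30, generations=100):
    """Toy GA over permutations: keep the better half each generation
    and refill with swap-mutated copies of the survivors."""
    pop = [random.sample(processes, len(processes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: feedback_cost(o, couplings))
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: feedback_cost(o, couplings))

# invented processes and data couplings (src feeds dst)
procs = ["aero", "structures", "controls", "weights"]
deps = [("aero", "structures"), ("structures", "weights"), ("aero", "controls")]
best = ga_order(procs, deps)
```

Because the search space of orderings grows factorially with the number of processes, this kind of population search is why a GA helps where exhaustive enumeration by a human design manager is infeasible.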

  9. A surgeon specific automatic path planning algorithm for deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Dawant, Benoit M.; Pallavaram, Srivatsan; Neimat, Joseph S.; Konrad, Peter E.; D'Haese, Pierre-Francois; Datteri, Ryan D.; Landman, Bennett A.; Noble, Jack H.

    2012-02-01

    In deep brain stimulation surgeries, stimulating electrodes are placed at specific targets in the deep brain to treat neurological disorders. Reaching these targets safely requires avoiding critical structures in the brain. Meticulous planning is required to find a safe path from the cortical surface to the intended target. Choosing a trajectory automatically is difficult because there is little consensus among neurosurgeons on what is optimal. Our goals are to design a path planning system that is able to learn the preferences of individual surgeons and, eventually, to standardize the surgical approach using this learned information. In this work, we take the first step towards these goals, which is to develop a trajectory planning approach that is able to effectively mimic individual surgeons and is designed such that parameters, which potentially can be automatically learned, are used to describe an individual surgeon's preferences. To validate the approach, two neurosurgeons were asked to choose between their manual and a computed trajectory, blinded to their identity. The results of this experiment showed that the neurosurgeons preferred the computed trajectory over their own in 10 out of 40 cases. The computed trajectory was judged to be equivalent to the manual one or otherwise acceptable in 27 of the remaining cases. These results demonstrate the potential clinical utility of computer-assisted path planning.

  10. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA), and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements, as well as the existing features of the original version of DeMAID, are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  11. Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design

    NASA Technical Reports Server (NTRS)

    Whorton, Mark; Buschek, Harald; Calise, Anthony J.

    1996-01-01

    Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.

  12. A firefly algorithm for solving competitive location-design problem: a case study

    NASA Astrophysics Data System (ADS)

    Sadjadi, Seyed Jafar; Ashtiani, Milad Gorji; Ramezanian, Reza; Makui, Ahmad

    2016-07-01

    This paper aims at determining the optimal number of new facilities, as well as the optimal location and design level of each, under a budget constraint in a competitive environment, using a novel hybrid continuous and discrete firefly algorithm. A real-world application of locating new chain stores in the city of Tehran, Iran, is used and the results are analyzed. In addition, several examples have been solved to evaluate the efficiency of the proposed model and algorithm. The results demonstrate that the proposed method provides good-quality results for the test problems.

  13. Sizing of complex structure by the integration of several different optimal design algorithms

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1974-01-01

    Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed, with a detailed description of how the total problem of structural sizing can be broken down into subproblems for the best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.

  14. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common, and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters and lack large-scale parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in half the time and with a quarter of the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
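    The sorted k-mer lists mentioned above can be sketched in a few lines; shared k-mers between two sequences then fall out of a linear merge of the two sorted lists (a simplified sketch that handles repeated k-mers only naively, not the progressiveMauve data structures):

    ```python
    def sorted_kmer_list(seq, k):
        """Sorted (k-mer, position) pairs: an index structure for anchoring alignments."""
        return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

    def shared_kmers(seq_a, seq_b, k):
        """Position pairs of k-mers common to both sequences, via a sorted-list merge."""
        la, lb = sorted_kmer_list(seq_a, k), sorted_kmer_list(seq_b, k)
        i = j = 0
        anchors = []
        while i < len(la) and j < len(lb):
            if la[i][0] == lb[j][0]:
                anchors.append((la[i][1], lb[j][1]))  # matching k-mer: record positions
                i += 1
                j += 1
            elif la[i][0] < lb[j][0]:
                i += 1
            else:
                j += 1
        return anchors
    ```

    Sorting makes the comparison a single linear pass, which is why sorted k-mer lists suit memory-constrained compute nodes.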

  15. Designing Adiabatic Radio Frequency Pulses Using the Shinnar–Le Roux Algorithm

    PubMed Central

    Balchandani, Priti; Pauly, John; Spielman, Daniel

    2010-01-01

    Adiabatic pulses are a special class of radio frequency (RF) pulses that may be used to achieve uniform flip angles in the presence of a nonuniform B1 field. In this work, we present a new, systematic method for designing high-bandwidth (BW), low-peak-amplitude adiabatic RF pulses that utilizes the Shinnar–Le Roux (SLR) algorithm for pulse design. Currently, the SLR algorithm is extensively employed to design nonadiabatic pulses for use in magnetic resonance imaging and spectroscopy. We have adapted the SLR algorithm to create RF pulses that also satisfy the adiabatic condition. By overlaying sufficient quadratic phase across the spectral profile before the inverse SLR transform, we generate RF pulses that exhibit the required spectral characteristics and adiabatic behavior. Application of quadratic phase also distributes the RF energy more uniformly, making it possible to obtain the same spectral BW with lower RF peak amplitude. The method enables the pulse designer to specify spectral profile parameters and the degree of quadratic phase before pulse generation. Simulations and phantom experiments demonstrate that RF pulses designed using this new method behave adiabatically. PMID:20806378

  16. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means of obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
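    The key property (one adjoint solve yields sensitivities for many design variables) can be sketched for a small discrete system A u = b(d) with objective J(d) = cᵀu (a minimal illustration under those assumptions, not the NASA Langley flow-solver machinery):

    ```python
    import numpy as np

    def adjoint_gradient(A, b_of_d, c, d, eps=1e-6):
        """Gradient of J(d) = c @ u, where A u = b(d), using a single adjoint solve.

        The expensive step is one extra linear solve, independent of the number
        of design variables; only the cheap right-hand side b(d) is differenced.
        """
        lam = np.linalg.solve(A.T, c)  # adjoint solve: A^T lambda = c
        grad = np.empty_like(d, dtype=float)
        for k in range(d.size):
            dp, dm = d.copy(), d.copy()
            dp[k] += eps
            dm[k] -= eps
            # dJ/dd_k = lambda . (db/dd_k), by the adjoint identity
            grad[k] = lam @ (b_of_d(dp) - b_of_d(dm)) / (2 * eps)
        return grad
    ```

    For a linear right-hand side b(d) = M d this reduces to Mᵀλ, matching the analytic sensitivity.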

  17. A general design algorithm for low optical loss adiabatic connections in waveguides.

    PubMed

    Chen, Tong; Lee, Hansuek; Li, Jiang; Vahala, Kerry J

    2012-09-24

    Single-mode waveguide designs frequently support higher-order transverse modes, usually as a consequence of process limitations such as lithography. In these systems, it is important to minimize coupling to higher-order modes so that the system nonetheless behaves as a single-mode system. We propose a variational approach to designing adiabatic waveguide connections with minimal intermodal coupling. An application of this algorithm in designing the "S-bend" of a whispering-gallery spiral waveguide is demonstrated, with approximately 0.05 dB insertion loss. Compared to other approaches, our algorithm requires less fabrication resolution and is able to minimize the transition loss over a broadband spectrum. The method can be applied to a wide range of turns and connections and has the advantage of handling connections with arbitrary boundary conditions.

  18. Genetic algorithm optimization in drug design QSAR: Bayesian-regularized genetic neural networks (BRGNN) and genetic algorithm-optimized support vectors machines (GA-SVM).

    PubMed

    Fernandez, Michael; Caballero, Julio; Fernandez, Leyden; Sarai, Akinori

    2011-02-01

    Many articles on "in silico" drug design have implemented genetic algorithms (GAs) for feature selection, model optimization, conformational search, or docking studies. Some of these articles described GA applications to quantitative structure-activity relationship (QSAR) modeling in combination with regression and/or classification techniques. We reviewed the implementation of GAs in drug design QSAR and specifically their performance in the optimization of robust mathematical models such as Bayesian-regularized artificial neural networks (BRANNs) and support vector machines (SVMs) on different drug design problems. Modeled data sets encompassed ADMET and solubility properties, cancer target inhibitors, acetylcholinesterase inhibitors, HIV-1 protease inhibitors, ion-channel and calcium entry blockers, and antiprotozoan compounds, as well as protein classes, functional, and conformational stability data. The GA-optimized predictors were often more accurate and robust than previously published models on the same data sets and explained more than 65% of data variance in validation experiments. In addition, feature selection over large pools of molecular descriptors provided insights into the structural and atomic properties ruling ligand-target interactions.
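    GA feature selection of the kind reviewed here can be sketched as a binary-mask GA wrapped around an ordinary least-squares fit (a hedged sketch: the reviewed works wrap BRANN/SVM models around molecular descriptors, and every name and parameter below is hypothetical):

    ```python
    import numpy as np

    def fitness(mask, X, y):
        """Mean-squared error of a least-squares fit on the selected columns (lower is better)."""
        if not mask.any():
            return np.inf
        Xs = X[:, mask]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        return float(np.mean((Xs @ coef - y) ** 2))

    def ga_select(X, y, pop_size=30, generations=80, seed=0):
        """Evolve boolean feature masks: keep the fitter half, breed the rest."""
        rng = np.random.default_rng(seed)
        n_feat = X.shape[1]
        pop = rng.random((pop_size, n_feat)) < 0.5
        for _ in range(generations):
            scores = np.array([fitness(m, X, y) for m in pop])
            pop = pop[np.argsort(scores)]                 # elitist ranking
            for i in range(pop_size // 2, pop_size):      # replace the worst half
                a = pop[rng.integers(pop_size // 2)]
                b = pop[rng.integers(pop_size // 2)]
                cut = rng.integers(1, n_feat)             # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.integers(n_feat)               # one-bit mutation
                child[flip] = ~child[flip]
                pop[i] = child
        scores = np.array([fitness(m, X, y) for m in pop])
        return pop[int(np.argmin(scores))]
    ```

    On synthetic data where the response depends on two of five features, the evolved mask recovers both informative columns.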

  19. Design Optimization of an Axial Fan Blade Through Multi-Objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Choi, Jae-Ho; Husain, Afzal; Kim, Kwang-Yong

    2010-06-01

    This paper presents design optimization of an axial fan blade with a hybrid multi-objective evolutionary algorithm (hybrid MOEA). Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model are discretized by finite volume approximations and solved on hexahedral grids for the flow analyses. The numerical results were validated against experimental data for the axial and tangential velocities. Six design variables related to the blade lean angle and blade profile are selected, and Latin hypercube sampling of design of experiments is used to generate design points within the selected design space. Two objective functions, namely total efficiency and torque, are employed, and the multi-objective optimization is carried out to enhance total efficiency and to reduce torque. The flow analyses are performed numerically at the design points to obtain values of the objective functions. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) with an ε-constraint strategy for local search, coupled with a surrogate model, is used for multi-objective optimization. The Pareto-optimal solutions are presented and trade-off analysis is performed between the two competing objectives in view of the design and flow constraints. It is observed that total efficiency is enhanced and torque is decreased relative to the reference design by the process of multi-objective optimization. The Pareto-optimal solutions are analyzed to understand the mechanism of the improvement in total efficiency and the reduction in torque.
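    The trade-off analysis rests on Pareto dominance; identifying the non-dominated set can be sketched as follows (an illustrative sketch of the concept only, not the NSGA-II implementation used in the paper):

    ```python
    def dominates(q, p):
        """q dominates p: no worse in every objective, strictly better in at least one (minimization)."""
        return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

    def pareto_front(points):
        """Indices of non-dominated points, assuming every objective is minimized."""
        return [i for i, p in enumerate(points)
                if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
    ```

    For the fan-blade case the two objectives would be (negated) total efficiency and torque; any point off the front is strictly improvable in one objective at no cost in the other.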

  20. A new design approach based on differential evolution algorithm for geometric optimization of magnetorheological brakes

    NASA Astrophysics Data System (ADS)

    Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung

    2016-12-01

    In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms possess noteworthy shortcomings, such as trapping of solutions at local extrema, limits on the number of design variables, or difficulty in dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computational tool for optimal design of MRBs, this paper proposes an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA). The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of the MRBs are treated as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
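    The DE-plus-FEA coupling can be illustrated by a minimal DE/rand/1/bin loop on a cheap analytic objective standing in for the FEA model (a sketch under that substitution; the control parameters F and CR are illustrative, not the paper's settings):

    ```python
    import random

    def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, generations=150, seed=0):
        """Minimize f over box bounds with the classic DE/rand/1/bin scheme."""
        rng = random.Random(seed)
        dim = len(bounds)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        cost = [f(x) for x in pop]
        for _ in range(generations):
            for i in range(pop_size):
                # Mutation: donor from three distinct other members.
                a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
                j_rand = rng.randrange(dim)  # guarantees at least one donor component
                trial = [
                    min(max(a[k] + F * (b[k] - c[k]), bounds[k][0]), bounds[k][1])
                    if (rng.random() < CR or k == j_rand) else pop[i][k]
                    for k in range(dim)
                ]
                if (fc := f(trial)) < cost[i]:   # greedy selection
                    pop[i], cost[i] = trial, fc
        best = min(range(pop_size), key=cost.__getitem__)
        return pop[best], cost[best]
    ```

    In the paper's setting, f would wrap an FEA run, and discrete design variables would be rounded inside the objective; here a smooth test function keeps the sketch self-contained.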

  1. The multi-disciplinary design study: A life cycle cost algorithm

    NASA Technical Reports Server (NTRS)

    Harding, R. R.; Pichi, F. J.

    1988-01-01

    The approach and results of a Life Cycle Cost (LCC) analysis of the Space Station Solar Dynamic Power Subsystem (SDPS) including gimbal pointing and power output performance are documented. The Multi-Discipline Design Tool (MDDT) computer program developed during the 1986 study has been modified to include the design, performance, and cost algorithms for the SDPS as described. As with the Space Station structural and control subsystems, the LCC of the SDPS can be computed within the MDDT program as a function of the engineering design variables. Two simple examples of MDDT's capability to evaluate cost sensitivity and design based on LCC are included. MDDT was designed to accept NASA's IMAT computer program data as input so that IMAT's detailed structural and controls design capability can be assessed with expected system LCC as computed by MDDT. No changes to IMAT were required. Detailed knowledge of IMAT is not required to perform the LCC analyses as the interface with IMAT is noninteractive.

  2. A new iterative Fourier transform algorithm for optimal design in holographic optical tweezers

    NASA Astrophysics Data System (ADS)

    Memmolo, P.; Miccio, L.; Merola, F.; Ferraro, P.; Netti, P. A.

    2012-06-01

    We propose a new Iterative Fourier Transform Algorithm (IFTA) capable of suppressing ghost traps and noise in Holographic Optical Tweezers (HOT) while maintaining a high diffraction efficiency, in a computational time comparable with other iterative algorithms. The process consists in planning a suitable ideal target of optical tweezers as input to the classical IFTA, and we show that we are able to design up to 4 real traps, in the field of view imaged by the microscope objective, using an IFTA built on fictitious phasors located in strategic positions in the Fourier plane. The effectiveness of the proposed algorithm is evaluated for both numerical and optical reconstructions and compared with the other techniques known in the literature.
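    For reference, a classical IFTA alternates between enforcing the target far-field amplitude and the phase-only pupil constraint (a minimal Gerchberg-Saxton-style sketch of that baseline, not the fictitious-phasor variant proposed in the paper; grid size and spot positions below are arbitrary):

    ```python
    import numpy as np

    def ifta_phase(target_intensity, iterations=100, seed=0):
        """Find a phase-only pupil whose far field approximates a target intensity."""
        rng = np.random.default_rng(seed)
        target_amp = np.sqrt(target_intensity)
        field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
        for _ in range(iterations):
            far = np.fft.fft2(field)
            far = target_amp * np.exp(1j * np.angle(far))  # enforce target amplitude
            field = np.fft.ifft2(far)
            field = np.exp(1j * np.angle(field))           # enforce unit-amplitude (phase-only) pupil
        return np.angle(field)
    ```

    For a target of a few bright spots (mimicking traps), most of the diffracted energy ends up in the target pixels after a few dozen iterations.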

  3. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  4. Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms.

    PubMed

    Huang, Fang; Hartwich, Tobias M P; Rivera-Molina, Felix E; Lin, Yu; Duim, Whitney C; Long, Jane J; Uchil, Pradeep D; Myers, Jordan R; Baird, Michelle A; Mothes, Walther; Davidson, Michael W; Toomre, Derek; Bewersdorf, Joerg

    2013-07-01

    Newly developed scientific complementary metal-oxide semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition, enlarge the field of view and increase the effective quantum efficiency in single-molecule switching nanoscopy. However, sCMOS-intrinsic pixel-dependent readout noise substantially lowers the localization precision and introduces localization artifacts. We present algorithms that overcome these limitations and that provide unbiased, precise localization of single molecules at the theoretical limit. Using these in combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at rates of up to 32 reconstructed images per second in fixed and living cells.

  5. Aspherical lens design using hybrid neural-genetic algorithm of contact lenses.

    PubMed

    Yen, Chih-Ta; Ye, Jhe-Wen

    2015-10-01

    The design of complex contact lenses involves numerous uncertain variables. Helping an optical designer produce an optimal contact lens that reduces wearer discomfort is an essential design concern. This study examined the impact of aberrations on contact lenses in order to optimize a contact lens design for myopic and astigmatic eyes. In general, two aspherical surfaces can be assembled in an optical system to reduce the overall volume. However, this design degrades the spherical aberration (SA) values at wide contact radii. The proposed optimization algorithm combined with optical design can improve the SA value and thus reduce coma aberration (TCO) values and enhance the modulation transfer function (MTF). The approach integrates a modified genetic algorithm (GA) with a neural network (NN) to optimize multiple quality characteristics, namely the SA, TCO, and MTF, of contact lenses. When the proposed optional-weight NN-GA is implemented, the weight values of the fitness function can be varied to adjust system performance. The method simplifies the selection of parameters in the optimization of optical systems. Compared with the traditional CODE V built-in optimal scheme, the proposed scheme is more flexible and intuitive, improving SA, TCO, and MTF values by 50.03%, 45.78%, and 24.7%, respectively.

  6. SU-E-T-305: Study of the Eclipse Electron Monte Carlo Algorithm for Patient Specific MU Calculations

    SciTech Connect

    Wang, X; Qi, S; Agazaryan, N; DeMarco, J

    2014-06-01

    Purpose: To evaluate the Eclipse electron Monte Carlo (eMC) algorithm based on patient-specific monitor unit (MU) calculations, and to propose a new factor which quantitatively predicts the discrepancy in MUs between the eMC algorithm and hand calculations. Methods: Electron treatments were planned for 61 patients on Eclipse (Version 10.0) using the eMC algorithm for Varian TrueBeam linear accelerators. For each patient, the same treatment beam angle was kept for a point dose calculation at dmax performed under the reference condition, which used an open beam with a 15×15 cm2 cone at 100 cm SSD. A patient-specific correction factor (PCF) was obtained as the ratio of this point dose to the calibration dose, which is 1 cGy per MU delivered at dmax. The hand calculation results were corrected by the PCFs and compared with MUs from the treatment plans. Results: The MUs from the treatment plans were on average (7.1±6.1)% higher than the hand calculations. The average MU difference between the corrected hand calculations and the eMC treatment plans was (0.07±3.48)%. A correlation coefficient of 0.8 was found between (1-PCF) and the percentage difference between the treatment plans and hand calculations. Most outliers were treatment plans with small beam openings (< 4 cm) and low-energy beams (6 and 9 MeV). Conclusion: For CT-based patient treatment plans, the eMC algorithm tends to generate larger MUs than hand calculations. Caution should be taken for eMC patient plans with small field sizes and low-energy beams. We hypothesize that the PCF reflects the influence of patient surface curvature and tissue inhomogeneity on the patient-specific percent depth dose (PDD) curve and MU calculations in the eMC algorithm.

  7. Digital IIR Filters Design Using Differential Evolution Algorithm with a Controllable Probabilistic Population Size

    PubMed Central

    Zhu, Wu; Fang, Jian-an; Tang, Yang; Zhang, Wenbing; Du, Wei

    2012-01-01

    Design of a digital infinite-impulse-response (IIR) filter is the process of synthesizing and implementing a recursive filter network so that a set of prescribed excitations results in a set of desired responses. However, the error surface of IIR filters is usually nonlinear and multimodal. To reliably find the global minimum, an improved differential evolution (DE) is proposed for digital IIR filter design in this paper. The suggested algorithm is a DE variant with a controllable probabilistic population size (CPDE). It considers convergence speed and computational cost simultaneously by nonperiodically increasing or reducing the number of individuals according to fitness diversity. In addition, we discuss some important aspects of IIR filter design, such as the cost function value, the influence of (noise) perturbations, the convergence rate and success percentage, and parameter measurement. Simulation results show that the presented algorithm is viable and competitive: compared with six existing state-of-the-art algorithm-based digital IIR filter design methods in numerical experiments, CPDE is relatively more promising and competitive. PMID:22808191

  8. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search (OHS) has been applied to the optimal design of linear-phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and the opposition-based approach is applied: during initialization, a randomly generated population of solutions is chosen, the opposite solutions are also considered, and the fitter of each pair is selected as an a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which yields the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporating different control parameters in the basic HS algorithm balances exploration and exploitation of the search space. Low-pass, high-pass, band-pass, and band-stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problem.
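    The opposition-based initialization step described above can be sketched directly: each random candidate is paired with its "opposite" across the bounds, and the fitter of the two is kept (a sketch of the initialization only, not the full OHS algorithm; the objective and bounds are placeholders):

    ```python
    import random

    def opposition_init(f, bounds, pop_size, seed=0):
        """Initialize a population; for each draw, also evaluate its opposite and keep the fitter."""
        rng = random.Random(seed)
        pop = []
        for _ in range(pop_size):
            x = [rng.uniform(lo, hi) for lo, hi in bounds]
            # Opposite point: reflect each coordinate across the center of its interval.
            x_opp = [lo + hi - xi for (lo, hi), xi in zip(bounds, x)]
            pop.append(min(x, x_opp, key=f))   # keep the fitter of the pair (minimization)
        return pop
    ```

    By construction every kept member is at least as fit as its opposite, which is what gives the opposition-based start its head start over plain random initialization.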

  9. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point during ascent, a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used: either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight; the SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics, and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  10. Iterative Fourier transform algorithm: different approaches to diffractive optical element design

    NASA Astrophysics Data System (ADS)

    Skeren, Marek; Richter, Ivan; Fiala, Pavel

    2002-10-01

    This contribution focuses on the study and comparison of different approaches to designing phase-only diffractive optical elements (PDOEs) for various possible applications in laser beam shaping. In particular, new results and approaches concerning the iterative Fourier transform algorithm (IFTA) are analyzed, implemented, and compared for the case of phase-only diffractive optical elements with quantized phase levels (either binary or multilevel structures). First, the general scheme of the IFTA iterative approach with partial quantization is briefly presented and discussed. Then, a classification of the general IFTA scheme is given with respect to quantization constraint strategies. Based on this classification, three practically interesting approaches are chosen, further analyzed, and compared to each other. The performance of these algorithms is compared in detail in terms of the development of the signal-to-noise ratio characteristic with respect to the number of iterations, for various input diffusive-type objects chosen. The performance is also documented on the development of the complex spectra for typical computer reconstruction results. The advantages and drawbacks of all approaches are discussed, and a brief guide on the choice of a particular approach for typical design tasks is given. Finally, two ways of eliminating the amplitude within the design procedure are considered, namely direct elimination and partial elimination of the amplitude of the complex hologram function.

  11. Validation and application of modeling algorithms for the design of molecularly imprinted polymers.

    PubMed

    Liu, Bing; Ou, Lulu; Zhang, Fuyuan; Zhang, Zhijun; Li, Hongying; Zhu, Mengyu; Wang, Shuo

    2014-12-01

    In this study, four different semiempirical algorithms (modified neglect of diatomic overlap, a reparameterization of Austin Model 1, complete neglect of differential overlap, and typed neglect of differential overlap) were applied to the energy optimization of template, monomer, and template-monomer complexes of imprinted polymers. For phosmet-, estrone-, and metolcarb-imprinted polymers, the binding energies of the template-monomer complexes were calculated and the docking configurations were assessed at different molar ratios of template to monomer. It was found that two of the algorithms were not suitable for calculating the binding energy in the template-monomer complex system. For the other algorithms, the obtained optimum molar ratios of template and monomers were consistent with the experimental results. Therefore, the latter two algorithms were selected and applied to the preparation of enrofloxacin-imprinted polymers. Meanwhile, using different molar ratios of template and monomer, we prepared imprinted and nonimprinted polymers and evaluated their adsorption of the template. It was verified that the experimental results were in good agreement with the modeling results. Consequently, semiempirical algorithms show clear feasibility for guiding the preparation of imprinted polymers.

  12. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine optimal coefficients of a fractional-delay infinite impulse response (FD-IIR) filter that approach the ideal frequency response characteristics. Since fractional-delay IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared with those of well-accepted evolutionary algorithms, namely the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed CSA approach outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively.
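    A compact cuckoo-search loop with Mantegna-style Levy steps conveys the idea on a toy objective (a simplified sketch, not the authors' WLS-based FD-IIR design code; the greedy abandonment step and all parameter values are simplifications of standard CS):

    ```python
    import math
    import random

    def levy_step(rng, beta=1.5):
        """Mantegna's algorithm for a heavy-tailed Levy-distributed step length."""
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                 (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

    def cuckoo_search(f, bounds, n_nests=15, pa=0.25, generations=200, seed=0):
        """Minimize f: Levy-flight moves around the best nest, plus nest abandonment."""
        rng = random.Random(seed)
        dim = len(bounds)
        clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
        nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
        cost = [f(x) for x in nests]
        for _ in range(generations):
            best = nests[min(range(n_nests), key=cost.__getitem__)]
            for i in range(n_nests):
                # Levy flight scaled by the distance to the current best nest.
                trial = clip([nests[i][k] + 0.01 * levy_step(rng) * (nests[i][k] - best[k])
                              for k in range(dim)])
                if (fc := f(trial)) < cost[i]:
                    nests[i], cost[i] = trial, fc
                if rng.random() < pa:  # abandon: try a fresh nest against the worst one
                    fresh = [rng.uniform(lo, hi) for lo, hi in bounds]
                    fresh_cost = f(fresh)
                    j = max(range(n_nests), key=cost.__getitem__)
                    if fresh_cost < cost[j]:
                        nests[j], cost[j] = fresh, fresh_cost
        i = min(range(n_nests), key=cost.__getitem__)
        return nests[i], cost[i]
    ```

    In the paper's setting, f would be the WLS error between the designed and ideal FD-IIR frequency responses; a smooth test function keeps the sketch self-contained.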

  13. Thermal design of spiral heat exchangers and heat pipes through global best algorithm

    NASA Astrophysics Data System (ADS)

    Turgut, Oğuz Emrah; Çoban, Mustafa Turhan

    2017-03-01

    This study deals with global best algorithm (GBA) based thermal design of spiral heat exchangers and heat pipes. Spiral heat exchangers are devices which are highly efficient in extremely dirty and fouling process duties. The spiral geometry inherent in the design maintains high heat transfer coefficients while avoiding the hazardous effects of fouling and uneven fluid distribution in the channels. Heat pipes have wide usage in industry: thanks to the two-phase cycle on which they operate, they can transfer large amounts of heat with a negligible temperature gradient. In this work, a new stochastic optimization method, the global best algorithm, is applied to multi-objective optimization of spiral heat exchangers as well as single-objective optimization of heat pipes. The global best algorithm is easy to implement, derivative-free, and can be reliably applied to any optimization problem. Case studies taken from the literature are solved by the proposed algorithm, and results obtained by the literature approaches are compared with those acquired by GBA. Comparisons reveal that GBA attains better results than the literature studies in terms of solution accuracy and efficiency.

  14. A mission-oriented orbit design method of remote sensing satellite for region monitoring mission based on evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Zhang, Jing; Yao, Huang

    2015-12-01

    Remote sensing satellites play an increasingly prominent role in environmental monitoring and disaster rescue. Taking advantage of nearly identical sunshine conditions over a given place and of global coverage, most of these satellites are operated in sun-synchronous orbits. However, this inevitably brings some problems, the most significant being that the temporal resolution of a sun-synchronous satellite cannot satisfy the demands of a specific region-monitoring mission. To overcome these disadvantages, two methods are exploited: the first is to build a satellite constellation containing multiple sun-synchronous satellites, as the CHARTER mechanism has done; the second is to design a non-predetermined orbit based on the concrete mission demand. An effective method for remote sensing satellite orbit design based on a multi-objective evolutionary algorithm is presented in this paper. Orbit design is converted into a multi-objective optimization problem, and a fast and elitist multi-objective genetic algorithm is utilized to solve it. First, the demands of the mission are transformed into multiple objective functions, and the six orbital elements of the satellite are taken as genes in the design space; then a simulated evolution process is performed. An optimal solution can be obtained after a specified number of generations via the evolutionary operations (selection, crossover, and mutation). To examine the validity of the proposed method, a case study is introduced: orbit design of an optical satellite for regional disaster monitoring, whose mission demands include two objectives, among them minimizing the average revisit time interval. The simulation results show that the solution obtained by our method for this mission meets the users' demand. We conclude that the method presented in this paper is efficient for remote sensing orbit design.

  15. Design of a System That Understands Informal Specifications.

    DTIC Science & Technology

    1983-04-01

Keywords: formal specifications, English specifications, modules, natural language processing, abstract data types, logic programming, Horn clauses. This paper describes a system for understanding English definitions of software modules. (University of Delaware, Newark, DE 19711; 28 April 1983.)

  16. A requirements specification for a software design support system

    NASA Technical Reports Server (NTRS)

    Noonan, Robert E.

    1988-01-01

Most existing software design systems (SDSS) support the use of only a single design methodology. A good SDSS should support a wide variety of design methods and languages, including structured design, object-oriented design, and finite state machines. It might seem that a multiparadigm SDSS would be expensive in both time and money to construct. However, it is proposed that instead an extensible SDSS be constructed that directly implements only minimal database and graphical facilities. In particular, it should not directly implement tools to facilitate language definition and analysis. It is believed that such a system could be rapidly developed and put into limited production use, with the experience gained used to refine and evolve the system over time.

  17. Development and benefit analysis of a sector design algorithm for terminal dynamic airspace configuration

    NASA Astrophysics Data System (ADS)

    Sciandra, Vincent

performance of the algorithm-generated sectors against the current sectors for a variety of configurations and scenarios. The effect of dynamic airspace configuration will then be tested by observing the effect of update rate on the algorithm-generated sector results. Finally, the algorithm will be used with simulated data, whose evaluation would show the ability of the sector design algorithm to meet the objectives of the NextGen system. Upon validation, the algorithm may be incorporated into a larger Terminal Flow Algorithm, developed by our partners at Mosaic ATM, as the final step in the TDAC process.

  18. Attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm.

    PubMed

    Zhang, Jie; Wang, Yuping; Feng, Junhong

    2013-01-01

In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All metric values used to evaluate an association rule can then be acquired from the attribute indices alone, with no further database scans. The paper casts association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly along the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumed.
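
The attribute-index idea can be sketched as an inverted index: one scan of the database maps each attribute value to the rows containing it, after which support and confidence come from set intersections alone. This is an illustration of the concept, not the paper's implementation:

```python
def build_attribute_index(transactions):
    """One pass over the database: map each item to the set of row ids."""
    index = {}
    for row_id, items in enumerate(transactions):
        for item in items:
            index.setdefault(item, set()).add(row_id)
    return index

def rule_metrics(index, antecedent, consequent, n_rows):
    """Support and confidence via index intersections; no database rescan."""
    rows_a = set.intersection(*(index.get(i, set()) for i in antecedent))
    rows_ac = rows_a & set.intersection(*(index.get(i, set()) for i in consequent))
    support = len(rows_ac) / n_rows
    confidence = len(rows_ac) / len(rows_a) if rows_a else 0.0
    return support, confidence

transactions = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}]
idx = build_attribute_index(transactions)
# rule a -> b: "a" occurs in rows 0,1,2; "a" and "b" together in rows 0,1
sup, conf = rule_metrics(idx, {"a"}, {"b"}, len(transactions))
```

Each rule evaluation touches only the index entries for the attributes it mentions, which is where the reduction in comparisons comes from.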

  19. Smart energy management and low-power design of sensor and actuator nodes on algorithmic level for self-powered sensorial materials and robotics

    NASA Astrophysics Data System (ADS)

    Bosse, Stefan; Behrmann, Thomas

    2011-06-01

We propose and demonstrate a design methodology for embedded systems satisfying low-power requirements suitable for self-powered sensor and actuator nodes. This design methodology focuses on (1) smart energy management at runtime and (2) application-specific System-on-Chip (SoC) design at design time, contributing to low-power systems on both the algorithmic and the technology level. Smart energy management is performed spatially at runtime by a behaviour-based or state-action-driven selection from a set of implemented algorithms classified by their demand for computation power, and temporally by varying data processing rates. It can be shown that the power/energy consumption of an application-specific SoC design depends strongly on computational complexity. Signal and control processing is modelled on an abstract level using signal flow diagrams. These signal flow graphs are mapped to Petri nets to enable direct high-level synthesis of digital SoC circuits using a multi-process architecture with the Communicating Sequential Processes model on the execution level. Power analysis using gate-level simulation provides input for the algorithmic selection during runtime of the system, leading to a closed-loop design flow. Additionally, the signal-flow approach enables power management by varying the signal flow and data processing rates depending on actual energy consumption, estimated energy deposit, and required Quality of Service.

  20. Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini

    2015-03-01

Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using a hybrid multi-atlas registration, active contours, and knowledge-based region separation algorithm. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta, and aortic root are located by multi-atlas registration followed by active contour refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery, and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications in these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001; volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30; volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate, and automated quantification of CAC scores from non-contrast CT is feasible.
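
For reference, standard per-slice Agatston scoring (threshold at 130 HU, label connected lesions, weight each lesion's area by its peak density) can be sketched as below. This is the textbook definition of the score, not the paper's multi-atlas pipeline, and the tiny hand-made slice is illustrative:

```python
from collections import deque

def density_weight(peak_hu):
    """Standard Agatston density factor from a lesion's peak HU."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    return 1  # 130-199 HU

def agatston_slice(hu, pixel_area_mm2=1.0, threshold=130):
    """Score one slice: flood-fill each lesion, add area * density weight."""
    rows, cols = len(hu), len(hu[0])
    seen = [[False] * cols for _ in range(rows)]
    score = 0.0
    for r in range(rows):
        for c in range(cols):
            if hu[r][c] >= threshold and not seen[r][c]:
                q, pixels, peak = deque([(r, c)]), 0, 0
                seen[r][c] = True
                while q:  # 4-connected flood fill of one lesion
                    y, x = q.popleft()
                    pixels += 1
                    peak = max(peak, hu[y][x])
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx] and hu[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                score += pixels * pixel_area_mm2 * density_weight(peak)
    return score

slice_hu = [
    [0,   0,   0,   0],
    [0, 150, 160,   0],
    [0,   0,   0, 450],
]
# lesion 1: 2 px, peak 160 (weight 1); lesion 2: 1 px, peak 450 (weight 4)
total = agatston_slice(slice_hu)
```

A full scan sums slice scores; clinical implementations also apply a minimum lesion area and a slice-spacing factor, omitted here for brevity.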

  1. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.
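
One of the performance bounds such graph analysis yields is the critical-path length of the dataflow graph, a lower bound on any schedule regardless of processor count. An illustrative sketch (not the tool itself; task names and durations are made up):

```python
def critical_path(tasks, deps):
    """tasks: name -> duration; deps: name -> list of predecessor names.
    Returns the longest dependency-chain duration in the DAG."""
    memo = {}
    def finish(t):
        # earliest finish time of task t given all its predecessors
        if t not in memo:
            memo[t] = tasks[t] + max((finish(p) for p in deps.get(t, [])),
                                     default=0)
        return memo[t]
    return max(finish(t) for t in tasks)

# a small signal-processing-style dataflow graph
tasks = {"read": 2, "fft": 5, "ctrl": 3, "out": 1}
deps = {"fft": ["read"], "ctrl": ["read"], "out": ["fft", "ctrl"]}
bound = critical_path(tasks, deps)  # longest chain: read -> fft -> out
```

Together with total work divided by processor count, this gives the classic pair of lower bounds used to judge a candidate multiprocessor schedule.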

  2. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... manually mixing a known quantity of oil and a known quantity of water to form a mixture and manually... cause formation of static electricity. (e) A monitor must be designed to operate in each plane...

  3. 46 CFR 162.050-25 - Cargo monitor: Design specification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... manually mixing a known quantity of oil and a known quantity of water to form a mixture and manually... cause formation of static electricity. (e) A monitor must be designed to operate in each plane...

  4. NASA software specification and evaluation system design, part 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

The research to develop methods for reducing the effort expended in software development and verification is reported. The development of a formal software requirements methodology, a formal specifications language, a programming language, a language preprocessor, and code analysis tools is discussed.

  5. Optimization design of satellite separation systems based on Multi-Island Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Xingzhi; Chen, Xiaoqian; Zhao, Yong; Yao, Wen

    2014-03-01

Separation systems are crucial for the launch of satellites. To address the existing design issues of satellite separation systems, an optimization design approach based on the Multi-Island Genetic Algorithm is proposed, and a hierarchical optimization of system mass and separation angular velocity is designed. The Multi-Island Genetic Algorithm is studied for the problem and the optimization parameters are discussed. Dynamic analysis in ADAMS, used to validate the designs, is integrated with iSIGHT. The optimization method is then employed for a typical problem using the helical compression spring mechanism, and the corresponding objective functions are derived. It turns out that the mass of the compression spring catapult is decreased by 30.7% after optimization and the angular velocity can be minimized considering spring stiffness errors. Moreover, ground tests and on-orbit flight indicate that the error of separation speed is controlled within 1% and the angular velocity is reduced by nearly 90%, which validates the design result and the optimization approach.

  6. Hydraulic design of a low-specific speed Francis runner for a hydraulic cooling tower

    NASA Astrophysics Data System (ADS)

    Ruan, H.; Luo, X. Q.; Liao, W. L.; Zhao, Y. P.

    2012-11-01

The air blower in a cooling tower is normally driven by an electric motor, which consumes a tremendous amount of electric energy. The energy remaining at the outlet of the cooling cycle is considerable; it can be utilized to drive a hydraulic turbine and consequently rotate the air blower. The purpose of this project is to recycle energy, lower energy consumption, and reduce pollutant discharge. Firstly, a second-order polynomial is proposed to describe the blade setting angle distribution law along the meridional streamline in the streamline equation. The runner is designed by the point-to-point integration method with a specific blade setting angle distribution. Three different ultra-low-specific-speed Francis runners with different wrap angles are obtained with this method. Secondly, based on CFD numerical simulations, the effects of the blade setting angle distribution on the pressure coefficient distribution and relative efficiency have been analyzed. Finally, taking the blade inlet and outlet angles and the control coefficients of the blade setting angle distribution law as optimization variables, and efficiency and minimum pressure as objective functions, a multi-objective optimization of the ultra-low-specific-speed Francis runner is carried out using the NSGA-II algorithm. The obtained results show that the optimal runner has higher efficiency and better cavitation performance.

  7. Designing an Algorithm to Preserve Privacy for Medical Record Linkage With Error-Prone Data

    PubMed Central

    Pal, Doyel; Chen, Tingting; Khethavath, Praveen

    2014-01-01

Background Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In record linkage, protecting the patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are restricted to textual data. Objective To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. Methods To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices of cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, which allows error-prone data to be correctly matched if the distance between the two records is below a threshold. Results We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers and meets the regulations of the Health Insurance Portability and Accountability Act. It does not require a third party, and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software works well with error-prone numerical data. We theoretically proved the correctness and security of our Error
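
The matching criterion itself, stripped of the cryptographic layer that makes the paper's Error-Tolerant Linking Algorithm privacy-preserving, reduces to a distance threshold on numerical attributes. A plain-text sketch of that criterion (field values and threshold are illustrative):

```python
import math

def error_tolerant_match(record_a, record_b, threshold):
    """Declare a link when the Euclidean distance between the numerical
    attribute vectors is below the threshold, so small entry errors
    (typos, rounding) still produce a match."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(record_a, record_b)))
    return dist < threshold

# same patient, one field off by a small entry error -> still links
same = error_tolerant_match((120.0, 80.0, 37.2), (120.0, 80.5, 37.2), 1.0)
# clearly different measurements -> no link
diff = error_tolerant_match((120.0, 80.0, 37.2), (135.0, 90.0, 36.8), 1.0)
```

In the actual system, the two providers evaluate this comparison under encryption so that neither side learns the other's raw records.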

  8. Transportation Network with Fluctuating Input/Output Designed by the Bio-Inspired Physarum Algorithm

    PubMed Central

    Watanabe, Shin; Takamatsu, Atsuko

    2014-01-01

In this paper, we propose designing transportation network topology and traffic distribution under fluctuating conditions using a bio-inspired algorithm. The algorithm is inspired by the adaptive behavior observed in an amoeba-like organism, plasmodial slime mold, more formally known as plasmodium of Physarum polycephalum. This organism forms a transportation network to distribute its protoplasm, the fluidic contents of its cell, throughout its large cell body. In this process, the diameter of the transportation tubes adapts to the flux of the protoplasm. The Physarum algorithm, which mimics this adaptive behavior, has been widely applied to complex problems, such as maze solving and designing the topology of railroad grids, under static conditions. However, in most situations, environmental conditions fluctuate; for example, in power grids, the consumption of electric power shows daily, weekly, and annual periodicity depending on the lifestyles or the business needs of the individual consumers. This paper studies the design of network topology and traffic distribution with oscillatory input and output traffic flows. The network topology proposed by the Physarum algorithm is controlled by a parameter of the adaptation process of the tubes. We observe various rich topologies such as complete mesh, partial mesh, Y-shaped, and V-shaped networks depending on this adaptation parameter and evaluate them on the basis of three performance functions: loss, cost, and vulnerability. Our results indicate that consideration of the oscillatory conditions and the phase-lags in the multiple outputs of the network is important: The building and/or maintenance cost of the network can be reduced by introducing the oscillating condition, and when the phase-lag among the outputs is large, the transportation loss can also be reduced. We use stability analysis to reveal how the system exhibits various topologies depending on the parameter. PMID:24586616

  9. Transportation network with fluctuating input/output designed by the bio-inspired Physarum algorithm.

    PubMed

    Watanabe, Shin; Takamatsu, Atsuko

    2014-01-01

In this paper, we propose designing transportation network topology and traffic distribution under fluctuating conditions using a bio-inspired algorithm. The algorithm is inspired by the adaptive behavior observed in an amoeba-like organism, plasmodial slime mold, more formally known as plasmodium of Physarum polycephalum. This organism forms a transportation network to distribute its protoplasm, the fluidic contents of its cell, throughout its large cell body. In this process, the diameter of the transportation tubes adapts to the flux of the protoplasm. The Physarum algorithm, which mimics this adaptive behavior, has been widely applied to complex problems, such as maze solving and designing the topology of railroad grids, under static conditions. However, in most situations, environmental conditions fluctuate; for example, in power grids, the consumption of electric power shows daily, weekly, and annual periodicity depending on the lifestyles or the business needs of the individual consumers. This paper studies the design of network topology and traffic distribution with oscillatory input and output traffic flows. The network topology proposed by the Physarum algorithm is controlled by a parameter of the adaptation process of the tubes. We observe various rich topologies such as complete mesh, partial mesh, Y-shaped, and V-shaped networks depending on this adaptation parameter and evaluate them on the basis of three performance functions: loss, cost, and vulnerability. Our results indicate that consideration of the oscillatory conditions and the phase-lags in the multiple outputs of the network is important: The building and/or maintenance cost of the network can be reduced by introducing the oscillating condition, and when the phase-lag among the outputs is large, the transportation loss can also be reduced. We use stability analysis to reveal how the system exhibits various topologies depending on the parameter.
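
The tube-adaptation rule at the heart of the Physarum algorithm can be sketched on a tiny static graph (Tero-style dynamics dD/dt = |Q| - rD with unit tube lengths; this is the commonly used baseline model, not the paper's oscillatory-input extension, and all parameter values are illustrative):

```python
def physarum_step(D01, D02, D21, inflow=1.0, r=1.0, dt=0.1):
    """One adaptation step on a 3-node graph: source 0, sink 1 (pressure 0),
    intermediate node 2; edges 0-1 (direct) and 0-2, 2-1 (two-hop path)."""
    # Kirchhoff flow conservation for the node pressures p0, p2:
    #   (D01 + D02) p0 - D02 p2 = inflow
    #   -D02 p0 + (D02 + D21) p2 = 0
    a11, a12 = D01 + D02, -D02
    a21, a22 = -D02, D02 + D21
    det = a11 * a22 - a12 * a21
    p0 = inflow * a22 / det           # Cramer's rule, 2x2 system
    p2 = -a21 * inflow / det
    # fluxes through each tube (unit lengths assumed)
    q01, q02, q21 = D01 * p0, D02 * (p0 - p2), D21 * p2
    # adaptation: conductivity grows with |Q|, decays at rate r
    return (D01 + dt * (abs(q01) - r * D01),
            D02 + dt * (abs(q02) - r * D02),
            D21 + dt * (abs(q21) - r * D21))

D = (1.0, 1.0, 1.0)  # all tubes start equal
for _ in range(300):
    D = physarum_step(*D)
# the direct (shorter) tube persists while the longer two-hop path decays
```

This shortest-path selection under static flow is exactly the behavior the paper perturbs with oscillatory inputs and outputs to obtain richer topologies.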

  10. Embedded EMD algorithm within an FPGA-based design to classify nonlinear SDOF systems

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan D.; Pei, Jin-Song; Wright, Joseph P.; Tull, Monte P.

    2010-04-01

    Compared with traditional microprocessor-based systems, rapidly advancing field-programmable gate array (FPGA) technology offers a more powerful, efficient and flexible hardware platform. An FPGA and microprocessor (i.e., hardware and software) co-design is developed to classify three types of nonlinearities (including linear, hardening and softening) of a single-degree-of-freedom (SDOF) system subjected to free vibration. This significantly advances the team's previous work on using FPGAs for wireless structural health monitoring. The classification is achieved by embedding two important algorithms - empirical mode decomposition (EMD) and backbone curve analysis. Design considerations to embed EMD in FPGA and microprocessor are discussed. In particular, the implementation of cubic spline fitting and the challenges encountered using both hardware and software environments are discussed. The backbone curve technique is fully implemented within the FPGA hardware and used to extract instantaneous characteristics from the uniformly distributed data sets produced by the EMD algorithm as presented in a previous SPIE conference by the team. An off-the-shelf high-level abstraction tool along with the MATLAB/Simulink environment is utilized to manage the overall FPGA and microprocessor co-design. Given the limited computational resources of an embedded system, we strive for a balance between the maximization of computational efficiency and minimization of resource utilization. The value of this study lies well beyond merely programming existing algorithms in hardware and software. Among others, extensive and intensive judgment is exercised involving experiences and insights with these algorithms, which renders processed instantaneous characteristics of the signals that are well-suited for wireless transmission.

  11. An algorithm to design finite field multipliers using a self-dual normal basis

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1987-01-01

Finite field multiplication is central in the implementation of some error-correcting coders. Massey and Omura have presented a revolutionary design for multiplication in a finite field. In their design, a normal basis is utilized to represent the elements of the field. The concept of using a self-dual normal basis to design the Massey-Omura finite field multiplier is presented. Presented first is an algorithm to locate a self-dual normal basis for GF(2 sup m) for odd m. Then a method to construct the product function for designing the Massey-Omura multiplier is developed. It is shown that the construction of the product function based on a self-dual basis is simpler than that based on an arbitrary normal basis.

  12. Genetic algorithm based design optimization of a permanent magnet brushless dc motor

    NASA Astrophysics Data System (ADS)

    Upadhyay, P. R.; Rajagopal, K. R.

    2005-05-01

Genetic algorithm (GA) based design optimization of a permanent magnet brushless dc motor is presented in this paper. A 70 W, 350 rpm ceiling-fan motor with a radial-field configuration is designed by considering the efficiency as the objective function. Temperature rise and motor weight are the constraints, and the slot electric loading, magnet fraction, slot fraction, airgap, and airgap flux density are the design variables. The efficiency and the phase inductance of the motor designed using the developed CAD program are improved by the GA-based optimization technique, from 84.75% and 5.55 mH to 86.06% and 2.4 mH, respectively.

  13. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus in wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are applied is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.

  14. Design and simulation of imaging algorithm for Fresnel telescopy imaging system

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-yu; Liu, Li-ren; Yan, Ai-min; Sun, Jian-feng; Dai, En-wen; Li, Bing

    2011-06-01

Fresnel telescopy (short for Fresnel telescopy full-aperture synthesized imaging ladar) is a new high-resolution active laser imaging technique. It is a variant of Fourier telescopy and optical scanning holography that uses Fresnel zone plates to scan the target. Compared with synthetic aperture imaging ladar (SAIL), Fresnel telescopy avoids the problems of time and space synchronization, which decreases the technical difficulty. In the one-dimensional (1D) scanning mode for a moving target, the spatial distribution of the sampled data after time-to-space transformation is non-uniform because of the relative motion between the target and the scanning beam. However, since the subsequent matched-filtering imaging algorithm uses the fast Fourier transform (FFT), the data distribution must be regular and uniform. We use resampling interpolation to transform the data into a uniform two-dimensional (2D) distribution; the accuracy of the resampling interpolation process largely determines the reconstruction quality. Imaging algorithms with different resampling interpolation schemes are analyzed, and computer simulations are presented. We obtain good reconstruction results for the target, which shows that the designed imaging algorithm for the Fresnel telescopy imaging system is effective. This work has substantial practical value and offers significant benefits for high-resolution Fresnel telescopy laser imaging ladar systems.
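
The resampling step can be illustrated in one dimension: non-uniformly spaced samples are interpolated onto a uniform grid so the FFT-based matched filter can be applied. Linear interpolation here is a stand-in for whatever kernel the actual system uses:

```python
def resample_uniform(xs, ys, n):
    """Linearly interpolate samples (xs, ys), with xs ascending but
    non-uniformly spaced, onto n uniform points spanning [xs[0], xs[-1]]."""
    step = (xs[-1] - xs[0]) / (n - 1)
    out, j = [], 0
    for i in range(n):
        x = xs[0] + i * step
        while j < len(xs) - 2 and xs[j + 1] < x:
            j += 1  # advance to the segment containing x
        t = (x - xs[j]) / (xs[j + 1] - xs[j])
        out.append(ys[j] + t * (ys[j + 1] - ys[j]))
    return out

# a linear signal sampled at irregular positions resamples exactly
xs = [0.0, 0.3, 1.1, 1.5, 2.0]
ys = [2 * x for x in xs]
uniform = resample_uniform(xs, ys, 5)  # grid at x = 0, 0.5, 1.0, 1.5, 2.0
```

For curved data the interpolation error is what degrades the reconstruction, which is why the abstract singles out resampling accuracy.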

  15. Regularizing common spatial patterns to improve BCI designs: unified theory and new algorithms.

    PubMed

    Lotte, Fabien; Guan, Cuntai

    2011-02-01

One of the most popular feature extraction algorithms for brain-computer interfaces (BCI) is common spatial patterns (CSP). Despite its known efficiency and widespread use, CSP is also known to be very sensitive to noise and prone to overfitting. To address this issue, it has recently been proposed to regularize CSP. In this paper, we present a simple and unifying theoretical framework to design such a regularized CSP (RCSP). We then present a review of existing RCSP algorithms and describe how to cast them in this framework. We also propose four new RCSP algorithms. Finally, we compare the performance of 11 different RCSP algorithms (including the four new ones and the original CSP) on electroencephalography data from 17 subjects from BCI competition datasets. Results showed that the best RCSP methods can outperform CSP by nearly 10% in median classification accuracy and lead to more neurophysiologically relevant spatial filters. They also enable efficient subject-to-subject transfer. Overall, the best RCSP algorithms were CSP with Tikhonov regularization and with weighted Tikhonov regularization, both proposed in this paper.
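
Tikhonov-regularized CSP can be sketched on a 2x2 toy problem: the spatial filter w maximizes the class-1 variance relative to the total variance, with a penalty alpha * I that shrinks the filters and tames overfitting to noisy covariance estimates. Real implementations use a generalized eigensolver (e.g. scipy.linalg.eigh); the pure-Python 2x2 version and the covariances below are illustrative:

```python
import math

def rcsp_filter(C1, C2, alpha):
    """Top filter w maximizing w'C1w / (w'(C1 + C2 + alpha*I)w), 2x2 case."""
    B = [[C1[0][0] + C2[0][0] + alpha, C1[0][1] + C2[0][1]],
         [C1[1][0] + C2[1][0], C1[1][1] + C2[1][1] + alpha]]
    det_b = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    inv_b = [[B[1][1] / det_b, -B[0][1] / det_b],
             [-B[1][0] / det_b, B[0][0] / det_b]]
    M = [[sum(inv_b[i][k] * C1[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]  # M = inv(B) @ C1
    # largest eigenvalue of the 2x2 matrix M, then its eigenvector
    tr = M[0][0] + M[1][1]
    det_m = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    lam = (tr + math.sqrt(tr * tr - 4.0 * det_m)) / 2.0
    w = (M[0][1], lam - M[0][0])
    if abs(w[0]) < 1e-12 and abs(w[1]) < 1e-12:  # M already diagonal
        w = (1.0, 0.0) if M[0][0] >= M[1][1] else (0.0, 1.0)
    norm = math.hypot(*w)
    return (w[0] / norm, w[1] / norm)

C1 = [[4.0, 0.0], [0.0, 1.0]]  # class 1: variance concentrated on axis 0
C2 = [[1.0, 0.0], [0.0, 4.0]]  # class 2: variance concentrated on axis 1
w = rcsp_filter(C1, C2, alpha=0.1)  # filter points along axis 0
```

Weighted Tikhonov variants replace alpha * I with a diagonal matrix of per-channel penalties, which is one of the paper's proposals.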

  16. Design and Evaluation of a Dynamic Programming Flight Routing Algorithm Using the Convective Weather Avoidance Model

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Grabbe, Shon; Mukherjee, Avijit

    2010-01-01

The optimization of traffic flows in congested airspace with varying convective weather is a challenging problem. One approach is to generate shortest routes between origins and destinations while meeting airspace capacity constraints in the presence of uncertainties, such as weather and airspace demand. This study focuses on the development of an optimal flight path search algorithm that optimizes national airspace system throughput and efficiency in the presence of uncertainties. The algorithm is based on dynamic programming and utilizes the predicted probability that an aircraft will deviate around convective weather. It is shown that the running time of the algorithm increases linearly with the total number of links between all stages. The optimal routes minimize a combination of fuel cost and the expected cost of route deviation due to convective weather. They are considered as alternatives to the set of coded departure routes predefined by the FAA to reroute pre-departure flights around weather or air traffic constraints. A formula that calculates the predicted probability of deviation from a given flight path is also derived. The predicted probability of deviation is calculated for all path candidates, and the routes with the best probability are selected as optimal. The predicted probability of deviation serves as a computable measure of reliability in pre-departure rerouting. The algorithm can also be extended to automatically adjust its design parameters to satisfy a desired level of reliability.
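
The stage-based dynamic program can be sketched as follows: nodes are arranged in stages from origin to destination, and each link cost blends fuel cost with an expected weather-deviation penalty. Since each stage transition examines each link once, running time is linear in the total number of links between stages, as the abstract notes. Waypoint names, costs, and probabilities below are illustrative:

```python
def dp_route(stages, links, fuel_cost, p_dev, deviation_penalty):
    """stages: list of node lists (stage 0 holds the single origin, the
    last stage the single destination); links: set of allowed arcs (u, v).
    Returns (min expected cost, best path)."""
    best = {stages[0][0]: (0.0, [stages[0][0]])}
    for s in range(len(stages) - 1):
        nxt = {}
        for u in stages[s]:
            if u not in best:
                continue
            cost_u, path_u = best[u]
            for v in stages[s + 1]:
                if (u, v) not in links:
                    continue
                # link cost = fuel + expected cost of deviating around weather
                c = cost_u + fuel_cost[(u, v)] + p_dev[(u, v)] * deviation_penalty
                if v not in nxt or c < nxt[v][0]:
                    nxt[v] = (c, path_u + [v])
        best = nxt
    return best[stages[-1][0]]

stages = [["O"], ["A", "B"], ["D"]]
links = {("O", "A"), ("O", "B"), ("A", "D"), ("B", "D")}
fuel = {("O", "A"): 10.0, ("O", "B"): 12.0, ("A", "D"): 10.0, ("B", "D"): 10.0}
p_dev = {("O", "A"): 0.5, ("O", "B"): 0.0, ("A", "D"): 0.0, ("B", "D"): 0.0}
cost, path = dp_route(stages, links, fuel, p_dev, deviation_penalty=10.0)
```

Here the cheaper-fuel route through A loses to the route through B once the expected deviation cost on O-A is charged, illustrating the fuel/weather trade-off.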

  17. The design and hardware implementation of a low-power real-time seizure detection algorithm.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Ward, Matthew P; Worth, Robert M; Roy, Kaushik; Irazoqui, Pedro P

    2009-10-01

    Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 +/- 0.02% and 88.9 +/- 0.01% (mean +/- SE(alpha = 0.05)), respectively, on untrained data with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage from simulations on the MIT 180 nm SOI process.

  18. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling the couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach and the inner iteration method are used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's reasonableness and effectiveness. PMID:25431584

  19. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, improving product quality and reducing development cost determine the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling the couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the work transformation matrix (WTM) model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining the two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to solve the resulting problem. Finally, an engineering design of a chemical processing system is presented to verify the model's reasonability and effectiveness.

  20. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    DOE PAGES

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.; ...

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.
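
    The per-scenario feasibility test at the heart of this formulation reduces to a max-flow computation: a scenario is feasible under the chosen arc capacities iff a flow from a super-source to a super-sink saturates all demand. A small sketch of just that check (the paper's separation routine and greedy capacity search are not reproduced; the network and scenarios below are made up):

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp max flow on an adjacency-matrix capacity table."""
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # Find the bottleneck along the augmenting path, then push flow.
        v, bottleneck = t, float("inf")
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def scenario_feasible(arcs, capacity, balance):
    """A scenario (node balances: + supply, - demand) is feasible iff a
    flow saturating all demand exists under the chosen arc capacities."""
    n = len(balance)
    s, t = n, n + 1          # super-source and super-sink
    cap = [[0] * (n + 2) for _ in range(n + 2)]
    for (u, v) in arcs:
        cap[u][v] += capacity[(u, v)]
    demand = 0
    for i, b in enumerate(balance):
        if b > 0:
            cap[s][i] = b
        elif b < 0:
            cap[i][t] = -b
            demand += -b
    return max_flow(n + 2, cap, s, t) == demand

# Tiny 3-node example with two scenarios; capacities are hypothetical.
arcs = [(0, 1), (1, 2), (0, 2)]
capacity = {(0, 1): 5, (1, 2): 5, (0, 2): 3}
scenarios = [[8, 0, -8], [10, 0, -10]]
feasible = [scenario_feasible(arcs, capacity, b) for b in scenarios]
alpha = 100.0 * sum(feasible) / len(scenarios)
print(feasible, alpha)  # the capacities cover scenario 1 but not scenario 2
```

    In the chance-constrained setting, `alpha` computed this way is what must meet the user-defined threshold.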

  1. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    SciTech Connect

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.; Castaing, Jeremy

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.

  2. ICRPfinder: a fast pattern design algorithm for coding sequences and its application in finding potential restriction enzyme recognition sites

    PubMed Central

    Li, Chao; Li, Yuhua; Zhang, Xiangmin; Stafford, Phillip; Dinu, Valentin

    2009-01-01

    Background Restriction enzymes can produce easily definable segments from DNA sequences by using a variety of cut patterns. There are, however, no software tools that can aid in gene building -- that is, modifying wild-type DNA sequences to express the same wild-type amino acid sequences but with enhanced codons, specific cut sites, unique post-translational modifications, and other engineered-in components for recombinant applications. A fast DNA pattern design algorithm, ICRPfinder, is provided in this paper and applied to find or create potential recognition sites in target coding sequences. Results ICRPfinder is applied to find or create restriction enzyme recognition sites by introducing silent mutations. The algorithm is shown capable of mapping existing cut-sites but importantly it also can generate specified new unique cut-sites within a specified region that are guaranteed not to be present elsewhere in the DNA sequence. Conclusion ICRPfinder is a powerful tool for finding or creating specific DNA patterns in a given target coding sequence. ICRPfinder finds or creates patterns, which can include restriction enzyme recognition sites, without changing the translated protein sequence. ICRPfinder is a browser-based JavaScript application and it can run on any platform, in on-line or off-line mode. PMID:19747395

  3. Electromagnetic sunscreen model: design of experiments on particle specifications.

    PubMed

    Lécureux, Marie; Deumié, Carole; Enoch, Stefan; Sergent, Michelle

    2015-10-01

    We report a numerical study on sunscreen design and optimization. Thanks to the combined use of electromagnetic modeling and design of experiments, we are able to screen the most relevant parameters of mineral filters and to optimize sunscreens. Several electromagnetic modeling methods are used depending on the type of particles, density of particles, etc. Both the sun protection factor (SPF) and the UVB/UVA ratio are considered. We show that the design-of-experiments model should include interactions between materials and other parameters. We conclude that the material of the particles is a key parameter for the SPF and the UVB/UVA ratio. Among the materials considered, none is optimal for both. The SPF is also highly dependent on the size of the particles.

  4. Formal design specification of a Processor Interface Unit

    NASA Technical Reports Server (NTRS)

    Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.

    1992-01-01

    This report describes work to formally specify the requirements and design of a processor interface unit (PIU), a single-chip subsystem providing memory-interface, bus-interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. The need for high-quality design assurance in such applications is an undisputed fact, given the disastrous consequences that even a single design flaw can produce. Thus, the further development and application of formal methods to fault-tolerant systems is of critical importance as these systems see increasing use in modern society.

  5. Specific volume coupling and convergence properties in hybrid particle/finite volume algorithms for turbulent reactive flows

    NASA Astrophysics Data System (ADS)

    Popov, Pavel P.; Wang, Haifeng; Pope, Stephen B.

    2015-08-01

    We investigate the coupling between the two components of a Large Eddy Simulation/Probability Density Function (LES/PDF) algorithm for the simulation of turbulent reacting flows. In such an algorithm, the Large Eddy Simulation (LES) component provides a solution to the hydrodynamic equations, whereas the Lagrangian Monte Carlo Probability Density Function (PDF) component solves for the PDF of chemical compositions. Special attention is paid to the transfer of specific volume information from the PDF to the LES code: the specific volume field contains probabilistic noise due to the nature of the Monte Carlo PDF solution, and thus the use of the specific volume field in the LES pressure solver needs careful treatment. Using a test flow based on the Sandia/Sydney Bluff Body Flame, we determine the optimal strategy for specific volume feedback. Then, the overall second-order convergence of the entire LES/PDF procedure is verified using a simple vortex ring test case, with special attention being given to bias errors due to the number of particles per LES Finite Volume (FV) cell.
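
    The probabilistic noise in the Monte Carlo specific volume field decays with the number of particles per FV cell at the usual 1/sqrt(N) Monte Carlo rate, which is why the particle count drives the bias/noise trade-off studied in the paper. A toy illustration with a made-up per-particle distribution (not the LES/PDF coupling itself):

```python
import random

random.seed(0)

def cell_mean_specific_volume(n_particles):
    """Monte Carlo estimate of one cell's mean specific volume; each
    particle carries a value drawn from a made-up uniform distribution."""
    samples = [0.8 + 0.2 * random.random() for _ in range(n_particles)]
    return sum(samples) / n_particles

def rms_error(n_particles, trials=2000, truth=0.9):
    """Root-mean-square statistical error of the cell-mean estimator."""
    errs = [(cell_mean_specific_volume(n_particles) - truth) ** 2
            for _ in range(trials)]
    return (sum(errs) / trials) ** 0.5

e10, e40 = rms_error(10), rms_error(40)
print(round(e10 / e40, 2))  # ~2.0: quadrupling the particles halves the noise
```

    This is why feeding the raw particle-derived specific volume into the LES pressure solver needs care: at realistic particle counts per cell, the statistical noise is far from negligible.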

  6. A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Uttara

    2012-10-01

    This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to find the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering challenges and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against a genetic algorithm, simulated annealing, and a (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.
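
    To make the configuration problem concrete: with Ns cells in series per string and Np parallel strings, the stack delivers P = V_load * Np * i(V_load/Ns) at the load's operating voltage. A brute-force sketch under a hypothetical linear polarization curve V_cell = E0 - r*i (the actual cell model from Journal of Power Sources 131 is nonlinear and not reproduced here; all constants are illustrative):

```python
# Brute-force search over stack configurations: ns cells in series per
# string, np_ parallel strings, subject to a total cell budget.
E0, r = 0.9, 0.002     # open-circuit voltage [V], cell resistance [ohm] (made up)
V_LOAD = 24.0          # load operating voltage [V]
MAX_CELLS = 200        # total cell budget

best = None
for ns in range(1, MAX_CELLS + 1):
    v_cell = V_LOAD / ns
    if v_cell >= E0:          # a cell cannot operate above its OCV
        continue
    i = (E0 - v_cell) / r     # current each string delivers at V_LOAD
    for np_ in range(1, MAX_CELLS // ns + 1):
        power = V_LOAD * np_ * i
        if best is None or power > best[0]:
            best = (power, ns, np_)

power, ns, np_ = best
print(f"best: {ns} cells x {np_} strings -> {power:.1f} W")
```

    Even this toy version shows the discrete structure of the search space; with a realistic nonlinear polarization curve the landscape becomes multimodal, which is what motivates the stochastic heuristic.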

  7. A high precision position sensor design and its signal processing algorithm for a maglev train.

    PubMed

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High-precision positioning technology for a high-speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by experiments on a test train during a long-term test run.
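
    The authors' TD design is not reproduced here, but the widely used discrete-time tracking differentiator of Han, built around the time-optimal fhan function, illustrates the same idea of jointly filtering a signal and estimating its derivative:

```python
import math

def fhan(x1, x2, r, h):
    """Time-optimal control function of Han's discrete tracking
    differentiator; r bounds the acceleration, h is the step size."""
    d = r * h
    d0 = d * h
    y = x1 + h * x2
    a0 = math.sqrt(d * d + 8.0 * r * abs(y))
    if abs(y) > d0:
        a = x2 + (a0 - d) / 2.0 * math.copysign(1.0, y)
    else:
        a = x2 + y / h
    if abs(a) > d:
        return -r * math.copysign(1.0, a)
    return -r * a / d

def track(signal, r=100.0, h=0.01):
    """Return the filtered signal x1 and its derivative estimate x2."""
    x1, x2 = 0.0, 0.0
    xs, dxs = [], []
    for v in signal:
        x1, x2 = x1 + h * x2, x2 + h * fhan(x1 - v, x2, r, h)
        xs.append(x1)
        dxs.append(x2)
    return xs, dxs

# Track a constant position reading of 1.0 starting from rest.
xs, dxs = track([1.0] * 500)
print(round(xs[-1], 3), round(dxs[-1], 3))
```

    The speed factor r trades tracking delay against noise rejection, which is the same delay-constant trade-off the abstract compensates for.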

  8. A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks.

    PubMed

    Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza

    2015-01-01

    Local access networks (LAN) are commonly used as communication infrastructures which meet the demand of a set of users in the local environment. Usually these networks consist of several LAN segments connected by bridges. The bi-level topological LAN design problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a minimum response-time network with minimum connection cost. Therefore, the decision of optimally assigning users to clusters is made by the leader, and the follower makes the decision of connecting all the clusters while forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a Local Access Network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower's problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach.
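
    For a fixed leader decision (user-to-cluster assignment), a pure connection-cost follower reduces to a minimum spanning tree over the clusters; the paper's follower also accounts for response time, which is what makes it hard to solve straightforwardly. A toy Stackelberg evaluation with made-up costs, enumerating leader decisions instead of running a GA (the imbalance term standing in for the leader's objective is hypothetical):

```python
import itertools

def mst_cost(n, cost):
    """Prim's algorithm: min-cost spanning tree over n clusters,
    where cost[i][j] is the bridge cost between clusters i and j."""
    in_tree = [False] * n
    dist = [float("inf")] * n
    dist[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v]:
                dist[v] = min(dist[v], cost[u][v])
    return total

def leader_fitness(assignment, users, bridge_cost):
    """Hypothetical bi-level fitness: the leader pays a clustering cost
    (here, cluster-size imbalance) plus the follower's MST reaction."""
    n = len(bridge_cost)
    sizes = [assignment.count(c) for c in range(n)]
    imbalance = max(sizes) - min(sizes)
    return imbalance + mst_cost(n, bridge_cost)

bridge_cost = [
    [0.0, 4.0, 1.0],
    [4.0, 0.0, 2.0],
    [1.0, 2.0, 0.0],
]
users = list(range(6))
best = min(itertools.product(range(3), repeat=len(users)),
           key=lambda a: leader_fitness(a, users, bridge_cost))
print(leader_fitness(best, users, bridge_cost))
```

    In the paper, a GA searches the leader's space instead of enumerating it, and the follower reaction is what each chromosome's fitness evaluation must solve.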

  9. A High Precision Position Sensor Design and Its Signal Processing Algorithm for a Maglev Train

    PubMed Central

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High-precision positioning technology for a high-speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by experiments on a test train during a long-term test run. PMID:22778582

  10. A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks

    PubMed Central

    Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza

    2015-01-01

    Local access networks (LAN) are commonly used as communication infrastructures which meet the demand of a set of users in the local environment. Usually these networks consist of several LAN segments connected by bridges. The bi-level topological LAN design problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a minimum response-time network with minimum connection cost. Therefore, the decision of optimally assigning users to clusters is made by the leader, and the follower makes the decision of connecting all the clusters while forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a Local Access Network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower's problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach. PMID:26102502

  11. Automated Tactical Symbology System (TACSYM): System Design Specifications

    DTIC Science & Technology

    1984-03-01


  12. Requirement Specifications for a Design and Verification Unit.

    ERIC Educational Resources Information Center

    Pelton, Warren G.; And Others

    A research and development activity to introduce new and improved education and training technology into Bureau of Medicine and Surgery training is recommended. The activity, called a design and verification unit, would be administered by the Education and Training Sciences Department. Initial research and development are centered on the…

  13. As-built design specification for segment map (Sgmap) program

    NASA Technical Reports Server (NTRS)

    Tompkins, M. A. (Principal Investigator)

    1981-01-01

    The segment map program (SGMAP), which is part of the CLASFYT package, is described in detail. This program is designed to output symbolic maps or numerical dumps from LANDSAT cluster/classification files or aircraft ground truth/processed ground truth files which are in 'universal' format.

  14. Specifications for a COM Catalog Designed for Government Documents.

    ERIC Educational Resources Information Center

    Copeland, Nora S.; And Others

    Prepared in MARC format in accordance with the Ohio College Library Center (OCLC) standards, these specifications were developed at Colorado State University to catalog a group of government publications not listed in the Monthly Catalog of United States Publications. The resulting microfiche catalog produced through the OCLC Cataloging Subsystem…

  15. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  16. ParaDock: a flexible non-specific DNA--rigid protein docking algorithm.

    PubMed

    Banitt, Itamar; Wolfson, Haim J

    2011-11-01

    Accurate prediction of protein-DNA complexes could provide an important stepping stone towards a thorough comprehension of vital intracellular processes. Few attempts have been made to tackle this issue, focusing on binding patch prediction, protein function classification and distance constraints-based docking. We introduce ParaDock: a novel ab initio protein-DNA docking algorithm. ParaDock combines short DNA fragments, which have been rigidly docked to the protein based on geometric complementarity, to create bent planar DNA molecules of arbitrary sequence. Our algorithm was tested on the bound and unbound targets of a protein-DNA benchmark comprising 47 complexes. Without addressing protein flexibility or applying any refinement procedure, CAPRI-acceptable solutions were obtained among the 10 top-ranked hypotheses in 83% of the bound complexes and 70% of the unbound. Without requiring prior knowledge of DNA length and sequence, and within <2 h per target on a standard 2.0 GHz single-processor CPU, ParaDock offers a fast ab initio docking solution.

  17. NASIS data base management system: IBM 360 TSS implementation. Volume 4: Program design specifications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design specifications for the programs and modules within the NASA Aerospace Safety Information System (NASIS) are presented. The purpose of the design specifications is to standardize the preparation of the specifications and to guide the program design. Each major functional module within the system is a separate entity for documentation purposes. The design specifications contain a description of, and specifications for, all detail processing which occurs in the module. Sub-models, reference tables, and data sets which are common to several modules are documented separately.

  18. Structure based re-design of the binding specificity of anti-apoptotic Bcl-xL

    PubMed Central

    Chen, T. Scott; Palacios, Hector; Keating, Amy E.

    2012-01-01

    Many native proteins are multi-specific and interact with numerous partners, which can confound analysis of their functions. Protein design provides a potential route to generating synthetic variants of native proteins with more selective binding profiles. Re-designed proteins could be used as research tools, diagnostics or therapeutics. In this work, we used a library screening approach to re-engineer the multi-specific anti-apoptotic protein Bcl-xL to remove its interactions with many of its binding partners, making it a high affinity and selective binder of the BH3 region of pro-apoptotic protein Bad. To overcome the enormity of the potential Bcl-xL sequence space, we developed and applied a computational/experimental framework that used protein structure information to generate focused combinatorial libraries. Sequence features were identified using structure-based modeling, and an optimization algorithm based on integer programming was used to select degenerate codons that maximally covered these features. A constraint on library size was used to ensure thorough sampling. Using yeast surface display to screen a designed library of Bcl-xL variants, we successfully identified a protein with ~1,000-fold improvement in binding specificity for the BH3 region of Bad over the BH3 region of Bim. Although negative design was targeted only against the BH3 region of Bim, the best re-designed protein was globally specific against binding to 10 other peptides corresponding to native BH3 motifs. Our design framework demonstrates an efficient route to highly specific protein binders and may readily be adapted for application to other design problems. PMID:23154169
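
    The library-size constraint mentioned in this abstract is simple arithmetic over the IUPAC degenerate-base alphabet: the number of distinct DNA sequences in a combinatorial library is the product of the per-position degeneracies. A sketch of just that bookkeeping (the integer-programming codon selection itself is not reproduced):

```python
# Number of nucleotides each IUPAC ambiguity code stands for.
IUPAC = {"A": 1, "C": 1, "G": 1, "T": 1,
         "K": 2, "M": 2, "R": 2, "S": 2, "W": 2, "Y": 2,
         "B": 3, "D": 3, "H": 3, "V": 3, "N": 4}

def library_size(degenerate_codons):
    """Distinct DNA sequences encoded by a list of degenerate codons
    (one codon per randomized position)."""
    size = 1
    for codon in degenerate_codons:
        for base in codon:
            size *= IUPAC[base]
    return size

# e.g. NNK randomization at two positions plus a fixed GCT codon:
print(library_size(["NNK", "NNK", "GCT"]))  # 32 * 32 * 1 = 1024
```

    A size cap of this form is what allows the optimizer to trade feature coverage against the thorough-sampling requirement in the screen.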

  19. A review of design issues specific to hypersonic flight vehicles

    NASA Astrophysics Data System (ADS)

    Sziroczak, D.; Smith, H.

    2016-07-01

    This paper provides an overview of the current technical issues and challenges associated with the design of hypersonic vehicles. Two distinct classes of vehicles are reviewed: hypersonic transports and space launchers; their common features and differences are examined. After a brief historical overview, the paper takes a multi-disciplinary approach to these vehicles and discusses various design aspects and technical challenges. Operational issues are explored, including mission profiles, current and predicted markets, environmental effects and human factors. Technological issues are also reviewed, focusing on the three major challenge areas associated with these vehicles: aerothermodynamics, propulsion, and structures. Matters of reliability and maintainability are presented as well. The paper also reviews the certification and flight testing of these vehicles from a global perspective. Finally, the current stakeholders in the field of hypersonic flight are presented, summarizing the active programs and promising concepts.

  20. Septa design for a prostate specific PET camera

    SciTech Connect

    Qi, Jinyi; Huber, Jennifer S.; Huesman, Ronald H.; Moses, William W.; Derenzo, Stephen E.; Budinger, Thomas F.

    2003-11-15

    The recent development of new prostate tracers has motivated us to build a low cost PET camera optimized to image the prostate. Coincidence imaging of positron emitters is achieved using a pair of external curved detector banks. The bottom bank is fixed below the patient bed, and the top bank moves upward for patient access and downward for maximum sensitivity. In this paper, we study the design of septa for the prostate camera using Monte Carlo simulations. The system performance is measured by the detectability of a prostate lesion. We have studied 17 septa configurations. The results show that the design of septa has a large impact on the lesion detection at a given activity concentration. Significant differences are also observed between the lesion detectability and the conventional noise equivalent count (NEC) performance, indicating that the NEC is not appropriate for the detection task.

  1. Rational design of a triple helix-specific intercalating ligand.

    PubMed

    Escudé, C; Nguyen, C H; Kukreti, S; Janin, Y; Sun, J S; Bisagni, E; Garestier, T; Hélène, C

    1998-03-31

    DNA triple helices offer new perspectives toward oligonucleotide-directed gene regulation. However, the poor stability of some of these structures might limit their use under physiological conditions. Specific ligands can intercalate into DNA triple helices and stabilize them. Molecular modeling and thermal denaturation experiments suggest that benzo[f]pyrido[3, 4-b]quinoxaline derivatives intercalate into triple helices by stacking preferentially with the Hoogsteen-paired bases. Based on this model, it was predicted that a benzo[f]quino[3,4-b]quinoxaline derivative, which possesses an additional aromatic ring, could engage additional stacking interactions with the pyrimidine strand of the Watson-Crick double helix upon binding of this pentacyclic ligand to a triplex structure. This compound was synthesized. Thermal denaturation experiments and inhibition of restriction enzyme cleavage show that this new compound can indeed stabilize triple helices with great efficiency and specificity and/or induce triple helix formation under physiological conditions.

  2. The extended PP1 toolkit: designed to create specificity

    PubMed Central

    Bollen, Mathieu; Peti, Wolfgang; Ragusa, Michael J.; Beullens, Monique

    2011-01-01

    Protein Ser/Thr phosphatase-1 (PP1) catalyzes the majority of eukaryotic protein dephosphorylation reactions in a highly regulated and selective manner. Recent studies have identified an unusually diversified PP1 interactome with the properties of a regulatory toolkit. PP1-interacting proteins (PIPs) function as targeting subunits, substrates and/or inhibitors. As targeting subunits, PIPs contribute to substrate selection by bringing PP1 into the vicinity of specific substrates and by modulating substrate specificity via additional substrate docking sites or blocking substrate-binding channels. Many of the nearly 200 established mammalian PIPs are predicted to be intrinsically disordered, a property that facilitates their binding to a large surface area of PP1 via multiple docking motifs. These novel insights offer perspectives for the therapeutic targeting of PP1 by interfering with the binding of PIPs or substrates. PMID:20399103

  3. 14 CFR 91.705 - Operations within airspace designated as Minimum Navigation Performance Specification Airspace.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Minimum Navigation Performance Specification Airspace. 91.705 Section 91.705 Aeronautics and Space FEDERAL... Operations within airspace designated as Minimum Navigation Performance Specification Airspace. (a) Except as... airspace designated as Minimum Navigation Performance Specifications airspace unless— (1) The aircraft...

  4. Novel designed enediynes: molecular design, chemical synthesis, mode of cycloaromatization and guanine-specific DNA cleavage.

    PubMed

    Toshima, K; Ohta, K; Kano, T; Nakamura, T; Nakata, M; Kinoshita, M; Matsumura, S

    1996-01-01

    The molecular design and chemical synthesis of novel enediyne molecules related to the neocarzinostatin chromophore (1), and their chemical and DNA cleaving properties are described. The 10-membered enediyne triols 16-18 were effectively synthesized from xylitol (10) in a short step, and found to be quite stable when handled at room temperature. The representative and acylated enediyne 16 was cycloaromatized by 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU) in cyclohexa-1,4-diene-benzene to give the benzenoid product 21 through a radical pathway. On the other hand, the enediyne 16 was cycloaromatized by diethylamine in dimethyl sulfoxide-Tris-HCl, pH 8.5 buffer to afford another benzenoid product 22 as a diethylamine adduct through a polar pathway. Furthermore, the enediynes 16-18 were found to exhibit guanine-specific DNA cleavage under weakly basic conditions with no additive.

  5. Computational thermodynamics, Gaussian processes and genetic algorithms: combined tools to design new alloys

    NASA Astrophysics Data System (ADS)

    Tancret, F.

    2013-06-01

    A new alloy design procedure is proposed, combining in a single computational tool several modelling and predictive techniques that have already been used and assessed in the field of materials science and alloy design: a genetic algorithm is used to optimize the alloy composition for target properties and performance on the basis of the prediction of mechanical properties (estimated by Gaussian process regression of data on existing alloys) and of microstructural constitution, stability and processability (evaluated by computational thermodynamics). These tools are integrated in a unique Matlab programme. An example is given in the case of the design of a new nickel-base superalloy for future power plant applications (such as the ultra-supercritical (USC) coal-fired plant, or the high-temperature gas-cooled nuclear reactor (HTGCR or HTGR)), where the selection criteria include cost, oxidation and creep resistance around 750 °C, long-term stability at service temperature, forgeability, weldability, etc.
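
    A minimal sketch of the GA layer of such a tool, with a made-up merit function standing in for the Gaussian-process property models and the computational-thermodynamics checks (the element list, bounds and weights are purely illustrative, not from the paper):

```python
import random

random.seed(1)

ELEMENTS = ["Cr", "Co", "Mo", "Al", "Ti"]  # balance Ni; hypothetical design space
LOW  = [15.0, 5.0, 0.0, 0.5, 0.5]          # wt% bounds, illustrative only
HIGH = [25.0, 20.0, 8.0, 4.0, 4.0]

def predict_merit(x):
    """Stand-in for the GP property model plus stability checks: a smooth
    made-up merit that penalizes total alloying content above 45 wt%."""
    merit = sum(xi * w for xi, w in zip(x, [1.0, 0.6, 1.2, 2.0, 1.5]))
    excess = max(0.0, sum(x) - 45.0)
    return merit - 10.0 * excess

def clip(x):
    return [min(max(v, lo), hi) for v, lo, hi in zip(x, LOW, HIGH)]

def ga(pop_size=40, generations=60):
    pop = [clip([random.uniform(lo, hi) for lo, hi in zip(LOW, HIGH)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=predict_merit, reverse=True)
        elite = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(ai + bi) / 2 + random.gauss(0.0, 0.3)
                     for ai, bi in zip(a, b)]  # blend crossover + mutation
            children.append(clip(child))
        pop = elite + children
    return max(pop, key=predict_merit)

best = ga()
print(dict(zip(ELEMENTS, [round(v, 2) for v in best])))
```

    In the real tool, each fitness evaluation calls the regression models and thermodynamic calculations, so the GA's evaluation budget, not its arithmetic, dominates the cost.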

  6. GADISI - Genetic Algorithms Applied to the Automatic Design of Integrated Spiral Inductors

    NASA Astrophysics Data System (ADS)

    Pereira, Pedro; Fino, M. Helena; Coito, Fernando; Ventim-Neves, Mário

    This work introduces a tool for the optimization of CMOS integrated spiral inductors. The main objective of this tool is to offer designers a first approach for the determination of the inductor layout parameters. The core of the tool is a Genetic Algorithm (GA) optimization procedure where technology constraints on the inductor layout parameters are considered. Further constraints regarding inductor design heuristics are also accounted for. Since the layout parameters are inherently discrete due to technology and topology constraints, discrete-variable optimization techniques are used. The Matlab GA toolbox is used, and the modifications to the GA functions that yield technology-feasible solutions are presented. For the sake of efficiency and simplicity, the pi-model is used for characterizing the inductor. The validity of the design results obtained with the tool is checked against circuit simulation with ASITIC.

  7. Eliminating chromatic aberration in Gauss-type lens design using a novel genetic algorithm.

    PubMed

    Fang, Yi-Chin; Tsai, Chen-Mu; Macdonald, John; Pai, Yang-Chieh

    2007-05-01

    Two different types of Gauss lens design that effectively eliminate primary chromatic aberration are presented using an efficient genetic algorithm (GA). Current GAs have to deal with too many targets in optical global optimization, so performance is not much improved. Generally speaking, chromatic aberrations depend strongly on the glass sets chosen for all elements. For optics whose design is roughly convergent, the glass sets play a significant role in axial and lateral color aberration. Better results might therefore be derived by first finding feasible glass sets and then carrying out the process of eliminating chromatic aberration. As an alternative, we propose a new optimization process that uses a GA together with theories of geometrical optics to select the best optical glass combination. Two Gauss-type lens designs are employed in this research: first, a telephoto lens design, which is sensitive to axial aberration because of its long focal length; and second, a wide-angle Gauss design, which is complicated by lateral color aberration at the extreme corners, because the Gauss design is well known not to deal well with wide-angle problems. Without a number of higher chief rays passing the element, it is difficult to correct lateral color aberration altogether for the Gauss design. The results and conclusions show that the attempts to eliminate primary chromatic aberration were successful.

  8. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    PubMed

    Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart

    2016-01-01

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. The
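
    The locus-averaged Shannon entropy (LASE) term of the objective is straightforward to compute from allele frequencies. The sketch below is a deliberate simplification: it scores biallelic SNPs by entropy alone and greedily keeps the k most informative, ignoring the map-length, uniformity, and haplotype terms of the full MOLO objective.

```python
import math

def locus_entropy(p):
    """Shannon entropy (bits) of a biallelic SNP with minor-allele frequency p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def select_snps(freqs, k):
    """Greedy stand-in for the MOLO objective: keep the k most informative loci."""
    ranked = sorted(range(len(freqs)),
                    key=lambda i: locus_entropy(freqs[i]), reverse=True)
    return sorted(ranked[:k])

# illustrative minor-allele frequencies for five candidate SNPs
freqs = [0.5, 0.05, 0.3, 0.01, 0.45]
chosen = select_snps(freqs, 3)  # indices of the 3 highest-entropy SNPs
```

    A SNP at 50% frequency carries the full 1 bit of information; near-monomorphic loci contribute almost nothing, which is why the entropy term favors informative markers.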

  9. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications

    PubMed Central

    Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R.; Taylor, Jeremy F.; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart

    2016-01-01

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. The

  10. Double images encryption method with resistance against the specific attack based on an asymmetric algorithm.

    PubMed

    Wang, Xiaogang; Zhao, Daomu

    2012-05-21

    A double-image encryption technique based on an asymmetric algorithm is proposed. In this method, the encryption process differs from the decryption process, and the encrypting keys are different from the decrypting keys. In the nonlinear encryption process, the images are encoded into an amplitude ciphertext, and two phase-only masks (POMs) generated by phase truncation are kept as keys for decryption. By using the classical double random phase encoding (DRPE) system, the primary images can be collected by an intensity detector located at the output plane. The three random POMs applied in the asymmetric encryption can safely serve as public keys. Simulation results are presented to demonstrate the validity and security of the proposed protocol.

  11. Restraint system design and evaluation for military specific applications

    NASA Astrophysics Data System (ADS)

    Karwaczynski, Sebastian

    This research focuses on designing an optimal restraint system for use in military vehicle applications. The designed restraint system must accommodate a wide range of DHMs and ATDs with and without PPE such as helmets, boots, and body armor. The evaluation of the restraint systems was conducted in a simulated vehicle environment, which was utilized to downselect the ideal restraint system for this program. In December 2011 the OCP TECD program was formulated to increase occupant protection. To do this, 3D computer models were created to accommodate the entire Soldier population in the Army. These models included the full PPE and were later utilized for space-claim activities and for designing new seats and restraints to accommodate them. Additionally, guidelines to increase protection levels while providing optimal comfort to the Soldier were created. Current and emerging threats were evaluated and focused on at the time of the program's inception. Throughout this program, various activities were conducted for restraint downselection, including Soldier evaluations of various restraint system configurations. The Soldiers were given an opportunity to evaluate each system in a representative seat, which allowed them to position themselves in a manner consistent with mission requirements. Systems ranged from fully automated to manual-adjustment types. Each system was evaluated and analyzed against the others. It was discovered that restraint systems utilizing retractors allowed for automatic webbing stowage and for easier access and repeatability when donning and doffing the restraint. It was also found that when an aid was introduced to help the Soldier don the restraint, it was more likely that the system would be utilized. Restraints were evaluated in drop-tower experiments in addition to actual blast tests. An evaluation with this amount of detail had not been attempted

  12. Advanced Free Flight Planner and Dispatcher's Workstation: Preliminary Design Specification

    NASA Technical Reports Server (NTRS)

    Wilson, J.; Wright, C.; Couluris, G. J.

    1997-01-01

    The National Aeronautics and Space Administration (NASA) has implemented the Advanced Air Transportation Technology (AATT) program to investigate future improvements to the national and international air traffic management systems. This research, as part of the AATT program, developed preliminary design requirements for an advanced Airline Operations Control (AOC) dispatcher's workstation, with emphasis on flight planning. This design will support the implementation of an experimental workstation in NASA laboratories that would emulate AOC dispatch operations. The work developed an airline flight plan data base and specified requirements for: a computer tool for generation and evaluation of free flight, user preferred trajectories (UPT); the kernel of an advanced flight planning system to be incorporated into the UPT-generation tool; and an AOC workstation to house the UPT-generation tool and to provide a real-time testing environment. A prototype for the advanced flight plan optimization kernel was developed and demonstrated. The flight planner uses dynamic programming to search a four-dimensional wind and temperature grid to identify the optimal route, altitude and speed for successive segments of a flight. An iterative process is employed in which a series of trajectories are successively refined until the UPT is identified. The flight planner is designed to function in the current operational environment as well as in free flight. The free flight environment would enable greater flexibility in UPT selection based on alleviation of current procedural constraints. The prototype also takes advantage of advanced computer processing capabilities to implement more powerful optimization routines than would be possible with older computer systems.
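
    The dynamic-programming search can be illustrated on a drastically reduced version of the problem: a 1D ladder of altitude levels in place of the 4D wind and temperature grid, with a hypothetical table of segment ground speeds. This is my own reduction, not the prototype's algorithm.

```python
# Minimal dynamic-programming route optimizer in the spirit of the abstract:
# for each flight segment, choose the altitude level that minimizes cumulative
# flight time, given a (hypothetical) table of ground speeds per segment/altitude.
def plan_route(speeds, seg_len=100.0):
    """speeds[s][a] = ground speed on segment s at altitude level a."""
    n_alt = len(speeds[0])
    cost = [0.0] * n_alt          # best cumulative time ending at each altitude
    back = []                     # backpointers for the traceback
    for seg in speeds:
        new_cost, choice = [], []
        for a in range(n_alt):
            # allow climbing/descending at most one level per segment
            best_prev = min((cost[b], b) for b in range(n_alt) if abs(b - a) <= 1)
            new_cost.append(best_prev[0] + seg_len / seg[a])
            choice.append(best_prev[1])
        cost, back = new_cost, back + [choice]
    # trace back the optimal altitude profile (one level per segment)
    a = min(range(n_alt), key=lambda i: cost[i])
    profile = [a]
    for choice in reversed(back):
        a = choice[a]
        profile.append(a)
    return min(cost), list(reversed(profile))[1:]

# fly high where winds favor it, descend when the low level becomes faster
total_time, profile = plan_route([[100, 200], [100, 200], [200, 100]])
```

    The real planner sweeps route, altitude, and speed against wind and temperature data, but the recurrence (best cost to each state, then a traceback) is the same.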

  13. Home Care Nursing via Computer Networks: Justification and Design Specifications

    PubMed Central

    Brennan, Patricia Flatley

    1988-01-01

    High-tech home care includes the use of information technologies, such as computer networks, to provide direct care to patients in the home. This paper presents the justification and design of a project using a free, public access computer network to deliver home care nursing. The intervention attempts to reduce isolation and improve problem solving among home care patients and their informal caregivers. Three modules comprise the intervention: a decision module, a communications module, and an information data base. This paper describes the experimental evaluation of the project, and discusses issues in the delivery of nursing care via computers.

  14. Cooperativity and specificity of association of a designed transmembrane peptide.

    PubMed Central

    Gratkowski, Holly; Dai, Qing-Hong; Wand, A Joshua; DeGrado, William F; Lear, James D

    2002-01-01

    Thermodynamics studies aimed at quantitatively characterizing free energy effects of amino acid substitutions are not restricted to two state systems, but do require knowing the number of states involved in the equilibrium under consideration. Using analytical ultracentrifugation and NMR methods, we show here that a membrane-soluble peptide, MS1, designed by modifying the sequence of the water-soluble coiled-coil GCN4-P1, exhibits a reversible monomer-dimer-trimer association in detergent micelles with a greater degree of cooperativity in C14-betaine than in dodecyl phosphocholine detergents. PMID:12202385

  15. Adaptive filter design based on the LMS algorithm for delay elimination in TCR/FC compensators.

    PubMed

    Hooshmand, Rahmat Allah; Torabian Esfahani, Mahdi

    2011-04-01

    Thyristor controlled reactor with fixed capacitor (TCR/FC) compensators have the capability of compensating reactive power and improving power quality phenomena. Delay in the response of such compensators degrades their performance. In this paper, a new method based on adaptive filters (AF) is proposed to eliminate delay and speed up the response of the TCR compensator. The algorithm designed for the adaptive filters is based on the least mean square (LMS) algorithm. In this design, band-pass LC filters are used instead of fixed capacitors. To evaluate the filter, a TCR/FC compensator was used for the nonlinear and time-varying loads of electric arc furnaces (EAFs). These loads cause power quality phenomena in the supplying system, such as voltage fluctuation and flicker, odd and even harmonics, and unbalance in voltage and current. The design was implemented in a realistic system model of a steel complex. The simulation results show that applying the proposed control in the TCR/FC compensator efficiently eliminates delay in the response and improves the performance of the compensator in the power system.
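
    The LMS update at the heart of such an adaptive filter is only a few lines. The sketch below identifies a known FIR system from input/output data; the TCR/FC plant, the band-pass LC filters, and the arc-furnace load are not modelled here, and all constants are illustrative.

```python
import random

# LMS system identification sketch: adapt FIR weights so the filter output
# tracks a desired signal, the update rule underlying the paper's
# delay-elimination scheme (the TCR/FC compensator itself is not modelled).
def lms_identify(x, d, n_taps=4, mu=0.05):
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]               # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, window))        # filter output
        e = d[n] - y                                         # error signal
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]  # LMS weight update
    return w

# identify a known 4-tap system from noiseless input/output data
random.seed(1)
h = [0.5, -0.2, 0.1, 0.05]
x = [random.uniform(-1, 1) for _ in range(3000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n >= k) for n in range(len(x))]
w = lms_identify(x, d)  # w should approach h
```

    The step size mu trades convergence speed against stability; here mu is well below the stability bound for unit-power input, so the weights converge to the true taps.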

  16. Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the storage tank to the external tank is one of the most important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components: the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing excess fuel. Some of the parameters most useful for design purposes are the predicted pre-chill time, loading time, amount of fuel lost, maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is also very tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of numerical modeling towards the design of such a system. The students first have to become familiar with and understand the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally effective (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.
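
    As an example of the kind of first estimate students can start from, a lumped-capacitance wall model gives a closed-form pre-chill time. All property values below are illustrative, not taken from the actual system, and the model ignores the phase change and conjugate heat transfer the full simulation must capture.

```python
import math

# Back-of-the-envelope pre-chill estimate from a lumped-capacitance wall model;
# the numbers below are illustrative only.
def prechill_time(m, c, h, A, T0, T_fluid, T_target):
    """Time for a pipe wall (mass m [kg], specific heat c [J/kg-K]) cooled by a
    cryogen at T_fluid [K] with film coefficient h [W/m^2-K] over area A [m^2]
    to reach T_target [K], starting from T0 [K]."""
    tau = m * c / (h * A)                      # thermal time constant [s]
    return tau * math.log((T0 - T_fluid) / (T_target - T_fluid))

# e.g. 50 kg of steel pipe (c = 500 J/kg-K) chilled from 300 K toward 90 K LOX
t = prechill_time(m=50, c=500, h=200, A=2.0, T0=300, T_fluid=90, T_target=120)
```

    With these numbers the time constant is 62.5 s and the estimated pre-chill time is about two minutes; parametric studies then vary h, A, and the target temperature.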

  17. Optimization of the genetic operators and algorithm parameters for the design of a multilayer anti-reflection coating using the genetic algorithm

    NASA Astrophysics Data System (ADS)

    Patel, Sanjaykumar J.; Kheraj, Vipul

    2015-07-01

    This paper describes a systematic investigation on the use of the genetic algorithm (GA) to accomplish ultra-low reflective multilayer coating designs for optoelectronic device applications. The algorithm is implemented using LabVIEW as a programming tool. The effects of the genetic operators, such as the type of crossover and mutation, as well as algorithm parameters, such as population size and range of search space, on the convergence of the design solution were studied. Finally, the optimal design is obtained in terms of the thickness of each layer for the multilayer AR coating using optimized genetic operators and algorithm parameters. The program is successfully tested to design an AR coating in the NIR wavelength range achieving average reflectivity (R) below 10⁻³ over a spectral bandwidth of 200 nm with different combinations of coating materials in the stack. The random-point crossover operator is found to exhibit a better convergence rate than single-point and double-point crossover. Periodically re-initializing the thickness value of a randomly selected layer from the stack effectively prevents the solution from becoming trapped in local minima and improves the convergence probability.
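
    The merit function such a GA evaluates for each candidate stack is typically a characteristic-matrix reflectivity calculation. A minimal normal-incidence version is sketched below (my own sketch, not the paper's LabVIEW code); the single quarter-wave layer at the end is a standard textbook check.

```python
import cmath, math

# Normal-incidence characteristic-matrix calculation of multilayer reflectance,
# the kind of merit function a GA minimizes when choosing layer thicknesses.
def reflectance(n0, ns, layers, wavelength):
    """n0/ns: incident/substrate indices; layers = [(index, thickness), ...]."""
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        delta = 2 * math.pi * n * d / wavelength      # phase thickness
        m = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0]*m[0][0] + M[0][1]*m[1][0], M[0][0]*m[0][1] + M[0][1]*m[1][1]],
             [M[1][0]*m[0][0] + M[1][1]*m[1][0], M[1][0]*m[0][1] + M[1][1]*m[1][1]]]
    B = M[0][0] + M[0][1] * ns
    C = M[1][0] + M[1][1] * ns
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# single quarter-wave MgF2 layer on glass, evaluated at the design wavelength
R = reflectance(1.0, 1.52, [(1.38, 550 / (4 * 1.38))], 550)
```

    A GA wraps this with a spectral average over the design bandwidth and evolves the thickness vector; the quarter-wave case above reproduces the closed-form result ((n0·ns − n1²)/(n0·ns + n1²))².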

  18. Reliability Optimization Design for Contact Springs of AC Contactors Based on Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen

    The paper illustrates the procedure of reliability optimization modeling for contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. A method that changes crossover and mutation rates at different stages of the AGA can effectively avoid premature convergence, and experimental tests were performed after optimization. The experimental results show that the mass of each optimized spring is reduced by 16.2%, while the reliability increases from 94.5% to 99.9%, verifying the correctness and feasibility of this reliability optimization design method.
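
    The time-varying operator rates that the abstract credits with avoiding premature convergence can be sketched as fitness-dependent probabilities, in the spirit of classic adaptive GAs. The rule and the constants k1..k4 below are illustrative, not the paper's exact scheme.

```python
# Fitness-dependent operator rates in the spirit of classic adaptive GAs:
# above-average individuals receive low crossover/mutation probabilities so
# they are preserved, while below-average ones receive high rates to keep
# exploring. Constants k1..k4 are illustrative.
def adaptive_rates(f, f_avg, f_max, k1=0.9, k2=0.6, k3=0.1, k4=0.01):
    """Return (crossover_rate, mutation_rate) for an individual with fitness f."""
    if f_max == f_avg:                       # degenerate (converged) population
        return k1, k3
    if f >= f_avg:                           # above average: protect
        scale = (f_max - f) / (f_max - f_avg)
        return k2 * scale, k4 * scale
    return k1, k3                            # below average: disrupt

pc, pm = adaptive_rates(f=0.95, f_avg=0.8, f_max=1.0)  # a strong individual
```

    The best individual gets near-zero rates (elitism by another route), while weak individuals are recombined and mutated aggressively, which is what counteracts premature convergence.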

  19. Heat pipe design handbook, part 2. [digital computer code specifications

    NASA Technical Reports Server (NTRS)

    Skrabek, E. A.

    1972-01-01

    The utilization of a digital computer code for heat pipe analysis and design (HPAD) is described which calculates the steady state hydrodynamic heat transport capability of a heat pipe with a particular wick configuration, the working fluid being a function of wick cross-sectional area. Heat load, orientation, operating temperature, and heat pipe geometry are specified. Both one 'g' and zero 'g' environments are considered, and, at the user's option, the code will also perform a weight analysis and will calculate heat pipe temperature drops. The central porous slab, circumferential porous wick, arterial wick, annular wick, and axial rectangular grooves are the wick configurations which HPAD has the capability of analyzing. For Vol. 1, see N74-22569.

  20. An ultrasound-guided fluorescence tomography system: design and specification

    NASA Astrophysics Data System (ADS)

    D'Souza, Alisha V.; Flynn, Brendan P.; Kanick, Stephen C.; Torosean, Sason; Davis, Scott C.; Maytin, Edward V.; Hasan, Tayyaba; Pogue, Brian W.

    2013-03-01

    An ultrasound-guided fluorescence molecular tomography system is under development for in vivo quantification of protoporphyrin IX (PpIX) during aminolevulinic acid photodynamic therapy (ALA-PDT) of basal cell carcinoma. The system is designed to combine fiber-based spectral sampling of PpIX fluorescence emission with co-registered ultrasound images to quantify local fluorophore concentration. A single white light source is used to provide an estimate of the bulk optical properties of tissue. Optical data are obtained by sequential illumination of a 633 nm laser source at 4 linear locations with parallel detection at 5 locations interspersed between the sources. Tissue regions from segmented ultrasound images, optical boundary data, white-light-informed optical properties, and diffusion theory are used to estimate the fluorophore concentration in these regions. Our system and methods allow interrogation of both superficial and deep tissue locations up to PpIX concentrations of 0.025 µg/ml.

  1. Robust nonlinear dynamic inversion flight control design using structured singular value synthesis based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ying, Sibin; Ai, Jianliang; Luo, Changhang; Wang, Peng

    2006-11-01

    Nonlinear dynamic inversion (NDI) is a technique for control law design based on feedback linearization that achieves desired dynamic response characteristics. NDI requires an ideal and precise model; since there are always errors due to modeling inaccuracy or actuator faults, a control law designed by NDI alone has limited robustness. By combining NDI with the structured singular value (μ) synthesis method, the system's robustness can be improved notably. However, a controller designed with μ synthesis has high dimension, which must be reduced for computation. This paper presents a new method for robust flight control design that uses structured singular value μ synthesis based on a genetic algorithm. A controller designed with this method has markedly lower dimension than one obtained with the normal μ synthesis method, so it is easier to apply. The presented method is applied to the robust controller design of a supermaneuverable fighter. The simulation results show that the dynamic inversion control law achieves a high level of performance in post-stall maneuver conditions, and the whole control system has excellent robustness and disturbance rejection.

  2. The design of ROM-type holographic memory with iterative Fourier transform algorithm

    NASA Astrophysics Data System (ADS)

    Akamatsu, Hideki; Yamada, Kai; Unno, Noriyuki; Yoshida, Shuhei; Taniguchi, Jun; Yamamoto, Manabu

    2013-03-01

    Research and development of holographic data storage (HDS) is advancing as one of the next-generation high-speed, high-capacity storage systems. Recently, along with the development of write-once systems using photopolymer media, large-capacity ROM-type HDS that can replace conventional optical discs has become important. In this study, we develop a ROM-type HDS using a diffractive optical element (DOE) and verify the effectiveness of our approach. To design the DOE, an iterative Fourier transform algorithm was adopted, and the DOE was fabricated with electron beam (EB) cutting and nanoimprint lithography. We optimize the phase distribution of the hologram by the iterative Fourier transform algorithm known as the Gerchberg-Saxton (GS) algorithm, combined with the angular spectrum method. In the fabrication process, the phase distribution of the hologram is implemented as a concavity-and-convexity structure by EB cutting and transcribed with nanoimprint lithography, the mold being formed with multiple-stage concavities and convexities. The purpose of the multiple-stage structure is to obtain high diffraction efficiency and signal-to-noise ratio (SNR). A fabricated trial-model DOE is evaluated by experiment.
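
    The GS iteration alternates between the hologram plane and the reconstruction plane, enforcing the known amplitude in each while keeping the computed phase. The pure-Python sketch below uses a naive 1D unitary DFT in place of the angular spectrum propagation used in the paper; the target pattern is invented for illustration.

```python
import cmath, math, random

def dft(x, sign=-1):
    """Unitary discrete Fourier transform (naive O(N^2); fine for a sketch)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N)
                for n in range(N)) / math.sqrt(N) for k in range(N)]

def gerchberg_saxton(target_amp, iters=100):
    """Phase-only hologram whose (1D) far field approximates target_amp."""
    N = len(target_amp)
    rng = random.Random(0)
    field = [cmath.exp(2j * math.pi * rng.random()) for _ in range(N)]
    for _ in range(iters):
        far = dft(field)
        # impose the target amplitude in the reconstruction plane, keep the phase
        far = [t * cmath.exp(1j * cmath.phase(f)) for t, f in zip(target_amp, far)]
        field = dft(far, sign=+1)
        # impose unit amplitude in the hologram plane (phase-only DOE)
        field = [cmath.exp(1j * cmath.phase(f)) for f in field]
    return field

# four bright far-field bins out of 16, scaled so the target energy matches the
# energy of a unit-amplitude hologram (4 * 2^2 = 16)
target = [2.0 if k in (3, 4, 5, 6) else 0.0 for k in range(16)]
hologram = gerchberg_saxton(target)
recon = [abs(f) for f in dft(hologram)]    # reconstructed amplitude
```

    The resulting phase profile is what gets quantized into the multiple-stage concavity-convexity structure; more phase levels in the mold mean a closer approximation to the continuous optimum.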

  3. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77 and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  4. Design of Predistorter with Efficient Updating Algorithm of Power Amplifier with Memory Effect

    NASA Astrophysics Data System (ADS)

    Oishi, Yasuyuki; Kimura, Shigekazu; Fukuda, Eisuke; Takano, Takeshi; Takago, Daisuke; Daido, Yoshimasa; Araki, Kiyomichi

    This paper describes a method to design a predistorter (PD) for a GaN-FET power amplifier (PA) using nonlinear parameters extracted from measured IMD, which has the asymmetrical peaks peculiar to a memory effect with a second-order lag. Computationally efficient equations have previously been reported by C. Rey et al. for a memory effect with a first-order lag; here, their equations are extended to be applicable to a memory effect with a second-order lag. The extension provides a recursive algorithm for the cancellation signals of the PD, each of which is updated using signals at only two sampling points. The algorithm is equivalent to a memory depth of two in computational efficiency. The numbers of multiplications and additions required to update all the cancellation signals are counted, confirming that the algorithm reduces the computational cost to less than half that of the memory polynomials in recent papers. A computer simulation has clarified that the PD improves the adjacent channel leakage power ratio (ACLR) of OFDM signals with several hundred subcarriers, corresponding to 4G mobile radio communications. A fifth-order PD has been confirmed to be effective up to power levels close to 1 dB compression. The improvement in error vector magnitude (EVM) by the PD is also simulated for OFDM signals whose subcarrier channels are modulated by 16 QAM.

  5. Overall plant design specification Modular High Temperature Gas-cooled Reactor. Revision 9

    SciTech Connect

    1990-05-01

    Revision 9 of the "Overall Plant Design Specification Modular High Temperature Gas-Cooled Reactor," DOE-HTGR-86004 (OPDS), has been completed and is hereby distributed for use by the HTGR Program team members. This revision of the OPDS reflects the changes in the MHTGR design requirements and configuration resulting from approved Design Change Proposals DCP BNI-003 and DCP BNI-004, involving the Nuclear Island Cooling and Spent Fuel Cooling Systems, respectively.

  6. Specification and preliminary design of an array processor

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Graham, M. L.

    1975-01-01

    The design of a computer suited to the class of problems typified by the general circulation of the atmosphere was investigated. A fundamental goal was that the resulting machine should have roughly 100 times the computing capability of an IBM 360/95 computer. A second requirement was that the machine should be programmable in a higher level language similar to FORTRAN. Moreover, the new machine would have to be compatible with the IBM 360/95 since the IBM machine would continue to be used for pre- and post-processing. A third constraint was that the cost of the new machine was to be significantly less than that of other extant machines of similar computing capability, such as the ILLIAC IV and CDC STAR. A final constraint was that it should be feasible to fabricate a complete system and put it in operation by early 1978. Although these objectives were generally met, considerable work remains to be done on the routing system.

  7. Psychosocial Risks Generated By Assets Specific Design Software

    NASA Astrophysics Data System (ADS)

    Remus, Furtună; Angela, Domnariu; Petru, Lazăr

    2015-07-01

    The human activity associated with an occupation results from the interaction between psycho-biological, socio-cultural and organizational-occupational factors. Technological development, automation and computerization, which are found in all branches of activity, the speed at which things develop, and the complexity they reach require fewer and fewer physical aptitudes and more cognitive qualifications. The person included in the work process must in most cases adapt to the organizational-occupational situations specific to the demands of the job. The role of the programmer is essential in the development of commissioned software, since truly brilliant ideas can only come from well-rested minds concentrated on their tasks. The actual requirements of these jobs, besides a large number of benefits and opportunities, also create a series of psycho-social risks, which can increase the level of stress during work activity, especially for those who work under pressure.

  8. Liquid Engine Design: Effect of Chamber Dimensions on Specific Impulse

    NASA Technical Reports Server (NTRS)

    Hoggard, Lindsay; Leahy, Joe

    2009-01-01

    Which assumption of combustion chemistry, frozen or equilibrium, should be used in the prediction of liquid rocket engine performance, and can a correlation be developed for this? A literature search using the LaSSe tool, an online repository of old rocket data and reports, was completed. Test results of NTO/Aerozine-50 and LOX/LH2 subscale and full-scale injector and combustion chamber tests were found and studied for this task. The NASA code Chemical Equilibrium with Applications (CEA) was used to predict engine performance under both chemistry assumptions, defined here as follows: frozen, in which the composition remains frozen during expansion through the nozzle; and equilibrium, in which instantaneous chemical equilibrium holds during nozzle expansion. Chamber parameters were varied to understand which dimensions drive chamber C* and Isp. Contraction ratio (CR) is the ratio of the chamber cross-sectional area to the nozzle throat area; L is the length of the chamber; and characteristic chamber length, L*, is the length the chamber would have if it were a straight tube with no converging nozzle. Goal: develop a qualitative and quantitative correlation for the performance parameters, specific impulse (Isp) and characteristic velocity (C*), as a function of one or more chamber dimensions: contraction ratio (CR), chamber length (L), and/or characteristic chamber length (L*); and determine whether chamber dimensions can be correlated to frozen or equilibrium chemistry.

  9. Application of a Modified Garbage Code Algorithm to Estimate Cause-Specific Mortality and Years of Life Lost in Korea

    PubMed Central

    2016-01-01

    Years of life lost (YLLs) are estimated based on mortality and cause of death (CoD); therefore, it is necessary to calculate CoD accurately to estimate the burden of disease. The garbage code algorithm was developed by the Global Burden of Disease (GBD) Study to redistribute inaccurate CoD and enhance the validity of CoD estimation. This study aimed to estimate cause-specific mortality rates and YLLs in Korea by applying a modified garbage code algorithm. CoD data for 2010–2012 were used to calculate the number of deaths. The garbage code algorithm was then applied to calculate target causes (i.e., valid CoD) and adjusted CoD using the garbage code redistribution. The results showed that garbage code deaths accounted for approximately 25% of all CoD during 2010–2012. In 2012, lung cancer contributed the most to cause-specific death according to Statistics Korea. However, when CoD was adjusted using the garbage code redistribution, ischemic heart disease was the most common CoD. Furthermore, before garbage code redistribution, self-harm contributed the most YLLs, followed by lung cancer and liver cancer; after application of the garbage code redistribution, self-harm remained the leading cause of YLLs, followed by ischemic heart disease and lung cancer. Our results show that garbage code deaths accounted for a substantial amount of mortality and YLLs. These findings may enhance our knowledge of the burden of disease and help prioritize intervention settings by changing the relative importance of each burden of disease. PMID:27775249
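
    The core idea of redistribution is simple to illustrate. The toy function below reallocates garbage-coded deaths proportionally to the valid causes; the actual modified algorithm uses target-cause-specific redistribution fractions, and the numbers here are invented.

```python
# Proportional redistribution of "garbage code" deaths to valid target causes,
# the simplest member of the family of redistribution rules used in GBD-style
# cause-of-death correction (the paper's modified algorithm is more elaborate).
def redistribute(valid_deaths, garbage_total):
    """Allocate garbage_total deaths to causes in proportion to their counts."""
    total = sum(valid_deaths.values())
    return {cause: n + garbage_total * n / total
            for cause, n in valid_deaths.items()}

# invented counts: 200 garbage-coded deaths on top of 800 valid ones
deaths = {"ischemic heart disease": 300, "lung cancer": 280, "self-harm": 220}
adjusted = redistribute(deaths, garbage_total=200)
```

    Because the reallocation preserves the total death count while shifting its composition, it can reorder the leading causes of death and of YLLs, which is exactly the effect the study reports.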

  10. Generalized computer algorithms for enthalpy, entropy and specific heat of superheated vapors

    NASA Astrophysics Data System (ADS)

    Cowden, Michael W.; Scaringe, Robert P.; Gebre-Amlak, Yonas D.

    This paper presents an innovative technique for the development of enthalpy, entropy, and specific heat correlations in the superheated vapor region. The method results in a prediction error of less than 5 percent and requires the storage of 39 constants for each fluid. These correlations are obtained by using the Beattie-Bridgeman equation of state and a least-squares regression for the coefficients involved.
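    The coefficient-fitting step can be illustrated with ordinary least squares in its simplest closed form. A toy sketch fitting y ≈ a + b·x; the paper regresses Beattie-Bridgeman-derived property data onto a much larger 39-constant form, so this is only the core mechanism:

    ```python
    def fit_linear(xs, ys):
        """Ordinary least-squares fit of y ≈ a + b*x via the normal equations."""
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        return a, b
    ```

    On data that is exactly linear (e.g. a specific heat sampled as cp = 2.0 + 0.01*T over a few temperatures), the fit recovers the coefficients exactly; on real superheated-vapor data the residual gives the prediction error quoted above.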

  11. Application of Genetic Algorithm to the Design Optimization of Complex Energy Saving Glass Coating Structure

    NASA Astrophysics Data System (ADS)

    Johar, F. M.; Azmin, F. A.; Shibghatullah, A. S.; Suaidi, M. K.; Ahmad, B. H.; Abd Aziz, M. Z. A.; Salleh, S. N.; Shukor, M. Md

    2014-04-01

    Attenuation of GSM, GPS and personal communication signals by regular shapes of energy saving glass coating leads to poor communication inside buildings, since the transmission through the coating is very low. A new type of band-pass frequency selective surface (FSS) for energy saving glass applications is presented in this paper for one unit cell. A numerical periodic Method of Moments approach from a previous study has been applied to determine a new optimum design of the one-unit-cell energy saving glass coating structure. An optimization technique based on the genetic algorithm (GA) is used to obtain an improvement in return loss and transmission signal. The unit cell of the FSS is designed and simulated using the CST Microwave Studio software at the industrial, scientific and medical (ISM) bands. A unique, irregular shape of energy saving glass coating structure is obtained with lower return loss and an improved transmission coefficient.

  12. Application of genetic algorithms to the optimization design of electron optical system

    NASA Astrophysics Data System (ADS)

    Gu, Changxin; Wu, M. Q.; Shan, Liying; Lin, G.

    2001-12-01

    Traditional optimization design methods, such as the Simplex and Powell methods, can determine the final optimum structure and electrical parameters of an electron optical system from given electron optical properties, but they may become trapped in local optima during the search process. The genetic algorithm (GA) is a direct search optimization method based on the principles of natural selection and survival of the fittest in natural evolution. Through an iterative process of reproduction, crossover, and mutation, GAs can search for the global optimum. We applied GAs to optimize an electron emission system and an extended field lens (EFL), respectively. The optimal structures and corresponding electrical parameters, with a criterion of minimum objective function value - crossover radius for the electron emission system and spherical aberration coefficient for the EFL - have been found and are presented in this paper. The GA, as a direct and adaptive search technique, has significant advantages in the optimization design of electron optical systems.
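    The reproduction-crossover-mutation loop described above can be sketched generically. A minimal real-coded GA, illustrative only: the tournament size, mutation scale, and the sphere test function in the usage are assumptions, not the paper's electron-optical objective:

    ```python
    import random

    random.seed(1)

    def ga_minimize(f, bounds, pop_size=30, gens=60, pm=0.2):
        """Minimal real-coded GA: tournament selection, arithmetic
        crossover, Gaussian mutation, with elitist tracking of the best."""
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        best = min(pop, key=f)
        for _ in range(gens):
            new_pop = []
            for _ in range(pop_size):
                # tournament selection of two parents (tournament size 3)
                p1 = min(random.sample(pop, 3), key=f)
                p2 = min(random.sample(pop, 3), key=f)
                w = random.random()  # arithmetic crossover
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
                for i, (lo, hi) in enumerate(bounds):  # Gaussian mutation
                    if random.random() < pm:
                        child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
                new_pop.append(child)
            pop = new_pop
            best = min(pop + [best], key=f)
        return best
    ```

    On a 2-D sphere function, `ga_minimize(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 2)` converges close to the global minimum at the origin, illustrating the global-search behavior contrasted with Simplex/Powell above.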

  13. An infrared achromatic quarter-wave plate designed based on simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Huang, Zhanhua; Yang, Huaidong

    2017-03-01

    Quarter-wave plates are primarily used to change the polarization state of light. Their retardation usually varies with the wavelength of the incident light. In this paper, the design and characteristics of an achromatic quarter-wave plate formed by a cascaded system of birefringent plates are studied. For the analysis of the combination, the Jones matrix method is used to derive general expressions for the equivalent retardation and the equivalent azimuth. The infrared achromatic quarter-wave plate is designed using the simulated annealing (SA) algorithm. The maximum retardation variation and the maximum azimuth variation of this achromatic waveplate are only about 1.8° and 0.5°, respectively, over the entire wavelength range of 1250-1650 nm. This waveplate can convert linearly polarized light into circularly polarized light with a degree of linear polarization (DOLP) of less than 3.2% over that wide wavelength range.
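    The cascaded-plate analysis can be illustrated with Jones matrices. A small sketch computing the equivalent retardation of a stack from the trace of its product matrix (which is real for the unimodular unitary matrices that cascaded retarders produce); the paper's general expressions also include the equivalent azimuth, omitted here:

    ```python
    import cmath
    import math

    def waveplate(delta, theta):
        """2x2 Jones matrix of a birefringent plate with retardation delta
        and fast-axis azimuth theta: R(-theta) @ diag(e1, e2) @ R(theta)."""
        c, s = math.cos(theta), math.sin(theta)
        e1, e2 = cmath.exp(-1j * delta / 2), cmath.exp(1j * delta / 2)
        return [[c * c * e1 + s * s * e2, c * s * (e1 - e2)],
                [c * s * (e1 - e2), s * s * e1 + c * c * e2]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def equivalent_retardation(plates):
        """Equivalent retardation of a cascade of (delta, theta) plates,
        recovered from the trace: tr = 2*cos(delta_eq / 2)."""
        m = [[1, 0], [0, 1]]
        for delta, theta in plates:
            m = matmul(waveplate(delta, theta), m)
        tr = m[0][0] + m[1][1]
        return 2 * math.acos(max(-1.0, min(1.0, tr.real / 2)))
    ```

    A single quarter-wave plate returns pi/2 regardless of its azimuth, and two aligned quarter-wave plates combine into a half-wave plate (retardation pi), as expected. An SA loop for the achromatic design would evaluate this retardation across the 1250-1650 nm band with wavelength-dependent delta values.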

  14. Thermal Control System Design for 50kg-Class Micro-Satellite by Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Tsuda, Kenta; Okamoto, Atsushi; Chiba, Masakatsu; Okubo, Hiroshi; Azuma, Hisao; Sugiyama, Yoshihiko; Akita, Tsuyoshi; Nakamura, Yousuke; Imamura, Hiroaki

    A method is presented for designing the thermal control system of a 50 kg-class micro-satellite using a genetic algorithm. Representing the thermal control system as a heat transfer model, i.e. a thermal network model, the problem is treated as an optimization problem: finding suitable combinations of thermal control elements subject to an admissibility function that keeps the controlled temperature within a selected bandwidth. The admissibility function used herein consists of two parameters: one is the slope of the temperature variation and the other is the amplitude of the temperature variation during the orbital motion of the satellite. To demonstrate its validity, the proposed method is applied to two examples of thermal control system design for a 50 kg-class micro-satellite, tentatively called “SOHLA-1”, under development.

  15. Development of a New De Novo Design Algorithm for Exploring Chemical Space.

    PubMed

    Mishima, Kazuaki; Kaneko, Hiromasa; Funatsu, Kimito

    2014-12-01

    In the first stage of development of new drugs, various lead compounds with high activity are required. To design such compounds, we focus on chemical space defined by structural descriptors. New compounds close to areas where highly active compounds exist are expected to show a similar degree of activity. We have developed a new de novo design system to search a target area in chemical space. First, highly active compounds are manually selected as initial seeds. Then, the seeds are entered into our system, and structures slightly different from the seeds are generated and pooled. Next, new seeds are selected from the structure pool based on their distance from the target coordinates on the map. To test the algorithm, we used two datasets of ligand binding affinity and showed that the proposed generator could produce diverse virtual compounds with high activity in docking simulations.

  16. Earth Observatory Satellite system definition study. Report no. 5: System design and specifications. Part 1: Observatory system element specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The performance, design, and quality assurance requirements for the Earth Observatory Satellite (EOS) Observatory and Ground System program elements required to perform the Land Resources Management (LRM) A-type mission are presented. The requirements for the Observatory element with the exception of the instruments specifications are contained in the first part.

  17. Multi-criteria optimal pole assignment robust controller design for uncertainty systems using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Sarjaš, Andrej; Chowdhury, Amor; Svečko, Rajko

    2016-09-01

    This paper presents the synthesis of an optimal robust controller design using the polynomial pole placement technique and a multi-criteria optimisation procedure via an evolutionary computation algorithm - differential evolution. The main idea of the design is to provide a reliable fixed-order robust controller structure and efficient closed-loop performance with a preselected nominal characteristic polynomial. The multi-criteria objective functions have quasi-convex properties that significantly improve convergence and the regularity of the optimal/sub-optimal solution. The fundamental aim of the proposed design is to optimise those quasi-convex functions with fixed closed-loop characteristic polynomials, whose properties are unrelated and hard to present within formal algebraic frameworks. The objective functions are derived from different closed-loop criteria, such as robustness with the H∞ metric, time performance indexes, controller structure, stability properties, etc. Finally, the design results from the example verify the efficiency of the controller design and also indicate broader possibilities for different optimisation criteria and control structures.
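    The differential evolution step used in the synthesis can be sketched in its textbook DE/rand/1/bin form. All control parameters below (population size, F, CR) and the toy quadratic objective in the usage are assumptions for illustration, not the paper's controller-design criteria:

    ```python
    import random

    random.seed(7)

    def differential_evolution(f, bounds, np_=20, gens=80, F=0.7, CR=0.9):
        """Textbook DE/rand/1/bin: for each target vector, build a trial
        from three distinct others, then keep whichever is better."""
        dim = len(bounds)
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
        for _ in range(gens):
            for i in range(np_):
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                jrand = random.randrange(dim)  # force at least one mutated gene
                trial = []
                for j, (lo, hi) in enumerate(bounds):
                    if random.random() < CR or j == jrand:
                        v = a[j] + F * (b[j] - c[j])   # differential mutation
                        trial.append(min(hi, max(lo, v)))
                    else:
                        trial.append(pop[i][j])
                if f(trial) <= f(pop[i]):              # greedy selection
                    pop[i] = trial
        return min(pop, key=f)
    ```

    On quasi-convex objectives of the kind described above, DE's greedy one-to-one selection gives the regular convergence the paper relies on; the toy run `differential_evolution(lambda v: sum((x - 1.0) ** 2 for x in v), [(-5.0, 5.0)] * 2)` lands near (1, 1).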

  18. ESPRESSO front end guiding algorithms: from design phase to implementation and validation toward the commissioning

    NASA Astrophysics Data System (ADS)

    Landoni, M.; Riva, M.; Pepe, F.; Aliverti, M.; Cabral, A.; Calderone, G.; Cirami, R.; Cristiani, S.; Di Marcantonio, P.; Genoni, M.; Mégevand, D.; Moschetti, M.; Oggioni, L.; Pariani, G.

    2016-08-01

    In this paper we review the ESPRESSO guiding algorithm for the Front End subsystem. ESPRESSO, the Echelle Spectrograph for Rocky Exoplanets and Stable Spectroscopic Observations, will be installed on ESO's Very Large Telescope (VLT). The Front End Unit (FEU) is the ESPRESSO subsystem that collects the light coming from the Coudé trains of all four Telescope Units (UTs), provides field and pupil stabilization better than 0.05'' via piezoelectric tip-tilt devices, and injects the beams into the spectrograph fibers. The field and pupil stabilization is obtained through a re-imaging system that collects the halo of the light outside the injection fiber and the image of the telescope pupil. In particular, we focus on the software design of the system, from the class diagram to the actual implementation. A review of the theoretical mathematical background required to understand the final design is also reported. We show the performance of the algorithm on the actual Front End using a telescope simulator, exploring various scientific requirements.

  19. The use of machine learning algorithms to design a generalized simplified denitrification model

    NASA Astrophysics Data System (ADS)

    Oehler, F.; Rutherford, J. C.; Coco, G.

    2010-04-01

    We designed generalized simplified models using machine learning algorithms (ML) to assess denitrification at the catchment scale. In particular, we designed an artificial neural network (ANN) to simulate total nitrogen emissions from the denitrification process. Boosted regression trees (BRT, another ML algorithm) were also used to analyse the relationships and the relative influences of different input variables on total denitrification. To calibrate the ANN and BRT models, we used a large database obtained by collating datasets from the literature. We developed a simple methodology to give confidence intervals for the calibration and validation process. Both ML algorithms clearly outperformed a commonly used simplified model of nitrogen emissions, NEMIS. NEMIS is based on denitrification potential, temperature, soil water content and nitrate concentration. The ML models used soil organic matter % in place of a denitrification potential and pH as a fifth input variable. The BRT analysis reaffirms the importance of temperature, soil water content and nitrate concentration. The generality of the ANN model may also be improved if pH is used to differentiate between soil types. Further improvements in model performance can be achieved by lessening dataset effects.

  20. The use of machine learning algorithms to design a generalized simplified denitrification model

    NASA Astrophysics Data System (ADS)

    Oehler, F.; Rutherford, J. C.; Coco, G.

    2010-10-01

    We propose to use machine learning (ML) algorithms to design a simplified denitrification model. Boosted regression trees (BRT) and artificial neural networks (ANN) were used to analyse the relationships and the relative influences of different input variables on total denitrification, and an ANN was designed as a simplified model to simulate total nitrogen emissions from the denitrification process. To calibrate the BRT and ANN models and test this method, we used a database obtained by collating datasets from the literature. We used bootstrapping to compute confidence intervals for the calibration and validation process. Both ML algorithms clearly outperformed a commonly used simplified model of nitrogen emissions, NEMIS, which is based on denitrification potential, temperature, soil water content and nitrate concentration. The ML models used soil organic matter % in place of a denitrification potential and pH as a fifth input variable. The BRT analysis reaffirms the importance of temperature, soil water content and nitrate concentration. Generalization, although limited to the data space of the database used to build the ML models, could be improved if pH is used to differentiate between soil types. Further improvements in model performance and generalization could be achieved by adding more data.
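    Boosted regression trees fit each new tree to the residuals of the current ensemble. A minimal sketch with depth-one trees (stumps) on a single input, assuming squared-error loss and a shrinkage rate; real BRT implementations use multivariate trees and stochastic subsampling, and the data below is a toy function rather than denitrification measurements:

    ```python
    def fit_stump(xs, ys):
        """Best single-split regression stump on 1-D inputs (squared error)."""
        best = None
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        for k in range(1, len(xs)):
            thr = (xs[order[k - 1]] + xs[order[k]]) / 2
            left = [ys[i] for i in order[:k]]
            right = [ys[i] for i in order[k:]]
            lmean, rmean = sum(left) / len(left), sum(right) / len(right)
            err = (sum((y - lmean) ** 2 for y in left)
                   + sum((y - rmean) ** 2 for y in right))
            if best is None or err < best[0]:
                best = (err, thr, lmean, rmean)
        _, thr, lmean, rmean = best
        return lambda x: lmean if x < thr else rmean

    def boost(xs, ys, rounds=50, lr=0.3):
        """Gradient boosting on squared error: each round fits a stump
        to the current residuals, shrunk by the learning rate."""
        stumps, resid = [], list(ys)
        for _ in range(rounds):
            s = fit_stump(xs, resid)
            stumps.append(s)
            resid = [r - lr * s(x) for x, r in zip(xs, resid)]
        return lambda x: sum(lr * s(x) for s in stumps)
    ```

    The shrinkage rate trades off fit speed against overfitting, which is why BRT packages expose it alongside tree depth and the number of boosting rounds.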

  1. Performance of the Lidar Design and Data Algorithms for the GLAS Global Cloud and Aerosol Measurements

    NASA Technical Reports Server (NTRS)

    Spinhirne, James D.; Palm, Stephen P.; Hlavka, Dennis L.; Hart, William D.

    2007-01-01

    The Geoscience Laser Altimeter System (GLAS) launched in early 2003 is the first polar orbiting satellite lidar. The instrument design includes high performance observations of the distribution and optical scattering cross sections of atmospheric clouds and aerosol. The backscatter lidar operates at two wavelengths, 532 and 1064 nm. For the atmospheric cloud and aerosol measurements, the 532 nm channel was designed for ultra high efficiency with solid state photon counting detectors and etalon filtering. Data processing algorithms were developed to calibrate and normalize the signals and produce global scale data products of the height distribution of cloud and aerosol layers and their optical depths and particulate scattering cross sections up to the limit of optical attenuation. The paper will concentrate on the effectiveness and limitations of the lidar channel design and data product algorithms. Both atmospheric receiver channels meet and exceed their design goals. Geiger Mode Avalanche Photodiode modules are used for the 532 nm signal. The operational experience is that some signal artifacts and non-linearity require correction in data processing. As with all photon counting detectors, a pulse-pile-up calibration is an important aspect of the measurement. Additional signal corrections were found to be necessary relating to correction of a saturation signal-run-on effect and also for daytime data, a small range dependent variation in the responsivity. It was possible to correct for these signal errors in data processing and achieve the requirement to accurately profile aerosol and cloud cross section down to 10^-7 m^-1 sr^-1. The analysis procedure employs a precise calibration against molecular scattering in the mid-stratosphere. The 1064 nm channel detection employs a high-speed analog APD for surface and atmospheric measurements where the detection sensitivity is limited by detector noise and is over an order of magnitude less than at 532 nm.
A unique feature of
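    The pulse-pile-up calibration mentioned above is, for a nonparalyzable photon-counting detector, commonly modeled with the dead-time relation n_true = n_meas / (1 - n_meas*tau). A minimal sketch of that standard correction; the GLAS processing uses its own calibrated correction, and the dead-time value in the test below is an assumed illustrative number:

    ```python
    def pileup_correct(counts, tau, bin_time):
        """Nonparalyzable dead-time (pulse pile-up) correction for a
        photon-counting detector: recover true counts from measured counts.

        counts   : measured photon counts accumulated in one range bin
        tau      : detector dead time (s)
        bin_time : accumulation time of the bin (s)
        """
        measured_rate = counts / bin_time
        true_rate = measured_rate / (1.0 - measured_rate * tau)
        return true_rate * bin_time
    ```

    The correction grows rapidly as the measured rate approaches 1/tau, which is why accurate pile-up calibration matters most for strong cloud returns.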

  2. A real-coded genetic algorithm applied to optimum design of a low solidity vaned diffuser for diffuser pump

    NASA Astrophysics Data System (ADS)

    Li, Jun; Tsukamoto, Hiroshi

    2001-10-01

    A numerical procedure for the hydrodynamic redesign of a conventional vaned diffuser into a low solidity vaned diffuser, by means of a real-coded genetic algorithm with Boltzmann, tournament and roulette wheel selection, is presented. In the first part, an investigation of the relative efficiency of the different real-coded genetic algorithms is carried out on a typical mathematical test function. The real-coded genetic algorithm with Boltzmann selection shows the best optimization performance compared to tournament and roulette wheel selection. In the second part, an approach to redesigning the vaned diffuser profile is introduced. The goal of the optimum design is to find the low solidity vaned diffuser with the highest static pressure recovery coefficient. The result of the low solidity vaned diffuser optimum design confirms that the real-coded genetic algorithm with Boltzmann selection outperforms the other selection methods in efficiency and optimization performance. A comparison between the designed low solidity vaned diffuser and the original vaned diffuser shows that the diffuser pump with the redesigned low solidity vaned diffuser has higher static pressure recovery and improved overall hydrodynamic performance. In addition, the smaller outlet diameter of the designed vaned diffuser leads to a more compact diffuser pump compared to the original. The obtained results also demonstrate that the real-coded genetic algorithm with Boltzmann selection is a promising optimization algorithm for centrifugal pump design.
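    The Boltzmann selection scheme found to perform best weights each individual by exp(fitness/T), so selection pressure is tunable through the temperature T. A minimal sketch (the temperature and toy fitness values are illustrative assumptions, not the paper's diffuser objective):

    ```python
    import math
    import random

    random.seed(3)

    def boltzmann_select(population, fitnesses, T):
        """Pick an individual with probability proportional to
        exp(fitness / T). Lower T sharpens selection pressure toward
        the fittest individuals; higher T approaches uniform sampling."""
        fmax = max(fitnesses)  # subtract the max for numerical stability
        weights = [math.exp((f - fmax) / T) for f in fitnesses]
        r = random.random() * sum(weights)
        acc = 0.0
        for ind, w in zip(population, weights):
            acc += w
            if r <= acc:
                return ind
        return population[-1]
    ```

    With fitnesses [0, 1] and T = 0.2, the fitter individual is chosen about 99% of the time; raising T toward infinity makes the choice nearly uniform, which is the knob that distinguishes Boltzmann selection from tournament and roulette wheel schemes.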

  3. Software design specification. Part 2: Orbital Flight Test (OFT) detailed design specification. Volume 3: Applications. Book 2: System management

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The functions performed by the systems management (SM) application software are described along with the design employed to accomplish these functions. The operational sequences (OPS) control segments and the cyclic processes they control are defined. The SM specialist function control (SPEC) segments and the display controlled 'on-demand' processes that are invoked by either an OPS or SPEC control segment as a direct result of an item entry to a display are included. Each processing element in the SM application is described including an input/output table and a structured control flow diagram. The flow through the module and other information pertinent to that process and its interfaces to other processes are included.

  4. A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Chae, Han Gil

    Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. 
The Pareto dominance that is used in SPEA, however, is not enough to account for the
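    The Pareto dominance relation underlying SPEA can be stated compactly: for minimization, one solution dominates another if it is no worse in every objective and strictly better in at least one. A minimal sketch of the relation and of extracting the non-dominated front:

    ```python
    def dominates(a, b):
        """Pareto dominance for minimization: a dominates b if a is no
        worse in every objective and strictly better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(points):
        """Non-dominated subset of a list of objective vectors."""
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]
    ```

    SPEA builds on this relation by assigning each individual a strength derived from how many solutions it dominates, which is the refinement the passage above notes is still insufficient on its own.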

  5. SP-Designer: a user-friendly program for designing species-specific primer pairs from DNA sequence alignments.

    PubMed

    Villard, Pierre; Malausa, Thibaut

    2013-07-01

    SP-Designer is an open-source program providing a user-friendly tool for the design of specific PCR primer pairs from a DNA sequence alignment containing sequences from various taxa. SP-Designer selects PCR primer pairs for the amplification of DNA from a target species on the basis of several criteria: (i) primer specificity, as assessed by interspecific sequence polymorphism in the annealing regions, (ii) the biochemical characteristics of the primers and (iii) the intended PCR conditions. SP-Designer generates tables, detailing the primer pair and PCR characteristics, and a FASTA file locating the primer sequences in the original sequence alignment. SP-Designer is Windows-compatible and freely available from http://www2.sophia.inra.fr/urih/sophia_mart/sp_designer/info_sp_designer.php.
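    Criterion (ii), the biochemical characteristics of the primers, typically includes melting temperature and GC content. A hedged illustration using the Wallace rule, one common quick Tm estimate; SP-Designer's actual Tm model is not specified here, and nearest-neighbor methods are more accurate for long primers:

    ```python
    def wallace_tm(primer):
        """Wallace rule melting temperature estimate:
        Tm = 2*(A+T) + 4*(G+C), in degrees Celsius."""
        p = primer.upper()
        return 2 * (p.count('A') + p.count('T')) + 4 * (p.count('G') + p.count('C'))

    def gc_content(primer):
        """GC content of a primer as a percentage of its length."""
        p = primer.upper()
        return 100.0 * (p.count('G') + p.count('C')) / len(p)
    ```

    A design tool would screen candidate primers so that both members of a pair have similar Tm values and a GC content in a typical 40-60% window before checking specificity against the alignment.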

  6. 40 CFR 55.15 - Specific designation of corresponding onshore areas.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Specific designation of corresponding onshore areas. 55.15 Section 55.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) OUTER CONTINENTAL SHELF AIR REGULATIONS § 55.15 Specific designation...

  7. GSP: A web-based platform for designing genome-specific primers in polyploids

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The sequences among subgenomes in a polyploid species have high similarity. This makes it difficult to design genome-specific primers for sequence analysis. We present a web-based platform named GSP for designing genome-specific primers to distinguish subgenome sequences in the polyploid genome backgr...

  8. Optimization of Spherical Roller Bearing Design Using Artificial Bee Colony Algorithm and Grid Search Method

    NASA Astrophysics Data System (ADS)

    Tiwari, Rajiv; Waghole, Vikas

    2015-07-01

    Bearing standards impose restrictions on the internal geometry of spherical roller bearings. Geometrical and strength constraint conditions have been formulated for the optimization of bearing design. Long fatigue life is one of the most important criteria in optimum bearing design. Since life is directly proportional to the dynamic capacity, the objective function has been chosen as the maximization of dynamic capacity. The effects of speed and static loads acting on the bearing are also taken into account. Design variables for the bearing include five geometrical parameters: the roller diameter, the roller length, the bearing pitch diameter, the number of rollers, and the contact angle. A few design constraint parameters are also included in the optimization, the bounds of which are obtained by initial runs of the optimization. The optimization program is run for different values of these design constraint parameters, and a range of the parameters is obtained for which the objective function has a higher value. The artificial bee colony algorithm (ABCA) has been used to solve the constrained optimization problem, and the optimum design is compared with the one obtained from the grid search method (GSM), each operating independently. The ABCA and the GSM have finally been combined to reach the global optimum point. A constraint violation study has also been carried out to give priority to the constraints with the greatest possibility of violation. Optimized bearing designs show better performance parameters than those specified in bearing catalogs. A sensitivity analysis of bearing parameters has also been carried out to examine the effect of manufacturing tolerances on the objective function.
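    The grid search side of the method enumerates discretized design variables and keeps the feasible point with the best objective. A minimal sketch; the capacity proxy, the constraint, and the grids below are illustrative assumptions, not the paper's actual dynamic capacity formula or bearing-standard constraints:

    ```python
    import itertools

    def grid_search(objective, grids, feasible):
        """Exhaustive grid search over discretized design variables,
        keeping only points that satisfy the constraint predicate."""
        best, best_val = None, float('-inf')
        for point in itertools.product(*grids):
            if not feasible(point):
                continue
            v = objective(point)
            if v > best_val:
                best, best_val = point, v
        return best, best_val

    # Hypothetical proxy for dynamic capacity: C ~ Z**0.75 * D**1.07 * L**0.78
    # (illustrative exponents and variables, not the standard capacity formula)
    obj = lambda p: p[0] ** 0.75 * p[1] ** 1.07 * p[2] ** 0.78
    # toy geometric constraint standing in for "rollers must fit in the bearing"
    feas = lambda p: p[0] * p[1] <= 200.0
    grids = [range(8, 20),             # number of rollers Z
             [8.0, 10.0, 12.0, 14.0],  # roller diameter D (mm)
             [10.0, 12.0, 14.0]]       # roller length L (mm)
    best, val = grid_search(obj, grids, feas)
    ```

    A population method such as the ABCA explores the same space without exhaustive enumeration, which is why the paper combines the two: the grid bounds the region and the ABCA refines within it.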

  9. Brief Report: exploratory analysis of the ADOS revised algorithm: specificity and predictive value with Hispanic children referred for autism spectrum disorders.

    PubMed

    Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia

    2008-07-01

    This study compared Autism Diagnostic Observation Schedule (ADOS) algorithm scores of a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained by applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module 1. New algorithm scores on modules 2 and 3 remained consistent with the original algorithm scores. The Mann-Whitney U test was applied to compare revised algorithm scores and clinical levels of social impairment to determine whether significant differences were evident. Results of the Mann-Whitney U analyses were inconsistent and demonstrated less specificity for children with milder levels of social impairment. The revised algorithm demonstrated accuracy for the more severely autistic group.

  10. Genetic algorithm for the design of electro-mechanical sigma delta modulator MEMS sensors.

    PubMed

    Wilcock, Reuben; Kraft, Michael

    2011-01-01

    This paper describes a novel design methodology using non-linear models for complex closed loop electro-mechanical sigma-delta modulators (EMΣΔM) that is based on genetic algorithms and statistical variation analysis. The proposed methodology is capable of quickly and efficiently designing high performance, high order, closed loop, near-optimal systems that are robust to sensor fabrication tolerances and electronic component variation. The use of full non-linear system models allows significant higher order non-ideal effects to be taken into account, improving accuracy and confidence in the results. To demonstrate the effectiveness of the approach, two design examples are presented: a 5th order low-pass EMΣΔM for a MEMS accelerometer, and a 6th order band-pass EMΣΔM for the sense mode of a MEMS gyroscope. Each example was designed using the system in less than one day, with very little manual intervention. The strength of the approach is verified by SNR performances of 109.2 dB and 92.4 dB for the low-pass and band-pass systems respectively, coupled with excellent immunity to fabrication tolerances and parameter mismatch.

  11. Multidisciplinary design optimization of vehicle instrument panel based on multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Wu, Guangqiang

    2013-03-01

    Multidisciplinary design optimization (MDO) has gradually been adopted to balance the lightweight, noise, vibration and harshness (NVH), and safety performance of the instrument panel (IP) structure in automotive development. Nevertheless, the plastic constitutive relation of polypropylene (PP) under different strain rates has not been taken into consideration in current reliability-based and collaborative IP MDO design. In this paper, based on tensile tests under different strain rates, the constitutive relation of polypropylene is studied. Impact simulations for the head and knee bolster are carried out to meet the requirements of FMVSS 201 and FMVSS 208, respectively. NVH analysis is performed to obtain the natural frequencies and corresponding mode shapes, while crashworthiness analysis is employed to examine the crash behavior of the IP structure. With consideration of lightweight, NVH, and head and knee bolster impact performance, design of experiments (DOE), response surface models (RSM), and collaborative optimization (CO) are applied to realize the deterministic and reliability-based optimizations, respectively. Furthermore, based on a multi-objective genetic algorithm (MOGA), the optimal Pareto sets are computed to solve the multi-objective optimization (MOO) problem. The proposed research ensures the smoothness of the Pareto set, enhances the ability of engineers to make a comprehensive decision about multiple objectives and choose the optimal design, and improves the quality and efficiency of MDO.

  12. Preliminary design of the Carrisa Plains solar central receiver power plant. Volume II. Plant specifications

    SciTech Connect

    Price, R. E.

    1983-12-31

    The specifications and design criteria for all plant systems and subsystems used in developing the preliminary design of Carrisa Plains 30-MWe Solar Plant are contained in this volume. The specifications have been organized according to plant systems and levels. The levels are arranged in tiers. Starting at the top tier and proceeding down, the specification levels are the plant, system, subsystem, components, and fabrication. A tab number, listed in the index, has been assigned each document to facilitate document location.

  13. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic-algorithm-based and gradient-based topology optimization are presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were shown to be equally valid implementations of the process. This paper discusses both formulations at a high level.

  14. Mitigating Multipath Bias Using a Dual-Polarization Antenna: Theoretical Performance, Algorithm Design, and Simulation

    PubMed Central

    Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan

    2017-01-01

    It is well known that the multipath effect remains a dominant error source affecting the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error over the past decades. Recently, multipath mitigation using dual-polarization antennas has become a research hotspot because it provides another degree of freedom to distinguish the line-of-sight (LOS) signal from the LOS and multipath composite signal without greatly increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, and all of them report performance improvements over single-polarization methods. However, due to the unpredictability of multipath, multipath mitigation techniques based on dual-polarization are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna, which is a fundamental question for dual-polarization multipath mitigation (DPMM) and the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received signal cases. Based on this assessment we answer this fundamental question and identify the dual-polarization antenna's capability in mitigating short-delay multipath, the most challenging type of multipath for the majority of mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an

  15. Mitigating Multipath Bias Using a Dual-Polarization Antenna: Theoretical Performance, Algorithm Design, and Simulation.

    PubMed

    Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan

    2017-02-13

    It is well known that the multipath effect remains a dominant error source affecting the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers over the past decades to mitigate multipath error. Recently, multipath mitigation using dual-polarization antennas has become a research hotspot because it provides another degree of freedom for distinguishing the line-of-sight (LOS) signal from the LOS and multipath composite signal without extensively increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, and all of them report performance improvements over single-polarization methods. However, due to the unpredictability of multipath, techniques based on dual-polarization are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna, a fundamental question for dual-polarization multipath mitigation (DPMM) and the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received-signal cases. Based on this assessment we answer this fundamental question and identify the dual-polarization antenna's capability in mitigating short-delay multipath, the most challenging type of multipath for the majority of multipath mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show the superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an RHCP antenna.

  16. Design of measuring system for wire diameter based on sub-pixel edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yudong; Zhou, Wang

    2016-09-01

    The light projection method is often used in wire-diameter measuring systems because of its relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the accuracy, but increases the cost and manufacturing difficulty. In this paper, after a comparative analysis of several sub-pixel edge detection algorithms, a polynomial fitting method is applied to the data processing of a wire-diameter measuring system to improve the measuring accuracy and noise immunity. In the system structure, the detection optics use light projection in an orthogonal arrangement, which effectively reduces the error caused by line jitter during the measuring process. In the electrical part, an ARM Cortex-M4 microprocessor serves as the core of the circuit module; it drives the dual-channel linear CCD and also samples, processes and stores the CCD video signal. The ARM microprocessor can thus run the whole wire-diameter measuring system at high speed without any additional chips. The experimental results show that a sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limited pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the entire system.
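    As a concrete illustration of the polynomial-fitting idea, the sketch below locates a step edge at sub-pixel precision by fitting a parabola to the intensity gradient around its maximum. This is one common polynomial-fitting scheme, not necessarily the exact variant used in the paper; the synthetic profile and all parameters are illustrative.

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge at sub-pixel precision by fitting a parabola
    to the intensity gradient around its maximum."""
    g = np.abs(np.gradient(profile.astype(float)))
    i = int(np.argmax(g))
    if i == 0 or i == len(g) - 1:
        return float(i)  # maximum at the border: no parabola possible
    gm, g0, gp = g[i - 1], g[i], g[i + 1]
    denom = gm - 2.0 * g0 + gp
    if denom == 0.0:
        return float(i)
    # Vertex of the parabola through the three gradient samples.
    return i + 0.5 * (gm - gp) / denom

# Synthetic edge: a smooth step centred at x = 20.3 pixels.
x = np.arange(40)
profile = 1.0 / (1.0 + np.exp(-(x - 20.3) / 1.5))
edge = subpixel_edge(profile)
print(round(edge, 2))
```

The pixel-level estimate alone would return 20; the parabola vertex recovers the fractional part, which is the accuracy gain the paper exploits.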

  17. Hemodynamic Assessment of Compliance of Pre-Stressed Pulmonary Valve-Vasculature in Patient Specific Geometry Using an Inverse Algorithm

    NASA Astrophysics Data System (ADS)

    Hebbar, Ullhas; Paul, Anup; Banerjee, Rupak

    2016-11-01

    Image based modeling is finding increasing relevance in assisting the diagnosis of Pulmonary Valve-Vasculature Dysfunction (PVD) in congenital heart disease patients. This research presents compliant artery-blood interaction in a patient specific Pulmonary Artery (PA) model, an improvement over our previous numerical studies, which assumed rigid walled arteries. The impedance of the arteries and the energy transfer from the Right Ventricle (RV) to the PA are governed by compliance, which in turn is influenced by the level of pre-stress in the arteries. In order to evaluate the pre-stress, an inverse algorithm was developed using an in-house script written in MATLAB and Python, and implemented using the Finite Element Method (FEM). This analysis used a patient specific material model developed by our group, in conjunction with measured pressure (invasive) and velocity (non-invasive) values. The analysis was performed on an FEM solver, and preliminary results indicated that the Main PA (MPA) exhibited higher compliance as well as increased hysteresis over the cardiac cycle when compared with the Left PA (LPA). The computed compliance values for the MPA and LPA were 14% and 34% lower than the corresponding measured values. Further, the computed pressure drop and flow waveforms were in close agreement with the measured values. In conclusion, compliant artery-blood interaction models of patient specific geometries can play an important role in hemodynamics based diagnosis of PVD.

  18. SU-E-T-316: The Design of a Risk Index Method for 3D Patient Specific QA

    SciTech Connect

    Cho, W; Wu, H; Xing, L; Suh, T

    2014-06-01

    Purpose: To provide new guidance for the evaluation of 3D patient-specific QA, a structure-specific risk-index (RI) method was designed and implemented. Methods: A new algorithm was designed to assign a score of Pass, Fail, or Pass with Risk to every 3D voxel in each structure by extending a conventional Gamma Index (GI) algorithm, conveying the degree of risk of under-dose to the treatment target or over-dose to the organs at risk (OAR). Structure-specific distance to agreement (DTOA), dose difference and minimum checkable dose were applied in the GI algorithm, and additional parameters such as the dose gradient factor and the dose limits of structures were used in the RI method. A maximum passing rate (PR) and a minimum PR were defined and calculated for each structure with the RI method. 3D doses were acquired from a spine SBRT plan by simulating shifts of the beam iso-center and used to test the feasibility of the suggested method. Results: When the iso-center was shifted by 1 mm, 2 mm, and 3 mm, the PRs of the conventional GI method between shifted and non-shifted 3D doses were 99.9%, 97.4%, and 89.7% for the PTV; 99.8%, 84.8%, and 63.2% for the spinal cord; and 100%, 99.5%, and 91.7% for the right lung. The minimum PRs from the RI method were 98.9%, 96.9%, and 89.5% for the PTV; 96.1%, 79.3%, and 57.5% for the spinal cord; and 92.5%, 92.0%, and 84.4% for the right lung, respectively. The maximum PRs from the RI method were equal to or less than the PRs from the conventional GI evaluation. Conclusion: The designed 3D RI method applies a stricter acceptance level than the conventional GI method, especially for OARs. The RI method is expected to convey the degree of risk in the delivered doses, as well as the degree of agreement between calculated 3D doses and measured (or simulated) 3D doses.
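    The RI method extends the conventional Gamma Index, whose core computation can be sketched in one dimension as follows. The 3 mm / 3% criteria and the toy dose profiles are illustrative, and the structure-specific extensions described in the abstract are not reproduced here.

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, spacing_mm, dta_mm=3.0, dd_frac=0.03):
    """Conventional 1D gamma index: for every reference point, the minimum
    over evaluated points of sqrt((dose diff / DD)^2 + (distance / DTA)^2).
    A point passes when gamma <= 1."""
    ref_dose = np.asarray(ref_dose, float)
    eval_dose = np.asarray(eval_dose, float)
    dd = dd_frac * ref_dose.max()          # global dose-difference criterion
    x = np.arange(len(ref_dose)) * spacing_mm
    gammas = np.empty(len(ref_dose))
    for i in range(len(ref_dose)):
        dose_term = (eval_dose - ref_dose[i]) / dd
        dist_term = (x - x[i]) / dta_mm
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

ref = np.array([0.0, 10.0, 50.0, 100.0, 50.0, 10.0, 0.0])
shifted = np.roll(ref, 1)                  # simulate a 1 mm setup shift
g = gamma_index_1d(ref, shifted, spacing_mm=1.0)
passing_rate = 100.0 * np.mean(g <= 1.0)
print(passing_rate)
```

The RI method replaces the single pass/fail threshold with structure-specific criteria and a Pass-with-Risk band on top of this kernel.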

  19. Signal design using nonlinear oscillators and evolutionary algorithms: application to phase-locked loop disruption.

    PubMed

    Olson, C C; Nichols, J M; Michalowicz, J V; Bucholtz, F

    2011-06-01

    This work describes an approach for efficiently shaping the response characteristics of a fixed dynamical system by forcing with a designed input. We obtain improved inputs by using an evolutionary algorithm to search a space of possible waveforms generated by a set of nonlinear, ordinary differential equations (ODEs). Good solutions are those that result in a desired system response subject to some input efficiency constraint, such as signal power. In particular, we seek to find inputs that best disrupt a phase-locked loop (PLL). Three sets of nonlinear ODEs are investigated and found to have different disruption capabilities against a model PLL. These differences are explored and implications for their use as input signal models are discussed. The PLL was chosen here as an archetypal example, but the approach has broad applicability to any input/output system for which a desired input cannot be obtained analytically.

  20. Design of Pipeline Multiplier Based on Modified Booth's Algorithm and Wallace Tree

    NASA Astrophysics Data System (ADS)

    Yao, Aihong; Li, Ling; Sun, Mengzhe

    A design of a 32×32-bit pipelined multiplier is presented in this paper. The proposed multiplier is based on the modified Booth algorithm and a Wallace tree structure. To improve the throughput of the multiplier, pipelining is introduced into the Wallace tree. A carry-select adder is deployed to reduce the carry propagation delay of the final-level 64-bit adder. The multiplier is fully implemented in Verilog HDL and synthesized successfully with Quartus II. The experimental results show that the resource and power consumption are reduced to 2560 LEs and 120 mW, and the operating frequency is improved from 136.21 MHz to 165.07 MHz.
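    The recoding at the heart of such a design can be sketched in software: a radix-4 (modified Booth) multiplier examines the multiplier bits two at a time, with one overlap bit, and accumulates partial products scaled by digits in {-2, -1, 0, 1, 2}, halving the number of partial products fed to the Wallace tree. This Python model mirrors the arithmetic only, not the pipelined Verilog implementation.

```python
def booth_radix4_multiply(a, b, bits=32):
    """Software model of modified (radix-4) Booth multiplication: recode
    the multiplier into digits in {-2,-1,0,1,2}, two bits at a time, then
    accumulate shifted partial products. Inputs are signed two's-complement
    values of the given width."""
    mask = (1 << bits) - 1
    ub = b & mask                       # two's-complement bit pattern of b
    acc = 0
    prev = 0                            # the appended b_{-1} bit
    for i in range(0, bits, 2):
        chunk = (ub >> i) & 0b11
        triplet = (chunk << 1) | prev   # bits b_{i+1} b_i b_{i-1}
        digit = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
                 0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}[triplet]
        acc += (digit * a) << i         # partial product, shifted into place
        prev = (ub >> (i + 1)) & 1
    return acc

print(booth_radix4_multiply(-12345, 6789))  # → -83810205
```

Each loop iteration corresponds to one partial-product row in the hardware; the Wallace tree then sums those rows in carry-save form.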

  1. FPGA-based real-time phase measuring profilometry algorithm design and implementation

    NASA Astrophysics Data System (ADS)

    Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng

    2016-11-01

    Phase measuring profilometry (PMP) has been widely used in many fields, such as computer-aided verification (CAV) and flexible manufacturing systems (FMS). High-frame-rate (HFR) real-time vision-based feedback control will be a common demand in the near future. However, the instruction time delay caused by numerous repetitive operations on a computer greatly limits the efficiency of data processing. An FPGA has the advantages of a pipelined architecture and parallel execution, making it well suited to the PMP algorithm. In this paper, we design a fully pipelined hardware architecture for PMP. The functions of the hardware architecture include rectification, phase calculation, phase shifting, and stereo matching. The experiments verified the performance of this method, and the factors that may influence the computation accuracy were analyzed.
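    The per-pixel phase calculation that such a pipeline implements can be sketched with the standard N-step phase-shifting formula; the 4-step synthetic fringes below are illustrative and are not taken from the paper.

```python
import numpy as np

def wrapped_phase(frames):
    """Recover the wrapped phase from N equally phase-shifted fringe images
    I_n = A + B*cos(phi + 2*pi*n/N) using the standard N-step least-squares
    formula (the core per-pixel arithmetic an FPGA pipeline would implement)."""
    frames = np.asarray(frames, float)
    n = frames.shape[0]
    shifts = 2.0 * np.pi * np.arange(n) / n
    num = np.tensordot(frames, np.sin(shifts), axes=(0, 0))
    den = np.tensordot(frames, np.cos(shifts), axes=(0, 0))
    return np.arctan2(-num, den)

# Synthetic 4-step example with a known phase map.
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
frames = [5.0 + 2.0 * np.cos(phi_true + 2.0 * np.pi * k / 4) for k in range(4)]
phi = wrapped_phase(frames)
print(np.max(np.abs(phi - phi_true)))
```

In hardware, the arctangent is typically realized with a CORDIC unit so that one phase value can be produced per clock cycle.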

  2. Designing Daily Patrol Routes for Policing Based on Ant Colony Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, H.; Cheng, T.; Wise, S.

    2015-07-01

    In this paper, we address the problem of planning police patrol routes to regularly cover street segments of high crime density (hotspots) with limited police forces. A good patrolling strategy is required to minimise the average time lag between two consecutive visits to hotspots, as well as to coordinate multiple patrollers and impart unpredictability to patrol routes. Previous studies have designed different strategies for routing police patrols, but these strategies have difficulty generalising to real patrolling and meeting various requirements. In this research we develop a new police patrolling strategy based on a Bayesian method and the ant colony algorithm. In this strategy, a virtual marker (pheromone) records the visiting history of each crime hotspot, and patrollers continuously decide which hotspot to patrol next based on the pheromone level and other variables. Simulation results using real data testify to the effective, scalable, unpredictable and extensible nature of this strategy.
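    A minimal sketch of the pheromone-based patrol decision, assuming an illustrative scoring rule (idle time divided by pheromone level) rather than the paper's exact Bayesian formulation; all weights and time stamps below are invented.

```python
import random

def next_hotspot(pheromone, last_visit, now, w_idle=1.0, rng=random):
    """Pick the next hotspot to patrol: favour locations whose pheromone
    (recent-visit marker) has decayed and that have been idle longest.
    A probabilistic choice keeps the route unpredictable."""
    scores = [w_idle * (now - last_visit[h]) / (1.0 + pheromone[h])
              for h in range(len(pheromone))]
    total = sum(scores)
    weights = [s / total for s in scores]
    return rng.choices(range(len(pheromone)), weights=weights, k=1)[0]

random.seed(42)
pheromone = [0.9, 0.1, 0.5]     # hotspot 1 carries the weakest recent-visit marker
last_visit = [9.0, 1.0, 5.0]    # time stamps of the last visits
counts = [0, 0, 0]
for _ in range(1000):
    counts[next_hotspot(pheromone, last_visit, now=10.0)] += 1
print(counts)
```

Over many draws the long-neglected hotspot 1 is chosen most often, yet no single choice is certain, which is the unpredictability property the strategy needs.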

  3. Optimum design of vortex generator elements using Kriging surrogate modelling and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Neelakantan, Rithwik; Balu, Raman; Saji, Abhinav

    Vortex generators (VGs) are small angled plates located in a spanwise fashion aft of the leading edge of an aircraft wing. They control airflow over the upper surface of the wing by creating vortices which energise the boundary layer. The parameters considered in the optimisation study of the VGs are their height, orientation angle and location along the chord in a low subsonic flow over a NACA0012 airfoil. The objective function to be maximised is the L/D ratio of the airfoil. The design data are generated using the commercially available ANSYS FLUENT software and are modelled using a Kriging-based interpolator. This surrogate model is used along with genetic algorithm software to arrive at the optimum shape of the VGs. The results of this study will be confirmed with actual wind tunnel tests on scaled models.

  4. Design of two-dimensional photonic crystals with large absolute band gaps using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Linfang; Ye, Zhuo; He, Sailing

    2003-07-01

    A two-stage genetic algorithm (GA) with a floating mutation probability is developed to design a two-dimensional (2D) photonic crystal of a square lattice with the maximal absolute band gap. The unit cell is divided equally into many square pixels, and each filling pattern of pixels with two dielectric materials corresponds to a chromosome consisting of binary digits 0 and 1. As a numerical example, the two-stage GA gives a 2D GaAs structure with a relative width of the absolute band gap of about 19%. After further optimization, a new 2D GaAs photonic crystal is found with an absolute band gap much larger than those reported before.
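    The GA machinery described above can be sketched generically. In the sketch below the band-gap evaluation (which requires a photonic band-structure solver) is replaced by a toy pixel-matching fitness, so only the chromosome encoding, selection, crossover and mutation steps are representative of the paper's method.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, generations=60,
                      crossover_p=0.9, mutation_p=0.02, rng=random):
    """Minimal binary GA: each chromosome is a string of 0/1 pixels
    (one dielectric material per unit-cell pixel). The fitness function
    is supplied by the caller."""
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                      # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(scored[:pop_size // 2], 2)  # truncation selection
            child = list(p1)
            if rng.random() < crossover_p:         # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < mutation_p) for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy fitness standing in for the band-gap objective: match a target pattern.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
fitness = lambda c: sum(a == b for a, b in zip(c, target))
random.seed(1)
best = genetic_algorithm(fitness, n_bits=len(target))
print(fitness(best))
```

The paper's two-stage scheme additionally varies the mutation probability between stages; substituting a band-structure solver for the toy fitness recovers the actual design problem.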

  5. Design and development of guidance navigation and control algorithms for spacecraft rendezvous and docking experimentation

    NASA Astrophysics Data System (ADS)

    Guglieri, Giorgio; Maroglio, Franco; Pellegrino, Pasquale; Torre, Liliana

    2014-01-01

    This paper presents the design of the GNC system of a ground test-bed for spacecraft rendezvous and docking experiments. The test-bed is developed within the STEPS project (Systems and Technologies for Space Exploration). The facility consists of a flat floor and two scaled vehicles, one active chaser and one “semi-active” target. Rendezvous and docking maneuvers are performed floating on the plane with pierced plates as lifting systems. The system is designed to work with both inertial and non-inertial reference frames, receiving signals from navigation sensors such as accelerometers, gyroscopes, a laser meter, a radio finder and a video camera, and combining them with a digital filter. A Proportional-Integral-Derivative control law and pulse-width modulators are used to command the cold-gas thrusters of the chaser and to follow an assigned trajectory with its specified velocity profile. The design and development of the guidance, navigation and control system and its architecture, including the software algorithms, are detailed in the paper, with a performance analysis based on a simulated environment. A complete description of the integrated subsystems is also presented.
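    A minimal sketch of the PID-plus-PWM command chain on a hypothetical 1-D position-hold task: the gains, thrust level, mass and dynamics below are invented for illustration and are not those of the STEPS test-bed.

```python
def pid_step(state, error, dt, kp, ki, kd):
    """One update of a discrete PID control law. `state` carries the
    integral and previous error between calls."""
    integral, prev_err = state
    integral += error * dt
    derivative = (error - prev_err) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

def pwm_on_time(u, u_max, period):
    """Pulse-width modulation: convert the commanded (saturated) control
    magnitude into thruster on-time within one modulation period."""
    duty = min(abs(u) / u_max, 1.0)
    return duty * period

# Hypothetical 1-D position hold: drive x toward 0 with on/off thrusters.
x, v, m = 1.0, 0.0, 10.0          # position [m], velocity [m/s], mass [kg]
state, dt, thrust = (0.0, 0.0), 0.1, 5.0
for _ in range(600):
    u, state = pid_step(state, -x, dt, kp=8.0, ki=0.1, kd=20.0)
    on_time = pwm_on_time(u, u_max=thrust, period=dt)
    a = (thrust if u > 0 else -thrust) * (on_time / dt) / m  # mean thrust over the period
    x += v * dt + 0.5 * a * dt * dt
    v += a * dt
print(abs(x) < 0.05)
```

The PWM stage is what lets a continuous PID command drive on/off cold-gas thrusters: the duty cycle, not the thrust magnitude, is modulated.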

  6. Aerodynamic Design Exploration for Reusable Launch Vehicle Using Genetic Algorithm with Navier-Stokes Solver

    NASA Astrophysics Data System (ADS)

    Tatsukawa, Tomoaki; Nonomura, Taku; Oyama, Akira; Fujii, Kozo

    In this study, aerodynamic design exploration for a reusable launch vehicle (RLV) is conducted using a genetic algorithm with a Navier-Stokes solver to understand the aerodynamic characteristics of various body configurations and to find design information such as trade-off information among objectives. A multi-objective aerodynamic design optimization for minimizing zero-lift drag at a supersonic condition, maximizing the maximum lift-to-drag ratio (L/D) at a subsonic condition, maximizing the maximum L/D at a supersonic condition, and maximizing the volume of the shape is conducted for a bi-conical RLV based on computational fluid dynamics (CFD). The total number of evaluations in the multi-objective optimization is 400, and evaluating one body configuration requires eight CFD runs; in total, 3200 CFD runs are conducted. The analysis of the Pareto-optimal solutions shows clear trade-off relations among the objectives, and the analysis of the flow fields shows that the shape of the minimum-drag configuration is almost the same as that of the maximum-L/D configuration at the supersonic condition. The shape for the maximum L/D at the subsonic condition obtains additional lift at the kink compared with the minimum-drag configuration, which leads to an enhancement of L/D.
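    The "analysis of Pareto-optimal solutions" rests on non-dominated filtering, which can be sketched directly. The objective vectors below are toy values (maximized objectives are negated so that everything is minimized), not results from the study.

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors, assuming all
    objectives are to be minimized and the points are distinct. A point p
    is dominated when some other point q is no worse in every objective."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Toy designs: (zero-lift drag, -max L/D), both minimized.
designs = [(1.0, -9.0), (0.8, -7.0), (1.2, -9.5), (0.8, -9.0), (0.9, -6.0)]
print(pareto_front(designs))  # → [(1.2, -9.5), (0.8, -9.0)]
```

The surviving points are exactly those expressing a trade-off: improving one objective requires worsening another, which is the relation the study reads off its 400 evaluated configurations.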

  7. Experimental Design for Groundwater Pumping Estimation Using a Genetic Algorithm (GA) and Proper Orthogonal Decomposition (POD)

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Cheng, W.; Yeh, W. W.

    2010-12-01

    This study optimizes observation well locations and sampling frequencies for the purpose of estimating unknown groundwater extraction in an aquifer system. Proper orthogonal decomposition (POD) is used to reduce the groundwater flow model, thus reducing the computation burden and data storage space associated with solving this problem for heavily discretized models. This reduced model can store a significant amount of system information in a much smaller reduced state vector. Along with the sensitivity equation method, the proposed approach can efficiently compute the Jacobian matrix that forms the information matrix associated with the experimental design. The criterion adopted for experimental design is the maximization of the trace of the weighted information matrix. Under certain conditions, this is equivalent to the classical A-optimality criterion established in experimental design. A genetic algorithm (GA) is used to optimize the observation well locations and sampling frequencies for maximizing the collected information from the hydraulic head sampling at the observation wells. We applied the proposed approach to a hypothetical 30,000-node groundwater aquifer system. We studied the relationship among the number of observation wells, observation well locations, sampling frequencies, and the collected information for estimating unknown groundwater extraction.
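    The trace criterion has a structure worth noting: the trace of the information matrix J_s^T J_s formed from the selected observation rows is simply the sum of the squared norms of those rows, so for the unweighted case picking the top-k rows is exactly optimal. The sketch below uses hypothetical sensitivities and omits the weighting; the paper's GA search handles the richer weighted problem with sampling-frequency constraints.

```python
import numpy as np

def a_optimal_greedy(jacobian, k):
    """Pick k observation rows maximizing trace(J_s^T J_s). Because the
    trace decomposes into independent per-row contributions (squared row
    norms), selecting the k largest rows is exact for this criterion."""
    gains = np.sum(jacobian**2, axis=1)    # per-row contribution to the trace
    return sorted(np.argsort(gains)[-k:].tolist())

rng = np.random.default_rng(0)
J = rng.normal(size=(8, 3))                # sensitivities: 8 candidate wells, 3 unknowns
chosen = a_optimal_greedy(J, k=3)
info = J[chosen].T @ J[chosen]
print(chosen, round(float(np.trace(info)), 3))
```

With weights, time-varying sensitivities and combinatorial frequency choices, the decomposition no longer yields the answer directly, which is where the GA in the paper comes in; the reduced (POD) model keeps each Jacobian evaluation cheap.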

  8. Passive vibration control via unusual geometries: the application of genetic algorithm optimization to structural design

    NASA Astrophysics Data System (ADS)

    Keane, A. J.

    1995-08-01

    In the majority of aerospace structures, vibration transmission problems are dealt with by the application of heavy, viscoelastic damping materials. More recently, interest has focussed on using active vibration control methods to reduce noise transmission. This paper examines a third, and potentially much cheaper, method: redesigning the load bearing structure so that it has intrinsic, passive noise filtration characteristics. It shows that very significant, broadband noise isolation (of around 60 dB over a 100 Hz band) can be achieved without compromising other aspects of the design. Here, the genetic algorithm (GA), one of a number of recently developed evolutionary computing methods, is employed to produce the desired designs. The problem is set up as one of multi-dimensional optimization, where the geometric parameters of the design are the free variables and the band-averaged noise transmission is the objective function; the problem is then constrained by the need to maintain structural integrity. Set out in this way, even a simple structural problem has many tens of variables; a real structure would have many hundreds. Consequently, the optimization domain is very time consuming for traditional methods to deal with. This is where modern evolutionary techniques become so useful: their convergence rates typically degrade less rapidly with increasing numbers of variables than those of more traditional methods. Even so, they must be used with some care to gain the best results.

  9. Design of coded aperture arrays by means of a global optimization algorithm

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Liu, Liren; Yang, Qingguo

    2006-08-01

    Coded aperture imaging (CAI) has evolved as a standard technique for imaging high-energy photon sources and has found numerous applications. Coded aperture arrays (CAAs) are the most important devices in applications of CAI. In recent years, many approaches have been presented to design optimum or near-optimum CAAs. Uniformly redundant arrays (URAs) are the most successful CAAs because their cyclic autocorrelation consists of a sequence of delta functions on a flat sidelobe, which can easily be subtracted once the object has been reconstructed. Unfortunately, the existing methods can only design URAs with a limited number of array sizes and a fixed autocorrelation sidelobe-to-peak ratio. In this paper, we present a method to design more flexible URAs by means of a global optimization algorithm named DIRECT. With our approach, we obtain various types of URAs, including the filled URAs that can be constructed by existing methods and sparse URAs that, to the best of our knowledge, have not previously been constructed or mentioned in the literature.
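    The delta-on-flat-sidelobe property that makes URAs attractive is easy to verify numerically. The sketch below uses the classical (7,3,1) cyclic difference set {1, 2, 4} as a toy aperture; it is a standard textbook example, not an array produced by the DIRECT search in the paper.

```python
import numpy as np

def cyclic_autocorrelation(a):
    """Cyclic (periodic) autocorrelation of a binary aperture array."""
    a = np.asarray(a, float)
    return np.array([np.dot(a, np.roll(a, s)) for s in range(len(a))])

# A (7,3,1) cyclic difference set: every nonzero shift overlaps in exactly
# one position, giving a peak of 3 at zero shift on a flat sidelobe of 1.
aperture = np.zeros(7)
aperture[[1, 2, 4]] = 1
print(cyclic_autocorrelation(aperture))
```

A flat sidelobe means the out-of-focus contribution is a constant that can be subtracted after reconstruction; the paper's optimization searches for larger and sparser arrays with this same two-valued autocorrelation.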

  10. Dealing with change in process choreographies: Design and implementation of propagation algorithms.

    PubMed

    Fdhila, Walid; Indiono, Conrad; Rinderle-Ma, Stefanie; Reichert, Manfred

    2015-04-01

    Enabling process changes constitutes a major challenge for any process-aware information system. This not only holds for processes running within a single enterprise, but also for collaborative scenarios involving distributed and autonomous partners. In particular, if one partner adapts its private process, the change might affect the processes of the other partners as well. Accordingly, it might have to be propagated to concerned partners in a transitive way. A fundamental challenge in this context is to find ways of propagating the changes in a decentralized manner. Existing approaches are limited with respect to the change operations considered as well as their dependency on a particular process specification language. This paper presents a generic change propagation approach that is based on the Refined Process Structure Tree, i.e., the approach is independent of a specific process specification language. Further, it considers a comprehensive set of change patterns. For all these change patterns, it is shown that the provided change propagation algorithms preserve consistency and compatibility of the process choreography. Finally, a proof-of-concept prototype of a change propagation framework for process choreographies is presented. Overall, comprehensive change support in process choreographies will foster the implementation and operational support of agile collaborative process scenarios.

  11. GA-based Design Algorithms for the Robust Synthetic Genetic Oscillators with Prescribed Amplitude, Period and Phase

    PubMed Central

    Chen, Bor-Sen; Chen, Po-Wei

    2010-01-01

    In the past decade, the development of synthetic gene networks has attracted much attention from researchers. In particular, the genetic oscillator known as the repressilator has become a paradigm for how to design a gene network with a desired dynamic behaviour. Even though the repressilator can show oscillatory properties in its protein concentrations, their amplitudes, frequencies and phases are perturbed by kinetic parametric fluctuations (intrinsic molecular perturbations) and external disturbances (extrinsic molecular noises) of the environment. Therefore, how to design a robust genetic oscillator with the desired amplitude, frequency and phase under stochastic intrinsic and extrinsic molecular noises is an important topic for synthetic biology. In this study, based on periodic reference signals with arbitrary amplitudes, frequencies and phases, a robust synthetic gene oscillator is designed by tuning the kinetic parameters of the repressilator via a genetic algorithm (GA) so that the protein concentrations can track the desired periodic reference signals under intrinsic and extrinsic molecular noises. The GA is a stochastic optimization algorithm inspired by the mechanisms of natural selection and evolutionary genetics. With the proposed GA-based design algorithm, the repressilator can track the desired amplitude, frequency and phase of oscillation under intrinsic and extrinsic noises through the optimization of a fitness function. The proposed GA-based design algorithm can mimic natural selection in the evolutionary process to select adequate kinetic parameters for robust genetic oscillators. The design method can be easily extended to the design of any synthetic gene network with prescribed behaviours. PMID:20535234

  12. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass, which in turn requires optimizing every constitutive part of the plane. This task is even more delicate when composite materials are used, in which case it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm allows the impact of design choices and their consequences on the failure risk of the component to be estimated. The main focus of the paper is the optimization of tool design. Within the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  13. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    SciTech Connect

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population, and altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate of the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass-fueled cookstove used in developing nations for household cooking. In this cookstove, wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove’s cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design; because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion of this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.

  14. Algorithms and theory for the design and programming of industrial control systems materialized with PLC's

    NASA Astrophysics Data System (ADS)

    Montoya Villena, Rafael

    In accordance with its title, the general objective of this thesis is to develop a clear, simple and systematic methodology for programming PLC-type devices. With this aim in mind, we use the following elements: Codification of all variable types. This section is very important since it allows us to work with little information; the necessary rules are given to codify all types of phrases produced in industrial processes. An algorithm that describes process evolution, which has been called the process D.F. This is one of the most important contributions since, together with the codification of information, it allows us to represent the evolution of the process graphically, whatever design theory is used. Theory selection. Evidently, the use of some kind of design method is necessary to obtain the logic equations. For this particular case, we use binodal theory, an ideal theory for wired technologies, since it can obtain highly reduced schemas for relatively simple automatisms, which means a minimum number of components. User program outline algorithm (D.F.P.). This is another necessary contribution and perhaps the most important one: the logic equations resulting from binodal theory are compatible with process evolution if wired technology is used, whether electric, electronic, pneumatic, etc.; on the other hand, the performance characteristics of PLC devices mean that the order of the program instructions determines whether the automatism is validated, as we have shown in articles and in lectures at national and international congresses. Therefore, we codify all information concerning the process to be automated, graphically represent its temporal evolution and, applying binodal theory and the D.F.P. (previously adapted), make the logic equations compatible both with the process to be automated and with the device in which they will be implemented (a PLC in our case).

  15. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    PubMed

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  16. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins

    PubMed Central

    Afanasyev, Vsevolod; Buldyrev, Sergey V.; Dunn, Michael J.; Robst, Jeremy; Preston, Mark; Bremner, Steve F.; Briggs, Dirk R.; Brown, Ruth; Adlard, Stacey; Peat, Helen J.

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge’s accurate performance and demonstrates how its design is a significant improvement on existing systems. PMID:25894763

  17. Designing high specificity anti-cancer nanocarriers by exploiting non-equilibrium effects

    NASA Astrophysics Data System (ADS)

    Tsekouras, Konstantinos; Goncharenko, Igor; Colvin, Michael; Huang, Kerwyn; Gopinathan, Ajay

    2012-11-01

    Although targeting of cancer cells using drug-delivering nanocarriers holds promise for improving therapeutic agent specificity, the strategy of maximizing ligand affinity for receptors overexpressed on cancer cells is suboptimal. To determine design principles that maximize nanocarrier specificity for cancer cells, we studied a generalized kinetics-based theoretical model of nanocarriers with one or more ligands that specifically bind these overexpressed receptors. We show that kinetics inherent to the system play an important role in determining specificity and can in fact be exploited to attain orders of magnitude improvement in specificity. In contrast to the current trend of therapeutic design, we show that these specificity increases can generally be achieved by a combination of low rates of endocytosis and nanocarriers with multiple low-affinity ligands. These results are broadly robust across endocytosis mechanisms and drug-delivery protocols, suggesting the need for a paradigm shift in receptor-targeted drug-delivery design.
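    The core claim, that many low-affinity ligands discriminate receptor densities more sharply than one high-affinity ligand, can be illustrated with a toy equilibrium model. The Langmuir occupancy form, the all-ligands-must-engage attachment rule, and the receptor densities below are illustrative assumptions, not the authors' kinetic model.

```python
def bind_prob(receptor_density, K):
    # single-ligand equilibrium occupancy (simple Langmuir form)
    x = receptor_density * K
    return x / (1.0 + x)

def selectivity(n_ligands, K, r_cancer=10.0, r_normal=1.0):
    # toy attachment rule: the carrier stays bound only if every one of
    # its n ligands engages a receptor, so P_attach = p ** n
    p_cancer = bind_prob(r_cancer, K) ** n_ligands
    p_normal = bind_prob(r_normal, K) ** n_ligands
    return p_cancer / p_normal

# five weak ligands vs one strong ligand, cancer cells with 10x receptors
weak_multi = selectivity(n_ligands=5, K=0.01)
strong_mono = selectivity(n_ligands=1, K=100.0)
```

    With a strong ligand both cell types saturate (p near 1), so selectivity collapses toward 1; with weak ligands each factor retains roughly the 10x receptor-density ratio, and multivalency raises it to approximately the n-th power.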

  18. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    SciTech Connect

    Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F.; Dimiccoli, V.; Losito, O.; Prisco, R.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the computer code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
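    A particle swarm optimizer of the general kind described can be sketched in a few lines. The sphere function below is a stand-in objective; the SCL cavity model itself is not reproduced here.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer; returns (best_position, best_value)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# stand-in objective: sphere function, minimum 0 at the origin
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5.0, 5.0))
```

    In a cavity-design setting, the objective would instead evaluate the simplified tank model and the bounds would describe geometric parameters.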

  19. General asymmetric neural networks and structure design by genetic algorithms: A learning rule for temporal patterns

    SciTech Connect

    Bornholdt, S.; Graudenz, D.

    1993-07-01

    A learning algorithm based on genetic algorithms for asymmetric neural networks with an arbitrary structure is presented. It is suited for the learning of temporal patterns and leads to stable neural networks with feedback.
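    Genetic-algorithm learning of a small asymmetric network can be sketched as follows. The three-unit threshold network, the period-2 target pattern, and the GA parameters are illustrative assumptions, not the authors' setup.

```python
import random

rng = random.Random(7)

# period-2 temporal pattern: state A must map to B and B back to A
A = (1, -1, 1)
B = (-1, 1, 1)

def step(W, s):
    # asymmetric threshold network: s'_i = sign(sum_j W[i][j] * s_j)
    return tuple(1 if sum(W[i][j] * s[j] for j in range(3)) >= 0 else -1
                 for i in range(3))

def fitness(W):
    # one-step prediction score over both target transitions (max 6)
    return (sum(x == y for x, y in zip(step(W, A), B))
            + sum(x == y for x, y in zip(step(W, B), A)))

def mutate(W, sigma=0.3):
    return [[w + rng.gauss(0.0, sigma) for w in row] for row in W]

pop = [[[rng.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
       for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 6:
        break
    # elitism plus mutated offspring of the fitter half
    pop = pop[:10] + [mutate(rng.choice(pop[:20])) for _ in range(30)]
pop.sort(key=fitness, reverse=True)
best = pop[0]
```

    A weight matrix scoring 6 reproduces the temporal pattern exactly: started in A, the free-running network oscillates A, B, A, B, and so on, which is the stable feedback behaviour the abstract refers to.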

  20. Machine-learning algorithms define pathogen-specific local immune fingerprints in peritoneal dialysis patients with bacterial infections.

    PubMed

    Zhang, Jingjing; Friberg, Ida M; Kift-Morgan, Ann; Parekh, Gita; Morgan, Matt P; Liuzzi, Anna Rita; Lin, Chan-Yu; Donovan, Kieron L; Colmont, Chantal S; Morgan, Peter H; Davis, Paul; Weeks, Ian; Fraser, Donald J; Topley, Nicholas; Eberl, Matthias

    2017-03-16

    The immune system has evolved to sense invading pathogens, control infection, and restore tissue integrity. Despite symptomatic variability in patients, unequivocal evidence that an individual's immune system distinguishes between different organisms and mounts an appropriate response is lacking. We here used a systematic approach to characterize responses to microbiologically well-defined infection in a total of 83 peritoneal dialysis patients on the day of presentation with acute peritonitis. A broad range of cellular and soluble parameters was determined in peritoneal effluents, covering the majority of local immune cells, inflammatory and regulatory cytokines and chemokines as well as tissue damage-related factors. Our analyses, utilizing machine-learning algorithms, demonstrate that different groups of bacteria induce qualitatively distinct local immune fingerprints, with specific biomarker signatures associated with Gram-negative and Gram-positive organisms, and with culture-negative episodes of unclear etiology. Even more, within the Gram-positive group, unique immune biomarker combinations identified streptococcal and non-streptococcal species including coagulase-negative Staphylococcus spp. These findings have diagnostic and prognostic implications by informing patient management and treatment choice at the point of care. Thus, our data establish the power of non-linear mathematical models to analyze complex biomedical datasets and highlight key pathways involved in pathogen-specific immune responses.
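    The authors' actual models are not reproduced here, but the fingerprint-classification idea can be illustrated with a minimal nearest-centroid classifier over made-up two-marker "fingerprints"; all values and labels below are hypothetical.

```python
def train_centroids(X, y):
    """Nearest-centroid 'fingerprint' classifier: one mean vector per class."""
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        counts[yi] = counts.get(yi, 0) + 1
        prev = sums.get(yi, [0.0] * len(xi))
        sums[yi] = [s + v for s, v in zip(prev, xi)]
    return {c: [s / counts[c] for s in sums[c]] for c in sums}

def classify(centroids, x):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    # assign the class whose mean fingerprint is closest
    return min(centroids, key=lambda c: dist2(centroids[c], x))

# hypothetical two-marker fingerprints for two pathogen groups
X = [[9.0, 1.0], [8.5, 1.5], [1.0, 7.5], [1.5, 8.0]]
y = ["gram-negative", "gram-negative", "gram-positive", "gram-positive"]
model = train_centroids(X, y)
```

    The study's point is that real effluent fingerprints span many more dimensions and need non-linear models, but the workflow (learn per-group signatures, then assign new episodes to the nearest signature) has this same shape.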

  1. Design Specifications for the Advanced Instructional Design Advisor (AIDA). Volume 1

    DTIC Science & Technology

    1992-01-01

    task order contract ( Task 6). 1.2.2 Instructional Design Guidelines: Functional Requirements The second thrust attempts to identify optimal methods...Training technicians to interpret written instructions is (or should be) more a matter of designing the instructions than training the students. The task of...troubleshooting action against an optimal trouble- shooting model so that it can suggest more fruitful approaches at appropriate times. The optimal model used by

  2. 36 CFR 907.5 - Specific responsibilities of designated Corporation official.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Corporation's planning and decision-making processes to ensure that environmental factors are properly... DEVELOPMENT CORPORATION ENVIRONMENTAL QUALITY § 907.5 Specific responsibilities of designated Corporation... pertaining to environmental protection and enhancement. (b) Establish and maintain working relationships...

  3. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design

    PubMed Central

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-01-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589

  4. Optimum Design and Analysis of Axial Hybrid Magnetic Bearings Using Multi-Objective Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Rao, J. S.; Tiwari, R.

    2012-01-01

    Design optimization of axial hybrid magnetic thrust bearings (with bias magnets) was carried out using multi-objective evolutionary algorithms (MOEAs) and compared with the case of electromagnetic bearings (without bias magnets). Mathematical models of objective functions and associated constraints are presented and discussed. The different aspects of the implemented MOEA are discussed. It is observed that the size of the bearing with bias magnets is considerably reduced as compared to the case of those without bias magnets, with the objective function as the minimization of weight for the same operating conditions. Similarly, current densities are reduced drastically with bias magnets when the objective function is chosen as the minimization of the power loss. For illustration of various performances of the bearing, a typical design has been chosen from the final optimized population by an "a posteriori" approach. Sensitivities for both the objective functions with respect to the outer radius, the inner radius, and the height of coil are observed to be approximately in the ratio 2.5:1.6:1. Analysis of the final optimized population has been carried out and compared with the case without bias magnets, and some salient points are observed in the case of using bias magnets.
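    The core MOEA step of keeping only non-dominated designs can be sketched as a Pareto filter. The (weight, power loss) pairs below are hypothetical candidate designs, not values from the study.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset, with both objectives to be minimized."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (weight, power loss) candidate designs
designs = [(5.0, 3.0), (4.0, 4.0), (6.0, 2.5), (5.5, 3.5), (4.5, 5.0)]
front = pareto_front(designs)
```

    The dominated candidates (5.5, 3.5) and (4.5, 5.0) are each beaten on both objectives by another design, so an "a posteriori" choice of a typical design is made only among the surviving trade-off points.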

  5. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design.

    PubMed

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-07-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions.

  6. Practical Findings from Applying the PSD Model for Evaluating Software Design Specifications

    NASA Astrophysics Data System (ADS)

    Räisänen, Teppo; Lehto, Tuomas; Oinas-Kukkonen, Harri

    This paper presents practical findings from applying the PSD model to evaluating the support for persuasive features in software design specifications for a mobile Internet device. On the one hand, our experiences suggest that the PSD model fits relatively well for evaluating design specifications. On the other hand, the model would benefit from more specific heuristics for evaluating each technique to avoid unnecessary subjectivity. Better distinction between the design principles in the social support category would also make the model easier to use. Practitioners who have no theoretical background can apply the PSD model to increase the persuasiveness of the systems they design. The greatest benefit of the PSD model for researchers designing new systems may be achieved when it is applied together with a sound theory, such as the Elaboration Likelihood Model. Using the ELM together with the PSD model, one may increase the chances for attitude change.

  7. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  8. Dynamic composition of medical support services in the ICU: Platform and algorithm design details.

    PubMed

    Hristoskova, Anna; Moeyersoon, Dieter; Van Hoecke, Sofie; Verstichel, Stijn; Decruyenaere, Johan; De Turck, Filip

    2010-12-01

    The Intensive Care Unit (ICU) is an extremely data-intensive environment where each patient needs to be monitored 24/7. Bedside monitors continuously register vital patient values (such as serum creatinine, systolic blood pressure) which are recorded frequently in the hospital database (e.g. every 2 min in the ICU of the Ghent University Hospital), laboratories generate hundreds of results of blood and urine samples, and nurses measure blood pressure and temperature up to 4 times an hour. Processing such a large amount of data requires an automated system to support the physicians' daily work. The Intensive Care Service Platform (ICSP) offers the needed support through the development of medical support services for processing and monitoring patients' data. With an increased deployment of these medical support services, reusing existing services as building blocks to create new services offers flexibility to the developer and accelerates the design process. This paper presents a new addition to the ICSP, the Dynamic Composer for Web services. Based on a semantic description of the medical support services, this Composer enables a service to be executed by creating a composition of medical services that provide the needed calculations. The composition is achieved using various algorithms satisfying certain quality of service (QoS) constraints and requirements. In addition to automatic composition, the paper also proposes a recovery mechanism in case of unavailable services. When executing the composition of medical services, unavailable services are dynamically replaced by equivalent services or by a new composition achieving the same result. The presented platform and QoS algorithms are put through extensive performance and scalability tests for typical ICU scenarios, in which basic medical services are composed into a complex patient monitoring service.

  9. BWM*: A Novel, Provable, Ensemble-based Dynamic Programming Algorithm for Sparse Approximations of Computational Protein Design.

    PubMed

    Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R

    2016-06-01

    Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.

  10. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design efforts: the work is difficult and complex, yet largely redundant. Automatic generation of database software systems has been proposed as a solution to these problems. In order to generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation. The technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications. The mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  11. HAL/SM system functional design specification. [systems analysis and design analysis of central processing units

    NASA Technical Reports Server (NTRS)

    Ross, C.; Williams, G. P. W., Jr.

    1975-01-01

    The functional design of a preprocessor and its subsystems is described. A structure chart and a data flow diagram are included for each subsystem. A group of intermodule interface definitions (one definition per module) is also included immediately following the structure chart and data flow diagram for each subsystem. Each intermodule interface definition consists of the identification of the module, the function the module is to perform, the identification and definition of parameter interfaces to the module, and any design notes associated with the module. Compilers and computer libraries are also described.

  12. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 5: Specification for EROS operations control center

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The functional, performance, and design requirements for the Operations Control Center (OCC) of the Earth Observatory Satellite (EOS) system are presented. The OCC controls the operations of the EOS satellite to acquire mission data consisting of: (1) thematic mapper data, (2) multispectral scanner data on EOS-A, or High Resolution Pointable Imager data on EOS-B, and (3) data collection system (DCS) data. The various inputs to the OCC are identified. The functional requirements of the OCC are defined. The specific systems and subsystems of the OCC are described and block diagrams are provided.

  13. Better Educational Website Interface Design: The Implications from Gender-Specific Preferences in Graduate Students

    ERIC Educational Resources Information Center

    Hsu, Yu-chang

    2006-01-01

    This study investigated graduate students' gender-specific preferences for certain website interface design features, intending to generate useful information for instructors in choosing and for website designers in creating educational websites. The features investigated in this study included colour value, major navigation buttons placement, and…

  14. Single Event Testing on Complex Devices: Test Like You Fly versus Test-Specific Design Structures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2014-01-01

    We present a framework for evaluating complex digital systems targeted for harsh radiation environments such as space. Focus is limited to analyzing the single event upset (SEU) susceptibility of designs implemented inside Field Programmable Gate Array (FPGA) devices. Tradeoffs are provided between application-specific versus test-specific test structures.

  15. 36 CFR 907.5 - Specific responsibilities of designated Corporation official.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Specific responsibilities of... DEVELOPMENT CORPORATION ENVIRONMENTAL QUALITY § 907.5 Specific responsibilities of designated Corporation... parties information and advice on the Corporation's policies for protecting and enhancing the quality...

  16. Partially constrained ant colony optimization algorithm for the solution of constrained optimization problems: Application to storm water network design

    NASA Astrophysics Data System (ADS)

    Afshar, M. H.

    2007-04-01

    This paper exploits a unique feature of the Ant Colony Optimization Algorithm (ACOA), namely its incremental solution-building mechanism, to develop partially constrained ACO algorithms for the solution of optimization problems with explicit constraints. The method is based on providing a tabu list for each ant at each decision point of the problem so that some of the problem constraints are satisfied. The application of the method to the problem of storm water network design is formulated and presented. The network nodes are considered as the decision points, and the nodal elevations of the network are used as the decision variables of the optimization problem. Two partially constrained ACO algorithms are formulated and applied to a benchmark example of storm water network design, and the results are compared with those of the original unconstrained algorithm and existing methods. In the first algorithm the positive-slope constraints are satisfied explicitly and the rest are satisfied by using the penalty method, while in the second the constraints on the maximum ratio of flow depth to diameter are also satisfied explicitly via the tabu list. The method is shown to be very effective and efficient in locating optimal solutions, and the resulting ACO algorithms show good convergence characteristics. The proposed algorithms are also shown to be relatively insensitive to the initial colony used, compared to the original algorithm. Furthermore, the method proves capable of finding an optimal or near-optimal solution, independent of the discretisation level and the size of the colony used.
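    The tabu-list idea (removing, at each decision point, elevation choices that would violate the positive-slope constraint) can be sketched with a toy ACO. The ground profile, elevation grid, cost model, and pheromone parameters below are illustrative assumptions, not the paper's formulation.

```python
import random

rng = random.Random(3)

ground = [10.0, 9.0, 9.5, 8.0]                       # ground elevation per node
options = [round(0.5 * k, 1) for k in range(2, 21)]  # candidate invert elevations

def build_solution(tau):
    """One ant picks an elevation per node; the tabu list drops choices
    that would break the positive-slope (strictly decreasing) constraint."""
    sol, prev = [], float("inf")
    for node in range(len(ground)):
        feasible = [e for e in options if e < prev and e <= ground[node]]
        if not feasible:                 # dead end: abandon this ant
            return None
        weights = [tau[(node, e)] for e in feasible]
        choice = rng.choices(feasible, weights=weights)[0]
        sol.append(choice)
        prev = choice
    return sol

def cost(sol):
    # excavation cost proxy: total depth below ground level
    return sum(g - e for g, e in zip(ground, sol))

tau = {(n, e): 1.0 for n in range(len(ground)) for e in options}
best, best_cost = None, float("inf")
for _ in range(200):
    ants = [s for s in (build_solution(tau) for _ in range(10)) if s]
    if not ants:
        continue
    it_best = min(ants, key=cost)
    if cost(it_best) < best_cost:
        best, best_cost = it_best, cost(it_best)
    for key in tau:                      # pheromone evaporation
        tau[key] *= 0.9
    for node, e in enumerate(it_best):   # reinforce the iteration best
        tau[(node, e)] += 1.0 / (1.0 + cost(it_best))
```

    Every ant's output is feasible with respect to the slope constraint by construction, so no slope-violation penalty term is needed; remaining constraints would be handled by penalties, as in the paper's first variant.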

  17. A domain-specific design architecture for composite material design and aircraft part redesign

    NASA Technical Reports Server (NTRS)

    Punch, W. F., III; Keller, K. J.; Bond, W.; Sticklen, J.

    1992-01-01

    Advanced composites have been targeted as a 'leapfrog' technology that would provide a unique global competitive position for U.S. industry. Composites are unique in their requirement for an integrated approach to designing, manufacturing, and marketing of products developed utilizing the new materials of construction. Numerous studies extending across the entire economic spectrum of the United States, from aerospace to military to durable goods, have identified composites as a 'key' technology. In general there have been two approaches to composite construction: building models of a given composite material and then determining the characteristics of the material via numerical simulation and empirical testing; and experience-directed construction of fabrication plans for building composites with given properties. The first route sets a goal of capturing a basic understanding of a device (the composite) by use of a rigorous mathematical model; the second attempts to capture the expertise about the process of fabricating a composite, to date at a surface level, typically expressed in a rule-based system. From an AI perspective, these two research lines are attacking distinctly different problems, and both tracks have current limitations. The mathematical modeling approach has yielded a wealth of data, but a large number of simplifying assumptions are needed to make numerical simulation tractable. Likewise, although surface-level expertise about how to build a particular composite may yield important results, recent trends in the KBS area are towards augmenting surface-level problem solving with deeper-level knowledge. Many of the relative advantages of composites, e.g., the strength:weight ratio, are most prominent when the entire component is designed as a unitary piece. The bottleneck in undertaking such unitary design lies in the difficulty of the redesign task. Designing the fabrication protocols for a complex-shaped, thick-section composite is currently very difficult.

  18. Genetic Algorithm for Innovative Device Designs in High-Efficiency III–V Nitride Light-Emitting Diodes

    SciTech Connect

    Zhu, Di; Schubert, Martin F.; Cho, Jaehee; Schubert, E. Fred; Crawford, Mary H.; Koleske, Daniel D.; Shim, Hyunwook; Sone, Cheolsoo

    2012-01-01

    Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III–V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.

  19. Optimal design of an irregular Fresnel lens for multiple light sources using a three-layered Hierarchical Genetic Algorithm.

    PubMed

    Chen, Wen-Gong; Uang, Chii-Maw; Jou, Chen-Hai

    2007-08-06

    A two-layered Hierarchical Genetic Algorithm (HGA) was proposed in a previous paper to solve the design problem of a large scale Fresnel lens used in a multiple-source lighting system. The research objective of this paper is to extend the previous work by utilizing a three-layered HGA. The goal of the suggested approach is to decrease the reliance on deciding the number of groove segments for the designed Fresnel lenses, as well as to increase the variety of groove angles in a segment to improve the performance of the designed Fresnel lens. The proposed algorithm will be applied on a simulated reading light system, and the simulation results demonstrate that the proposed approach not only makes the design of a large scale Fresnel lens more feasible but also works better than the previous one in both illuminance and uniformity for a simulated reading light system.

  20. Laser communication experiment. Volume 1: Design study report: Spacecraft transceiver. Part 3: LCE design specifications

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The requirements for the design, fabrication, performance, and testing of a 10.6 micron optical heterodyne receiver subsystem for use in a laser communication system are presented. The receiver subsystem, as a part of the laser communication experiment operates in the ATS 6 satellite and in a transportable ground station establishing two-way laser communications between the spacecraft and the transportable ground station. The conditions under which environmental tests are conducted are reported.

  1. Design of optimal pump-and-treat strategies for contaminated groundwater remediation using the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Kuo, Chin-Hwa; Michel, Anthony N.; Gray, William G.

    The placement of pumps and the selection of pumping rates are the most important issues in designing contaminated groundwater remediation systems that use a pump-and-treat strategy. Three nonlinear optimization formulations are proposed to address these problems. The first formulation considers hydraulic constraints and reduces the plume concentration to a specified regulatory standard value within a given planning time while minimizing capital cost. The second formulation minimizes residual contaminant in a fixed period under hydraulic constraints only. The third formulation is similar to the second; however, in this formulation the number of pumps is prespecified by using the results from the first formulation. The inclusion of well installation costs in the first problem formulation results in a nonsmooth objective function. For such problems, only local optimum solutions can be expected from conventional nonlinear optimization techniques. In the present paper, the simulated annealing algorithm is used to overcome these difficulties. Specific simulation studies indicate that the method advanced herein is promising and involves acceptable computation times.
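    The role of simulated annealing in handling the nonsmooth objective (a fixed installation cost charged for each active well) can be sketched as follows. The well costs, demand, and penalty weight are hypothetical, not values from the study.

```python
import math
import random

rng = random.Random(42)

N_WELLS, DEMAND = 4, 10.0
FIXED = [5.0, 4.0, 6.0, 3.0]   # installation cost if a well is used at all
UNIT = [1.0, 1.5, 0.8, 2.0]    # cost per unit pumping rate

def cost(q):
    # the fixed charge for each active well makes this objective nonsmooth
    c = sum(FIXED[i] + UNIT[i] * q[i] for i in range(N_WELLS) if q[i] > 0)
    shortfall = max(0.0, DEMAND - sum(q))
    return c + 100.0 * shortfall       # penalty for unmet cleanup demand

def neighbour(q):
    q2 = q[:]
    i = rng.randrange(N_WELLS)
    if rng.random() < 0.2:
        q2[i] = 0.0                    # occasionally switch a well off
    else:
        q2[i] = max(0.0, q2[i] + rng.gauss(0.0, 1.0))
    return q2

q = [2.5] * N_WELLS
c = cost(q)
best_q, best_c = q[:], c
T = 10.0
for _ in range(5000):
    q2 = neighbour(q)
    c2 = cost(q2)
    # Metropolis acceptance: always downhill, sometimes uphill while T is high
    if c2 < c or rng.random() < math.exp((c - c2) / T):
        q, c = q2, c2
        if c < best_c:
            best_q, best_c = q[:], c
    T = max(0.01, T * 0.999)
```

    Gradient-based methods stall on the discontinuity the fixed charge creates at q_i = 0; the occasional uphill acceptance lets the search hop across it, which is the difficulty the abstract cites as the motivation for simulated annealing.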

  2. Designing specific protein-protein interactions using computation, experimental library screening, or integrated methods.

    PubMed

    Chen, T Scott; Keating, Amy E

    2012-07-01

    Given the importance of protein-protein interactions for nearly all biological processes, the design of protein affinity reagents for use in research, diagnosis or therapy is an important endeavor. Engineered proteins would ideally have high specificities for their intended targets, but achieving interaction specificity by design can be challenging. There are two major approaches to protein design or redesign. Most commonly, proteins and peptides are engineered using experimental library screening and/or in vitro evolution. An alternative approach involves using protein structure and computational modeling to rationally choose sequences predicted to have desirable properties. Computational design has successfully produced novel proteins with enhanced stability, desired interactions and enzymatic function. Here we review the strengths and limitations of experimental library screening and computational structure-based design, giving examples where these methods have been applied to designing protein interaction specificity. We highlight recent studies that demonstrate strategies for combining computational modeling with library screening. The computational methods provide focused libraries predicted to be enriched in sequences with the properties of interest. Such integrated approaches represent a promising way to increase the efficiency of protein design and to engineer complex functionality such as interaction specificity.

  3. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  4. Optimal design of a 3-leg 6-DOF parallel manipulator for a specific workspace

    NASA Astrophysics Data System (ADS)

    Fu, Jianxun; Gao, Feng

    2016-07-01

    Optimum design of a six-degree-of-freedom (DOF) parallel manipulator with three legs based on a given workspace has seldom been studied. An optimal design method for a novel three-leg six-DOF parallel manipulator (TLPM) is presented. The mechanical structure of this robot is introduced; with this structure, the kinematic constraint equations are decoupled. Analytical solutions of the forward kinematics are worked out, and one configuration of this robot, including the position and orientation of the end-effector, is graphically displayed. Then, on the basis of several extreme positions of the kinematic performances, the task workspace is given. An optimal design algorithm is introduced to find the smallest dimensional parameters of the proposed robot. Examples illustrate the design results, and a design stability index is introduced to ensure that the robot remains a safe distance from the boundary of its actual workspace. Finally, a prototype of the robot is developed based on this method. The method can easily find appropriate kinematic parameters that size a robot with the smallest workspace enclosing a predefined task workspace. It improves design efficiency, ensures that the robot has a small mechanical size while possessing a large given workspace volume, and meets lightweight design requirements.

  5. Clinical evaluation of new automatic coronary-specific best cardiac phase selection algorithm for single-beat coronary CT angiography

    PubMed Central

    Xu, Lei; Fan, Zhanming; Liang, Junfu; Yan, Zixu; Sun, Zhonghua

    2017-01-01

    The aim of this study was to evaluate the workflow efficiency of a new automatic coronary-specific reconstruction technique (Smart Phase, GE Healthcare; SP) for selecting the best cardiac phase with the least coronary motion, compared with expert manual selection (MS) of the best phase in patients with high heart rates. A total of 46 patients with heart rates above 75 bpm who underwent single-beat coronary computed tomography angiography (CCTA) were enrolled in this study. CCTA of all subjects was performed on a 256-detector row CT scanner (Revolution CT, GE Healthcare, Waukesha, Wisconsin, US). With the SP technique, the acquired phase range was automatically searched in 2% phase intervals during the reconstruction process to determine the optimal phase for coronary assessment, while for routine expert MS, reconstructions were performed at 5% intervals and the best phase was manually determined. The reconstruction and review times were recorded to measure the workflow efficiency of each method. Two reviewers subjectively assessed image quality for each coronary artery in the MS and SP reconstruction volumes using a 4-point grading scale. The average heart rate of the enrolled patients was 91.1 ± 19.0 bpm. A total of 204 vessels were assessed. The subjective image quality using SP was comparable to that of MS (1.45 ± 0.85 vs 1.43 ± 0.81, respectively; p = 0.88). The average time was 246 seconds for manual best phase selection and 98 seconds for SP selection, an average saving of 148 seconds (60%) with the SP algorithm. The coronary-specific automatic best cardiac phase selection technique (Smart Phase) improves clinical workflow in patients with high heart rates and provides image quality comparable with manual best phase selection. Reconstruction of single-beat CCTA exams with SP can benefit users with less experience in CCTA image interpretation. PMID:28231322
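
    The SP technique's core selection step, choosing the reconstruction phase with the least coronary motion from a 2% grid, reduces to an argmin. The sketch below illustrates only that idea; the motion score is a made-up placeholder, not the vendor's actual motion metric:

```python
def best_phase(motion_score, phases):
    # Return the candidate phase whose motion score is lowest.
    return min(phases, key=motion_score)

# Hypothetical motion metric: assume mid-diastole (~44% R-R) is quietest.
score = lambda p: abs(p - 44)
phases = list(range(0, 100, 2))   # 2% intervals, as in the SP search
chosen = best_phase(score, phases)
```

    The manual workflow corresponds to the same search on a coarser 5% grid, evaluated by a human reader instead of a score function.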

  6. Designing mixed metal halide ammines for ammonia storage using density functional theory and genetic algorithms.

    PubMed

    Jensen, Peter Bjerre; Lysgaard, Steen; Quaade, Ulrich J; Vegge, Tejs

    2014-09-28

    Metal halide ammines have great potential as a future, high-density energy carrier in vehicles. The materials known so far, e.g. Mg(NH3)6Cl2 and Sr(NH3)8Cl2, are not suitable for automotive fuel cell applications, because the release of ammonia is a multi-step reaction requiring too much heat to be supplied, lowering the total efficiency. Here, we apply density functional theory (DFT) calculations to predict new mixed metal halide ammines with improved storage capacities and the ability to release the stored ammonia in one step, at temperatures suitable for system integration with polymer electrolyte membrane fuel cells (PEMFC). We use genetic algorithms (GAs) to search for materials containing up to three different metals (alkaline-earth, 3d and 4d) and two different halides (Cl, Br and I) - almost 27,000 combinations - and have identified novel mixtures with significantly improved storage capacities. The size of the search space and the chosen fitness function make it possible to verify that the found candidates are the best in the search space, showing that the GA implementation is well suited to this kind of computational materials design, requiring calculations on less than two percent of the candidates to identify the global optimum.
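
    A genetic algorithm over such a discrete composition space can be sketched as follows. The encoding (two metals plus one halide), the weights, and the fitness function are simplified stand-ins for the DFT-derived storage capacity and release temperature used in the paper:

```python
import random

METALS = ["Mg", "Ca", "Sr", "Mn", "Fe", "Co", "Ni", "Zn"]   # illustrative subset
HALIDES = ["Cl", "Br", "I"]

# Hypothetical fitness: lighter formula units score higher (a crude proxy for
# gravimetric ammonia density; the paper's fitness comes from DFT energies).
WEIGHT = {"Mg": 24, "Ca": 40, "Sr": 88, "Mn": 55, "Fe": 56, "Co": 59, "Ni": 59,
          "Zn": 65, "Cl": 35, "Br": 80, "I": 127}

def fitness(ind):
    m1, m2, hal = ind
    return -(WEIGHT[m1] + WEIGHT[m2] + 2 * WEIGHT[hal])

def evolve(pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [(rng.choice(METALS), rng.choice(METALS), rng.choice(HALIDES))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection: keep top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [a[0], b[1], rng.choice([a[2], b[2]])]   # crossover
            if rng.random() < 0.2:                           # per-slot mutation
                i = rng.randrange(3)
                child[i] = rng.choice(HALIDES if i == 2 else METALS)
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    The key point the paper exploits is that each generation only requires fitness evaluations (DFT calculations) for new candidates, so only a small fraction of the full search space is ever computed.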

  7. Comparison of simulated quenching algorithms for design of diffractive optical elements.

    PubMed

    Liu, J S; Caley, A J; Waddie, A J; Taghizadeh, M R

    2008-02-20

    We compare the performance of very fast simulated quenching; generalized simulated quenching, which unifies classical Boltzmann simulated quenching and Cauchy fast simulated quenching; and variable step size simulated quenching. The comparison is carried out by applying these algorithms to the design of diffractive optical elements for beam shaping of monochromatic, spatially incoherent light to a tightly focused image spot, whose central lobe should be smaller than the geometrical-optics limit. For generalized simulated quenching we choose values of visiting and acceptance shape parameters recommended by other investigators and use both a one-dimensional and a multidimensional Tsallis random number generator. We find that, under our test conditions, variable step size simulated quenching, which generates each parameter's new states based on the acceptance ratio instead of a certain theoretical probability distribution, produces the best results. Finally, we demonstrate experimentally a tightly focused image spot, with a central lobe 0.22-0.68 times the geometrical-optics limit and a relative sidelobe intensity 55%-60% that of the central maximum intensity.
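
    The variable step size variant that performed best adapts each parameter's move width from the observed acceptance ratio rather than a theoretical visiting distribution. A minimal sketch of that mechanism on a toy objective (the sphere function stands in for a diffractive-element merit function; all schedule constants are illustrative):

```python
import math
import random

def vssq(cost, x0, steps=400, target_accept=0.4, seed=0):
    # Variable-step-size simulated quenching (sketch): each parameter's step
    # width is grown or shrunk so the observed acceptance ratio tracks a
    # target, rather than drawing moves from a fixed theoretical distribution.
    rng = random.Random(seed)
    x = list(x0)
    fx = cost(x)
    best, fbest = list(x), fx
    step = [1.0] * len(x)
    accepted = [0] * len(x)
    tried = [0] * len(x)
    t = 1.0
    for k in range(steps):
        i = k % len(x)                          # cycle through parameters
        y = list(x)
        y[i] += rng.uniform(-step[i], step[i])
        fy = cost(y)
        tried[i] += 1
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            accepted[i] += 1
            if fx < fbest:
                best, fbest = list(x), fx
        if tried[i] % 20 == 0:                  # adapt step from acceptance ratio
            step[i] *= 1.2 if accepted[i] / tried[i] > target_accept else 0.8
        t *= 0.99                               # quenching schedule
    return best, fbest

sphere = lambda v: sum(u * u for u in v)        # toy stand-in for the merit function
best, fbest = vssq(sphere, [3.0, -4.0])
```

    Tying the step width to the acceptance ratio keeps moves productive as the quench proceeds: large early steps explore, and shrinking steps refine near the optimum.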

  8. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    PubMed Central

    Mandal, Saptarshi

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) the visual angle error of the eye trackers prevents exact eye fixation coordinates from being obtained. These issues were addressed by (1) developing dynamic areas of interest (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT), which controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations, where air traffic control specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance. PMID:27725830
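
    The AOI gap tolerance (AGT) idea reduces to padding (or shrinking) each dynamic AOI before the fixation hit test. A minimal sketch with rectangular AOIs; the coordinates and tolerance value are invented for illustration:

```python
def aoi_contains(rect, point, gap_tolerance=0.0):
    # Rectangular AOI hit test; a positive gap tolerance pads the AOI to
    # absorb eye-tracker visual-angle error, while a smaller value limits
    # overlap between neighboring AOIs (the AGT trade-off).
    (x0, y0, x1, y1), (px, py) = rect, point
    return (x0 - gap_tolerance <= px <= x1 + gap_tolerance and
            y0 - gap_tolerance <= py <= y1 + gap_tolerance)

# Invented example: an aircraft data block and a fixation 3 px to its right.
aircraft = (100, 200, 140, 220)
fixation = (143, 210)
```

    With zero tolerance the fixation misses the aircraft; with a 5 px tolerance it is attributed to it, which is precisely why the choice of AGT changes the analysis results.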

  9. Towards the optimal design of an uncemented acetabular component using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Ghosh, Rajesh; Pratihar, Dilip Kumar; Gupta, Sanjay

    2015-12-01

    Aseptic loosening of the acetabular component (hemispherical socket of the pelvic bone) has been mainly attributed to bone resorption and excessive generation of wear particle debris. The aim of this study was to determine optimal design parameters for the acetabular component that would minimize bone resorption and volumetric wear. Three-dimensional finite element models of intact and implanted pelvises were developed using data from computed tomography scans. A multi-objective optimization problem was formulated and solved using a genetic algorithm. A combination of suitable implant material and corresponding set of optimal thicknesses of the component was obtained from the Pareto-optimal front of solutions. The ultra-high-molecular-weight polyethylene (UHMWPE) component generated considerably greater volumetric wear but lower bone density loss compared to carbon-fibre reinforced polyetheretherketone (CFR-PEEK) and ceramic. CFR-PEEK was located in the range between ceramic and UHMWPE. Although ceramic appeared to be a viable alternative to cobalt-chromium-molybdenum alloy, CFR-PEEK seems to be the most promising alternative material.
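
    Selecting designs from a Pareto-optimal front means keeping only non-dominated candidates. A minimal sketch with two minimized objectives (bone density loss and volumetric wear); the numeric scores attached to each material are invented for illustration, not the study's results:

```python
def dominates(a, b):
    # a dominates b when it is no worse in every objective and strictly
    # better in at least one (both objectives are minimized here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Invented (bone density loss, volumetric wear) scores, NOT the study's data.
designs = {"UHMWPE": (1.0, 9.0), "CFR-PEEK": (3.0, 4.0),
           "ceramic": (6.0, 2.0), "CoCrMo": (7.0, 5.0)}
front = pareto_front(list(designs.values()))
```

    In this toy data the dominated alloy drops out, while the remaining materials trade one objective against the other, mirroring the wear-versus-bone-loss trade-off the abstract describes.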

  10. Design of an iterative auto-tuning algorithm for a fuzzy PID controller

    NASA Astrophysics Data System (ADS)

    Saeed, Bakhtiar I.; Mehrdadi, B.

    2012-05-01

    Since the first application of fuzzy logic in the field of control engineering, it has been extensively employed in controlling a wide range of applications. Human knowledge of controlling complex and non-linear processes can be incorporated into a controller in the form of linguistic terms. However, the lack of an analytical design procedure makes it difficult to auto-tune the controller parameters. A fuzzy logic controller has several parameters that can be adjusted, such as the membership functions, the rule base, and the scaling gains. Furthermore, it is not always easy to find the relation between the type of membership functions or rule base and the controller performance. This study proposes a new systematic auto-tuning algorithm to fine-tune fuzzy logic controller gains. A fuzzy PID controller is proposed and applied to several second-order systems. The relationship between the closed-loop response and the controller parameters is analysed to devise an auto-tuning method. The results show that the proposed method is highly effective and produces zero overshoot with an enhanced transient response. In addition, the robustness of the controller is investigated in the case of parameter changes, and the results show satisfactory performance.
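
    The iterative auto-tuning loop, simulate the closed loop, measure the step-response overshoot, and adjust gains until it is acceptable, can be sketched as below. The second-order plant, the crisp gain-update rule (standing in for the paper's fuzzy tuning), and all constants are assumptions for illustration:

```python
def simulate(kp, ki, kd, wn=2.0, zeta=0.3, dt=0.01, t_end=10.0):
    # Step response of y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u under PID control
    # (forward-Euler integration); returns the peak output reached.
    y = v = integ = prev_err = 0.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        a = wn * wn * (u - y) - 2.0 * zeta * wn * v
        v += a * dt
        y += v * dt
        peak = max(peak, y)
    return peak

def autotune(kp=4.0, ki=1.0, kd=0.1, max_overshoot=0.02, iters=40):
    # Crisp stand-in for a fuzzy tuning rule: while the step response
    # overshoots, reduce the proportional gain and raise the derivative gain.
    peak = simulate(kp, ki, kd)
    for _ in range(iters):
        if peak <= 1.0 + max_overshoot:
            break
        kp *= 0.9
        kd *= 1.1
        peak = simulate(kp, ki, kd)
    return kp, kd, peak

kp_t, kd_t, peak_t = autotune()
```

    A fuzzy tuner replaces the fixed 0.9/1.1 multipliers with rule-based adjustments whose magnitude depends on how far the measured response deviates from the target.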

  11. Combining Interactive Infrastructure Modeling and Evolutionary Algorithm Optimization for Sustainable Water Resources Design

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2013-12-01

    Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUI) are often used to evaluate infrastructure portfolios; these GUI based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Multiobjective evolutionary algorithm (MOEA) based optimization can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using a MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs and reveal how formulations should be modified to better capture stakeholders' preferences.

  12. Audibility of aliasing distortion in sawtooth signals and its implications for oscillator algorithm design.

    PubMed

    Lehtonen, Heidi-Maria; Pekonen, Jussi; Välimäki, Vesa

    2012-10-01

    This paper investigates the audibility threshold of aliasing in computer-generated sawtooth signals. Listening tests were conducted to find out how much the aliased frequency components below and above the fundamental must be attenuated for them to be inaudible. The tested tones comprised the fundamental frequencies 415, 932, 1480, 2093, 3136, and 3951 Hz, presented at 60-dB SPL and 44.1-kHz sampling rate. The results indicate that above the fundamental the aliased components must be attenuated 0, 19, 26, 27, 32, and 41 dB for the corresponding fundamental frequencies, and below the fundamental the attenuation of 0, 3, 6, 11, 12, and 11 dB, respectively, is sufficient. The results imply that the frequency-masking phenomenon affects the perception of aliasing and that the masking effect is more prominent above the fundamental than below it. The A-weighted noise-to-mask ratio is proposed as a suitable quality measure for sawtooth signals containing aliasing. It was shown that the bandlimited impulse train, the differentiated parabolic waveform, and the fourth-order polynomial bandlimited step function synthesis algorithms are perceptually alias-free up to 1, 2, and 4 kHz, respectively. General design rules for antialiasing sawtooth oscillators are derived based on the results and on knowledge of level-dependence of masking.
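
    An alias-free reference sawtooth for listening tests like these can be generated by additive synthesis, truncating the harmonic series at the Nyquist frequency; a naively sampled ramp, by contrast, folds every higher harmonic back into the audio band. A minimal sketch; the parameter values follow the study's lowest test frequency and sampling rate:

```python
import math

def bandlimited_sawtooth(f0, fs, n):
    # Additive synthesis: harmonic k has amplitude 1/k, and the series is
    # truncated at the Nyquist frequency so no aliased components appear.
    k_max = int((fs / 2.0) // f0)
    out = []
    for i in range(n):
        t = i / fs
        s = sum(math.sin(2.0 * math.pi * k * f0 * t) / k
                for k in range(1, k_max + 1))
        out.append((2.0 / math.pi) * s)
    return out

sig = bandlimited_sawtooth(415.0, 44100.0, 64)   # lowest fundamental in the study
```

    Oscillator algorithms such as the bandlimited impulse train approximate this ideal at a far lower cost; the paper's thresholds quantify how much residual aliasing such approximations may leave audible.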

  13. Efficient design method for cell allocation in hybrid CMOS/nanodevices using a cultural algorithm with chaotic behavior

    NASA Astrophysics Data System (ADS)

    Pan, Zhong-Liang; Chen, Ling; Zhang, Guang-Zhao

    2016-04-01

    The hybrid CMOS molecular (CMOL) circuit, which combines complementary metal-oxide-semiconductor (CMOS) components with nanoscale wires and switches, can exhibit significantly improved performance. In CMOL circuits, the nanodevices, which are called cells, should be placed appropriately and are connected by nanowires. The cells should be connected such that they follow the shortest path. This paper presents an efficient method of cell allocation in CMOL circuits with the hybrid CMOS/nanodevice structure; the method is based on a cultural algorithm with chaotic behavior. The optimal model of cell allocation is derived, and the coding of an individual representing a cell allocation is described. Then the cultural algorithm with chaotic behavior is designed to solve the optimal model. The cultural algorithm consists of a population space, a belief space, and a protocol that describes how knowledge is exchanged between the population and belief spaces. In this paper, the evolutionary processes of the population space employ a genetic algorithm in which three populations undergo parallel evolution. The evolutionary processes of the belief space use a chaotic ant colony algorithm. Extensive experiments on cell allocation in benchmark circuits showed that a low area usage can be obtained using the proposed method, and the computation time can be reduced greatly compared to that of a conventional genetic algorithm.
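
    The chaotic behavior in such hybrid algorithms is typically supplied by a chaotic map whose iterates replace uniform random draws when perturbing candidate solutions. The sketch below uses the logistic map; the choice of map and its parameters are assumptions for illustration, since the abstract does not specify them:

```python
def logistic_sequence(n, x0=0.3, r=4.0):
    # Logistic map x <- r*x*(1-x); at r = 4 the iterates behave chaotically in
    # [0, 1] and can stand in for uniform random draws when perturbing
    # candidate cell allocations.
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

seq = logistic_sequence(100)
```

    Such sequences are deterministic yet non-repeating, which helps a population escape local optima without sacrificing reproducibility of a run.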

  14. Development of an integrated CAD-FEA system for patient-specific design of spinal cages.

    PubMed

    Zhang, Mingzheng; Pu, Fang; Xu, Liqiang; Zhang, Linlin; Liang, Hang; Li, Deyu; Wang, Yu; Fan, Yubo

    2017-03-01

    Spinal cages are used to create a suitable mechanical environment for interbody fusion in cases of degenerative spinal instability. Due to individual variations in bone structures and pathological conditions, patient-specific cages can provide optimal biomechanical conditions for fusion, strengthening patient recovery. Finite element analysis (FEA) is a valuable tool in the biomechanical evaluation of patient-specific cage designs, but the time- and labor-intensive process of modeling limits its clinical application. In an effort to facilitate the design and analysis of patient-specific spinal cages, an integrated CAD-FEA system (CASCaDeS, comprehensive analytical spinal cage design system) was developed. This system produces a biomechanical-based patient-specific design of spinal cages and is capable of rapid implementation of finite element modeling. By comparison with commercial software, this system was validated and proven to be both accurate and efficient. CASCaDeS can be used to design patient-specific cages with a superior biomechanical performance to commercial spinal cages.

  15. From ergonomics to design specifications: contributions to the design of a processing machine in a tire company.

    PubMed

    Moraes, A S P; Arezes, P M; Vasconcelos, R

    2012-01-01

    The development of ergonomics recommendations, guidelines and standards is an attempt to promote the integration of ergonomics into industrial contexts. Such developments result from several sources and professionals and represent the effort that has been made to develop healthier and safer work environments. However, the availability of large amounts of data and documents regarding ergonomics does not guarantee their applicability. The main goal of this paper is to use a specific case to demonstrate how ergonomics criteria were developed in order to contribute to the design of workplaces. Based on the results obtained from research undertaken in a tire company, it was observed that ergonomics criteria should be presented as design specifications in order to be used by engineers and designers. In conclusion, it is observed that the multiple-constraint environment impeded the application of the ergonomics criteria. It was also observed that knowledge of technical design, acquaintance with ergonomic standards, the level of integration in the design team, and the ability to communicate with workers and other technical staff are of paramount importance in integrating ergonomics criteria into the design process.

  16. Specification and Design of Electrical Flight System Architectures with SysML

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L., Jr.; Jimenez, Alejandro

    2012-01-01

    Modern space flight systems are required to perform more complex functions than previous generations to support space missions. This demand is driving the trend to deploy more electronics to realize system functionality. The traditional approach for the specification, design, and deployment of electrical system architectures in space flight systems includes the use of informal definitions and descriptions that are often embedded within loosely coupled but highly interdependent design documents. Traditional methods become inefficient to cope with increasing system complexity, evolving requirements, and the ability to meet project budget and time constraints. Thus, there is a need for more rigorous methods to capture the relevant information about the electrical system architecture as the design evolves. In this work, we propose a model-centric approach to support the specification and design of electrical flight system architectures using the System Modeling Language (SysML). In our approach, we develop a domain specific language for specifying electrical system architectures, and we propose a design flow for the specification and design of electrical interfaces. Our approach is applied to a practical flight system.

  17. Neural signal processing and closed-loop control algorithm design for an implanted neural recording and stimulation system.

    PubMed

    Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N

    2015-08-01

    A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate to behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods, then the LFP time-frequency analysis and the features derived from the two spike sorting methods. The first spike sorting method is a novel dictionary learning algorithm constructed in a Compressed Sensing (CS) framework, with a joint prediction scheme to determine the class of neural spikes; the second is a modified OSort algorithm implemented in a distributed system optimized for power efficiency. Furthermore, sorted spikes and time-frequency analysis of LFP signals can be used to generate derived features (including cross-frequency coupling and spike-field coupling). We then show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current state as well as its historical values. 
The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state-of-the-art in closed-loop neural control while also meeting the challenges of system power constraints and concurrent development with ongoing scientific research designed
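
    Before either spike sorting method runs, spikes must first be detected in the raw trace; a common front end is amplitude thresholding with a refractory window. The sketch below is a generic stand-in, not the paper's implementation, and the threshold and refractory length are invented:

```python
def detect_spikes(signal, threshold, refractory=30):
    # Threshold-crossing detection with a refractory window: once a spike is
    # flagged, further crossings within `refractory` samples are ignored, so
    # one spike waveform is not counted multiple times.
    spikes, last = [], -refractory
    for i, v in enumerate(signal):
        if v > threshold and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes

# Synthetic trace: flat baseline with two spikes at samples 50 and 200.
trace = [0.1] * 300
trace[50] = trace[200] = 5.0
```

    The detected spike times (and surrounding waveform snippets) are what a sorter like OSort then clusters online into putative single units.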

  18. QXP: powerful, rapid computer algorithms for structure-based drug design.

    PubMed

    McMartin, C; Bohacek, R S

    1997-07-01

    New methods for docking, template fitting and building pseudo-receptors are described. Full conformational searches are carried out for flexible cyclic and acyclic molecules. QXP (quick explore) search algorithms are derived from the method of Monte Carlo perturbation with energy minimization in Cartesian space. An additional fast search step is introduced between the initial perturbation and energy minimization. The fast search produces approximate low-energy structures, which are likely to minimize to a low energy. For template fitting, QXP uses a superposition force field which automatically assigns short-range attractive forces to similar atoms in different molecules. The docking algorithms were evaluated using X-ray data for 12 protein-ligand complexes. The ligands had up to 24 rotatable bonds and ranged from highly polar to mostly nonpolar. Docking searches of the randomly disordered ligands gave rms differences between the lowest energy docked structure and the energy-minimized X-ray structure, of less than 0.76 A for 10 of the ligands. For all the ligands, the rms difference between the energy-minimized X-ray structure and the closest docked structure was less than 0.4 A, when parts of one of the molecules which are in the solvent were excluded from the rms calculation. Template fitting was tested using four ACE inhibitors. Three ACE templates have been previously published. A single run using QXP generated a series of templates which contained examples of each of the three. A pseudo-receptor, complementary to an ACE template, was built out of small molecules, such as pyrrole, cyclopentanone and propane. When individually energy minimized in the pseudo-receptor, each of the four ACE inhibitors moved with an rms of less than 0.25 A. After random perturbation, the inhibitors were docked into the pseudo-receptor. Each lowest energy docked structure matched the energy-minimized geometry with an rms of less than 0.08 A. 
Thus, the pseudo-receptor shows steric and

  19. Designing, Visualizing, and Discussing Algorithms within a CS 1 Studio Experience: An Empirical Study

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Brown, Jonathan L.

    2008-01-01

    Within the context of an introductory CS1 unit on algorithmic problem-solving, we are exploring the pedagogical value of a novel active learning activity--the "studio experience"--that actively engages learners with algorithm visualization technology. In a studio experience, student pairs are tasked with (a) developing a solution to an algorithm…

  20. Design of a Four-Element, Hollow-Cube Corner Retroreflector for Satellites by use of a Genetic Algorithm.

    PubMed

    Minato, A; Sugimoto, N

    1998-01-20

    A four-element retroreflector was designed for satellite laser ranging and Earth-satellite-Earth laser long-path absorption measurement of the atmosphere. The retroreflector consists of four symmetrically located corner retroreflectors. Each retroreflector element has curved mirrors and tuned dihedral angles to correct velocity aberrations. A genetic algorithm was employed to optimize dihedral angles of each element and the directions of the four elements. The optimized four-element retroreflector has high reflectance with a reasonably broad angular coverage. It is also shown that the genetic algorithm is effective for optimizing optics with many parameters.

  1. Custom-Designed Molecular Scissors for Site-Specific Manipulation of the Plant and Mammalian Genomes

    NASA Astrophysics Data System (ADS)

    Kandavelou, Karthikeyan; Chandrasegaran, Srinivasan

    Zinc finger nucleases (ZFNs) are custom-designed molecular scissors, engineered to cut at specific DNA sequences. ZFNs combine zinc finger proteins (ZFPs) with the nonspecific cleavage domain of the FokI restriction enzyme. The DNA-binding specificity of ZFNs can be easily altered experimentally. This easy manipulation of the ZFN recognition specificity enables one to deliver a targeted double-strand break (DSB) to a genome. The targeted DSB stimulates local gene targeting by several orders of magnitude at that specific cut site via homologous recombination (HR). Thus, ZFNs have become an important experimental tool to make site-specific and permanent alterations to the genomes of not only plants and mammals but also many other organisms. Engineering of custom ZFNs involves many steps. The first step is to identify a ZFN site at or near the chosen chromosomal target within the genome to which the ZFNs will bind and cut. The second step is to design and/or select various ZFP combinations that will bind to the chosen target site with high specificity and affinity. The DNA coding sequences for the designed ZFPs are then assembled by polymerase chain reaction (PCR) using oligonucleotides. The third step is to fuse the ZFP constructs to the FokI cleavage domain. The ZFNs are then expressed as proteins using the rabbit reticulocyte in vitro transcription/translation system, and the protein products are assayed for their DNA cleavage specificity.

  2. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories. Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and 'dynamic variables' (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  3. Computationally designed high specificity inhibitors delineate the roles of BCL2 family proteins in cancer

    PubMed Central

    Berger, Stephanie; Procko, Erik; Margineantu, Daciana; Lee, Erinna F; Shen, Betty W; Zelter, Alex; Silva, Daniel-Adriano; Chawla, Kusum; Herold, Marco J; Garnier, Jean-Marc; Johnson, Richard; MacCoss, Michael J; Lessene, Guillaume; Davis, Trisha N; Stayton, Patrick S; Stoddard, Barry L; Fairlie, W Douglas; Hockenbery, David M; Baker, David

    2016-01-01

    Many cancers overexpress one or more of the six human pro-survival BCL2 family proteins to evade apoptosis. To determine which BCL2 protein or proteins block apoptosis in different cancers, we computationally designed three-helix bundle protein inhibitors specific for each BCL2 pro-survival protein. Following in vitro optimization, each inhibitor binds its target with high picomolar to low nanomolar affinity and at least 300-fold specificity. Expression of the designed inhibitors in human cancer cell lines revealed unique dependencies on BCL2 proteins for survival which could not be inferred from other BCL2 profiling methods. Our results show that designed inhibitors can be generated for each member of a closely-knit protein family to probe the importance of specific protein-protein interactions in complex biological processes. DOI: http://dx.doi.org/10.7554/eLife.20352.001 PMID:27805565

  4. Computationally designed high specificity inhibitors delineate the roles of BCL2 family proteins in cancer.

    PubMed

    Berger, Stephanie; Procko, Erik; Margineantu, Daciana; Lee, Erinna F; Shen, Betty W; Zelter, Alex; Silva, Daniel-Adriano; Chawla, Kusum; Herold, Marco J; Garnier, Jean-Marc; Johnson, Richard; MacCoss, Michael J; Lessene, Guillaume; Davis, Trisha N; Stayton, Patrick S; Stoddard, Barry L; Fairlie, W Douglas; Hockenbery, David M; Baker, David

    2016-11-02

    Many cancers overexpress one or more of the six human pro-survival BCL2 family proteins to evade apoptosis. To determine which BCL2 protein or proteins block apoptosis in different cancers, we computationally designed three-helix bundle protein inhibitors specific for each BCL2 pro-survival protein. Following in vitro optimization, each inhibitor binds its target with high picomolar to low nanomolar affinity and at least 300-fold specificity. Expression of the designed inhibitors in human cancer cell lines revealed unique dependencies on BCL2 proteins for survival which could not be inferred from other BCL2 profiling methods. Our results show that designed inhibitors can be generated for each member of a closely-knit protein family to probe the importance of specific protein-protein interactions in complex biological processes.

  5. Structural, kinetic, and thermodynamic studies of specificity designed HIV-1 protease

    SciTech Connect

    Alvizo, Oscar; Mittal, Seema; Mayo, Stephen L.; Schiffer, Celia A.

    2012-10-23

    HIV-1 protease recognizes and cleaves more than 12 different substrates leading to viral maturation. While these substrates share no conserved motif, they are specifically selected for and cleaved by protease during the viral life cycle. Drug-resistant mutations evolve within the protease that compromise inhibitor binding but allow the continued recognition of all these substrates. While the substrate envelope defines a general shape for substrate recognition, successfully predicting the determinants of substrate binding specificity would provide additional insights into the mechanism of altered molecular recognition in resistant proteases. We designed a variant of HIV protease with altered specificity using positive computational design methods and validated the design using X-ray crystallography and enzyme biochemistry. The engineered variant, Pr3 (A28S/D30F/G48R), was designed to preferentially bind to one of HIV protease's three natural substrates tested: RT-RH over p2-NC and CA-p2. In kinetic assays, RT-RH binding specificity for Pr3 increased threefold compared to the wild-type (WT), which was further confirmed by isothermal titration calorimetry. Crystal structures of WT protease and the designed variant in complex with RT-RH, CA-p2, and p2-NC were determined. Structural analysis of the designed complexes revealed that one of the engineered substitutions (G48R) potentially stabilized heterogeneous flap conformations, thereby facilitating alternate modes of substrate binding. Our results demonstrate that while substrate specificity could be engineered in HIV protease, the structural pliability of protease restricted the propagation of interactions as predicted. These results offer new insights into the plasticity and structural determinants of substrate binding specificity of the HIV-1 protease.

  6. Using Space Weather Variability in Evaluating the Radiation Environment Design Specifications for NASA's Constellation Program

    NASA Technical Reports Server (NTRS)

    Coffey, Victoria N.; Blackwell, William C.; Minow, Joseph I.; Bruce, Margaret B.; Howard, James W.

    2007-01-01

    NASA's Constellation program, initiated to fulfill the Vision for Space Exploration, will create a new generation of vehicles for servicing low Earth orbit, the Moon, and beyond. Space radiation specifications for space system hardware are necessarily conservative to assure system robustness for a wide range of space environments. Spectral models of solar particle events and trapped radiation belt environments are used to develop the design requirements for estimating total ionizing radiation dose, displacement damage, and single event effects for Constellation hardware. We first describe the rationale for the spectra chosen to establish the total dose and single event design environmental specifications for Constellation systems. We then compare the variability of the space environment to the spectral design models to evaluate their applicability as conservative design environments and their potential vulnerabilities to extreme space weather events.

  7. Computational design of a red fluorophore ligase for site-specific protein labeling in living cells

    SciTech Connect

    Liu, Daniel S.; Nivon, Lucas G.; Richter, Florian; Goldman, Peter J.; Deerinck, Thomas J.; Yao, Jennifer Z.; Richardson, Douglas; Phipps, William S.; Ye, Anne Z.; Ellisman, Mark H.; Drennan, Catherine L.; Baker, David; Ting, Alice Y.

    2014-10-13

    In this study, chemical fluorophores offer tremendous size and photophysical advantages over fluorescent proteins but are much more challenging to target to specific cellular proteins. Here, we used Rosetta-based computation to design a fluorophore ligase that accepts the red dye resorufin, starting from Escherichia coli lipoic acid ligase. X-ray crystallography showed that the design closely matched the experimental structure. Resorufin ligase catalyzed the site-specific and covalent attachment of resorufin to various cellular proteins genetically fused to a 13-aa recognition peptide in multiple mammalian cell lines and in primary cultured neurons. We used resorufin ligase to perform superresolution imaging of the intermediate filament protein vimentin by stimulated emission depletion and electron microscopies. This work illustrates the power of Rosetta for major redesign of enzyme specificity and introduces a tool for minimally invasive, highly specific imaging of cellular proteins by both conventional and superresolution microscopies.

  8. Computational design of a red fluorophore ligase for site-specific protein labeling in living cells

    DOE PAGES

    Liu, Daniel S.; Nivon, Lucas G.; Richter, Florian; ...

    2014-10-13

    In this study, chemical fluorophores offer tremendous size and photophysical advantages over fluorescent proteins but are much more challenging to target to specific cellular proteins. Here, we used Rosetta-based computation to design a fluorophore ligase that accepts the red dye resorufin, starting from Escherichia coli lipoic acid ligase. X-ray crystallography showed that the design closely matched the experimental structure. Resorufin ligase catalyzed the site-specific and covalent attachment of resorufin to various cellular proteins genetically fused to a 13-aa recognition peptide in multiple mammalian cell lines and in primary cultured neurons. We used resorufin ligase to perform superresolution imaging of the intermediate filament protein vimentin by stimulated emission depletion and electron microscopies. This work illustrates the power of Rosetta for major redesign of enzyme specificity and introduces a tool for minimally invasive, highly specific imaging of cellular proteins by both conventional and superresolution microscopies.

  9. Minimization of cogging torque in permanent magnet motors by teeth pairing and magnet arc design using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Eom, Jae-Boo; Hwang, Sang-Moon; Kim, Tae-Jong; Jeong, Weui-Bong; Kang, Beom-Soo

    2001-05-01

    Cogging torque is often a principal source of vibration and acoustic noise in high precision spindle motor applications. In this paper, cogging torque is analytically calculated using the energy method with Fourier series expansion. It shows that cogging torque is effectively minimized by controlling the airgap permeance function with teeth pairing design, and by controlling the flux density function with magnet arc design. As an optimization technique, a genetic algorithm is applied to handle trade-off effects of design parameters. Results show that the proposed method can reduce the cogging torque effectively.

  10. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying the use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
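The two limiting conversions reviewed in the abstract can be written compactly; the following is a sketch of the standard cavity-theory relations in the abstract's notation, not a formula taken from the paper itself:

```latex
% Small cavity: secondary electrons generated in the surrounding medium
% deposit the dose, so the conversion uses the water-to-medium
% mass-collision-stopping-power ratio:
D_{w,\mathrm{med}} = D_{\mathrm{med}}
    \left(\frac{\bar{S}_{\mathrm{col}}}{\rho}\right)^{w}_{\mathrm{med}}

% Large cavity: photon interactions inside the cavity dominate, so the
% conversion uses the mass-energy-absorption-coefficient ratio:
D_{w,\mathrm{med}} = D_{\mathrm{med}}
    \left(\frac{\bar{\mu}_{\mathrm{en}}}{\rho}\right)^{w}_{\mathrm{med}}
```

Burlin theory interpolates between these two limits for intermediate cavity sizes.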

  11. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy.

    PubMed

    Tedgren, Åsa Carlsson; Carlsson, Gudrun Alm

    2013-04-21

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from (125)I, (169)Yb and (192)Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying the use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.

  12. Designing patient-specific 3D printed craniofacial implants using a novel topology optimization method.

    PubMed

    Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H

    2016-07-01

    Large craniofacial defects require efficient bone replacements which should not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve the task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases that ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect the viability of the design. Further, once the design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition that mimics masticatory activity. The full-field strain results from 3D image correlation and the finite element analysis imply that the solution can survive a maximum masticatory load of 120 lb. The designs also have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework to design the bone replacement shapes would give surgeons new alternatives for rather complicated mid-face reconstruction.

  13. Method for Predicting the Energy Characteristics of Li-Ion Cells Designed for High Specific Energy

    NASA Technical Reports Server (NTRS)

    Bennett, William R.

    2012-01-01

    Novel electrode materials with increased specific capacity and voltage performance are critical to the NASA goals for developing Li-ion batteries with increased specific energy and energy density. Although performance metrics of the individual electrodes are critically important, a fundamental understanding of the interactions of electrodes in a full cell is essential to achieving the desired performance, and for establishing meaningful goals for electrode performance in the first place. This paper presents design considerations for matching positive and negative electrodes in a viable design. Methods for predicting cell-level performance, based on laboratory data for individual electrodes, are presented and discussed.

  14. Mod-5A wind turbine generator program design report. Volume 4: Drawings and specifications, book 3

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The design, development and analysis of the 7.3 MW MOD-5A wind turbine generator are documented. This volume contains the drawings and specifications developed for the final design. The volume is divided into 5 books, of which this is the third, containing drawings 47A380074 through 47A380126. A full parts breakdown listing is provided, as well as a where-used list.

  15. 48 CFR 52.246-19 - Warranty of Systems and Equipment under Performance Specifications or Design Criteria.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Equipment under Performance Specifications or Design Criteria. 52.246-19 Section 52.246-19 Federal... under Performance Specifications or Design Criteria. As prescribed in 46.710(c)(1), the contracting... Specifications or Design Criteria (MAY 2001) (a) Definitions. As used in this clause— Acceptance means the act...

  16. Design of a broadband electrical impedance matching network for piezoelectric ultrasound transducers based on a genetic algorithm.

    PubMed

    An, Jianfei; Song, Kezhu; Zhang, Shuangxi; Yang, Junfeng; Cao, Ping

    2014-04-16

    An improved method based on a genetic algorithm (GA) is developed to design a broadband electrical impedance matching network for piezoelectric ultrasound transducers. A key feature of the new method is that it optimizes both the topology of the matching network and the values of its components. The main idea of this method is to find the optimal matching network in a set of candidate topologies. Some successful experiences of classical algorithms are absorbed to limit the size of the set of candidate topologies and greatly simplify the calculation process. A binary-coded GA and a real-coded GA are used for topology optimization and component optimization, respectively. Some calculation strategies, such as an elitist strategy and the clearing niche method, are adopted to make sure that the algorithm converges to the global optimum. Simulation and experimental results prove that matching networks with better performance can be achieved by this improved method.
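The component-optimization half of the approach described above can be sketched with a minimal real-coded GA with tournament selection and an elitist strategy; the fitness function, bounds, and target values below are illustrative stand-ins, not the paper's network model:

```python
import random

random.seed(0)  # deterministic for illustration

def evolve(fitness, bounds, pop_size=40, generations=60,
           crossover_rate=0.9, mutation_rate=0.2, elite=2):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and an elitist strategy."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness)
        nxt = [list(ind) for ind in ranked[:elite]]       # elitism: keep the best
        while len(nxt) < pop_size:
            a = min(random.sample(pop, 3), key=fitness)   # tournament selection
            b = min(random.sample(pop, 3), key=fitness)
            child = [(x + y) / 2 if random.random() < crossover_rate else x
                     for x, y in zip(a, b)]               # blend crossover
            child = [min(max(x + random.gauss(0, 0.1 * (hi - lo)), lo), hi)
                     if random.random() < mutation_rate else x
                     for x, (lo, hi) in zip(child, bounds)]  # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Stand-in fitness: distance of a hypothetical (L, C) pair from a made-up
# target; a real design would evaluate the network's input impedance over band.
best = evolve(lambda v: (v[0] - 3.3) ** 2 + (v[1] - 0.47) ** 2,
              bounds=[(0.0, 10.0), (0.0, 1.0)])
```

A binary-coded variant of the same loop, operating on bit strings encoding the presence or absence of branches, would handle the topology half.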

  17. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to justify their statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
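The simulated-annealing search described above can be sketched in a few lines; the paper's regression-model objective is not reproduced here, so the two-parameter objective below is a stand-in with local optima:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Basic simulated annealing: accept worse moves with probability
    exp(-delta/T); the temperature decays geometrically each iteration."""
    random.seed(1)  # deterministic for illustration
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = objective(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Stand-in objective: a quadratic bowl plus a ripple term creating local optima,
# mimicking a two-recursion-parameter tuning problem.
obj = lambda p: (p[0] - 0.6) ** 2 + (p[1] - 0.3) ** 2 + 0.05 * math.sin(20 * p[0])
best, fbest = simulated_annealing(obj, [0.0, 0.0])
```

The geometric cooling schedule lets the search escape the ripple's local minima early and settle into the global basin as the temperature drops.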

  18. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    PubMed

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method using the best objective functions. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR systems in medical image retrieval applications are used to assist physicians in clinical decision support and in research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure. Similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall, which are found to be 96% and 58%, respectively, for the CBIR system using HSA-based Otsu MLT segmentation. The CBIR systems developed using the HSA-based Otsu MLT and conventional Otsu MLT methods are compared. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening.
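The similarity-matching and precision/recall steps described above can be sketched directly; the feature vectors and image names below are toy values, not HSA/Otsu-derived features from the paper:

```python
import math

def retrieve(query, database, k=3):
    """Rank database images by Euclidean distance between feature vectors
    and return the names of the k closest matches."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(database.items(), key=lambda kv: dist(query, kv[1]))
    return [name for name, _ in ranked[:k]]

def precision_recall(retrieved, relevant):
    """Precision = relevant hits / retrieved; recall = relevant hits / relevant."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Toy 2-D feature vectors standing in for statistical/textural/structural features.
db = {"img1": [0.9, 0.1], "img2": [0.8, 0.2],
      "img3": [0.1, 0.9], "img4": [0.85, 0.15]}
top = retrieve([0.88, 0.12], db, k=2)
p, r = precision_recall(top, relevant={"img1", "img2", "img4"})
```

With three relevant images in the database and only two retrieved, precision can be perfect while recall stays below 1, which mirrors the 96%/58% asymmetry reported in the abstract.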

  19. Design of Optimal Treatments for Neuromusculoskeletal Disorders using Patient-Specific Multibody Dynamic Models

    PubMed Central

    Fregly, Benjamin J.

    2011-01-01

    Disorders of the human neuromusculoskeletal system such as osteoarthritis, stroke, cerebral palsy, and paraplegia significantly affect mobility and result in a decreased quality of life. Surgical and rehabilitation treatment planning for these disorders is based primarily on static anatomic measurements and dynamic functional measurements filtered through clinical experience. While this subjective treatment planning approach works well in many cases, it does not predict accurate functional outcome in many others. This paper presents a vision for how patient-specific multibody dynamic models can serve as the foundation for an objective treatment planning approach that identifies optimal treatments and treatment parameters on an individual patient basis. First, a computational paradigm is presented for constructing patient-specific multibody dynamic models. This paradigm involves a combination of patient-specific skeletal models, muscle-tendon models, neural control models, and articular contact models, with the complexity of the complete model being dictated by the requirements of the clinical problem being addressed. Next, three clinical applications are presented to illustrate how such models could be used in the treatment design process. One application involves the design of patient-specific gait modification strategies for knee osteoarthritis rehabilitation, a second involves the selection of optimal patient-specific surgical parameters for a particular knee osteoarthritis surgery, and the third involves the design of patient-specific muscle stimulation patterns for stroke rehabilitation. The paper concludes by discussing important challenges that need to be overcome to turn this vision into reality. PMID:21785529

  20. Design and optimization of pulsed Chemical Exchange Saturation Transfer MRI using a multiobjective genetic algorithm.

    PubMed

    Yoshimaru, Eriko S; Randtke, Edward A; Pagel, Mark D; Cárdenas-Rodríguez, Julio

    2016-02-01

    Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and the results were compared to continuous wave saturation and a Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed best, followed by the parameters and shapes optimized by the genetic algorithm, and then by the Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulsed CEST parameters and that the results are translatable to clinical scanners.