Sample records for reverse engineering algorithm

  1. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    PubMed

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.
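
    The decoupling idea described above can be sketched in a few lines: estimate each state variable's derivative from the time series, then fit every right-hand side as a separate algebraic regression. The toy system, the candidate term library, and the plain least-squares fit below are illustrative assumptions, standing in for the paper's basal-ganglia model and its linear genetic programming search.

    ```python
    # Hedged sketch of the "decouple, then fit per equation" idea: a linear toy
    # system and least squares stand in for the paper's model and GP search.
    import numpy as np

    A_true = np.array([[-0.5, 0.3],
                       [ 0.4, -0.6]])      # toy 2-node network, not the BG model

    # Simulate a time series with a simple Euler scheme.
    dt, steps = 0.01, 2000
    X = np.zeros((steps, 2))
    X[0] = [1.5, -1.0]
    for t in range(steps - 1):
        X[t + 1] = X[t] + dt * (A_true @ X[t])

    # Estimate derivatives by central differences (interior points only).
    dX = (X[2:] - X[:-2]) / (2 * dt)
    Xm = X[1:-1]

    # Candidate term library: constant, linear, and one redundant product term.
    library = np.column_stack([np.ones(len(Xm)), Xm[:, 0], Xm[:, 1], Xm[:, 0] * Xm[:, 1]])
    names = ["1", "x1", "x2", "x1*x2"]

    # The decoupling step: each equation is fitted independently of the others.
    for i in range(2):
        coef, *_ = np.linalg.lstsq(library, dX[:, i], rcond=None)
        terms = " + ".join(f"{c:+.2f}*{n}" for c, n in zip(coef, names) if abs(c) > 0.05)
        print(f"dx{i + 1}/dt ~ {terms}")
    ```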

  2. A parallel implementation of the network identification by multiple regression (NIR) algorithm to reverse-engineer regulatory gene networks.

    PubMed

    Gregoretti, Francesco; Belcastro, Vincenzo; di Bernardo, Diego; Oliva, Gennaro

    2010-04-21

    The reverse engineering of gene regulatory networks using gene expression profile data has become crucial to gain novel biological knowledge. Large amounts of data that need to be analyzed are currently being produced due to advances in microarray technologies. Using current reverse engineering algorithms to analyze large data sets can be very computationally intensive. These emerging computational requirements can be met using parallel computing techniques. It has been shown that the Network Identification by multiple Regression (NIR) algorithm performs better than other ready-to-use reverse engineering software. However, it cannot be used with large networks with thousands of nodes--as is the case in biological networks--due to its high time and space complexity. In this work we overcome this limitation by designing and developing a parallel version of the NIR algorithm. The new implementation of the algorithm reaches a very good accuracy even for large gene networks, improving our understanding of gene regulatory networks, which is crucial for a wide range of biomedical applications.
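
    A hedged sketch of the regression step NIR builds on: at steady state the perturbation model gives 0 = Ax + p, so each row of A can be estimated by an independent multiple regression over the experiments, and it is this per-row independence that a parallel implementation can distribute across processors. The network size, perturbation matrix, and noiseless data below are invented for illustration.

    ```python
    # NIR-style recovery sketch: 0 = A x + p at steady state, so each row A_i
    # is a separate multiple regression over experiments (embarrassingly parallel).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 10, 40                      # genes, perturbation experiments

    # Random stable ground-truth network A and known perturbations P (n x m).
    A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
    P = rng.standard_normal((n, m))

    # Steady-state expression for every experiment: x = -A^{-1} p.
    X = -np.linalg.solve(A, P)

    # Recover one row at a time: A_i solves X^T a = -p_i.
    A_hat = np.zeros_like(A)
    for i in range(n):                 # each iteration is independent of the others
        A_hat[i], *_ = np.linalg.lstsq(X.T, -P[i], rcond=None)

    print("max row-wise error:", np.max(np.abs(A_hat - A)))
    ```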

  3. Reverse engineering of aircraft wing data using a partial differential equation surface model

    NASA Astrophysics Data System (ADS)

    Huband, Jacalyn Mann

    Reverse engineering is a multi-step process used in industry to determine a production representation of an existing physical object. This representation is in the form of mathematical equations that are compatible with computer-aided design and computer-aided manufacturing (CAD/CAM) equipment. The four basic steps to the reverse engineering process are data acquisition, data separation, surface or curve fitting, and CAD/CAM production. The surface fitting step determines the design representation of the object, and thus is critical to the success or failure of the reverse engineering process. Although surface fitting methods described in the literature are used to model a variety of surfaces, they are not suitable for reversing aircraft wings. In this dissertation, we develop and demonstrate a new strategy for reversing a mathematical representation of an aircraft wing. The basis of our strategy is to take an aircraft design model and determine if an inverse model can be derived. A candidate design model for this research is the partial differential equation (PDE) surface model, proposed by Bloor and Wilson and used in the Rapid Airplane Parameter Input Design (RAPID) tool at the NASA-LaRC Geolab. There are several basic mathematical problems involved in reversing the PDE surface model: (i) deriving a computational approximation of the surface function; (ii) determining a radial parametrization of the wing; (iii) choosing mathematical models or classes of functions for representation of the boundary functions; (iv) fitting the boundary data points by the chosen boundary functions; and (v) simultaneously solving for the axial parameterization and the derivative boundary functions. The study of the techniques to solve the above mathematical problems has culminated in a reverse PDE surface model and two reverse PDE surface algorithms. One reverse PDE surface algorithm recovers engineering design parameters for the RAPID tool from aircraft wing data and the other generates a PDE surface model with spline boundary functions from an arbitrary set of grid points. Our numerical tests show that the reverse PDE surface model and the reverse PDE surface algorithms can be used for the reverse engineering of aircraft wing data.

  4. Reverse engineering time discrete finite dynamical systems: a feasible undertaking?

    PubMed

    Delgado-Eckert, Edgar

    2009-01-01

    With the advent of high-throughput profiling methods, interest in reverse engineering the structure and dynamics of biochemical networks is high. Recently an algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is a top-down approach using time discrete dynamical systems. One of its key steps includes the choice of a term order, a technicality imposed by the use of Gröbner-bases calculations. The aim of this paper is to identify minimal requirements on data sets to be used with this algorithm and to characterize optimal data sets. We found minimal requirements on a data set based on how many terms the functions to be reverse engineered display. Furthermore, we identified optimal data sets, which we characterized using a geometric property called "general position". Moreover, we developed a constructive method to generate optimal data sets, provided a codimensional condition is fulfilled. In addition, we present a generalization of their algorithm that does not depend on the choice of a term order. For this method we derived a formula for the probability of finding the correct model, provided the data set used is optimal. We analyzed the asymptotic behavior of the probability formula for a growing number of variables n (i.e. interacting chemicals). Unfortunately, this formula converges to zero rapidly as n grows. Therefore, even if an optimal data set is used and the restrictions in using term orders are overcome, the reverse engineering problem remains unfeasible, unless prodigious amounts of data are available. Such large data sets are experimentally impossible to generate with today's technologies.

  5. A gene network simulator to assess reverse engineering algorithms.

    PubMed

    Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2009-03-01

    In the context of reverse engineering of biological networks, simulators are helpful to test and compare the accuracy of different reverse-engineering approaches in a variety of experimental conditions. A novel gene-network simulator is presented that resembles some of the main features of transcriptional regulatory networks related to topology, interaction among regulators of transcription, and expression dynamics. The simulator generates network topology according to the current knowledge of biological network organization, including scale-free distribution of the connectivity and clustering coefficient independent of the number of nodes in the network. It uses fuzzy logic to represent interactions among the regulators of each gene, integrated with differential equations to generate continuous data, comparable to real data for variety and dynamic complexity. Finally, the simulator accounts for saturation in the response to regulation and transcription activation thresholds and shows robustness to perturbations. It therefore provides a reliable and versatile test bed for reverse engineering algorithms applied to microarray data. Since the simulator describes regulatory interactions and expression dynamics as two distinct, although interconnected aspects of regulation, it can also be used to test reverse engineering approaches that use both microarray and protein-protein interaction data in the process of learning. A first software release is available at http://www.dei.unipd.it/~dicamill/software/netsim as an R programming language package.
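
    The scale-free topology requirement can be met with a standard preferential-attachment construction; the sketch below is a generic version of that idea, not the simulator's own generator, and the gene counts and edge parameters are arbitrary.

    ```python
    # Generic preferential-attachment sketch producing a scale-free degree
    # distribution; this is not the NetSim generator itself.
    import random
    from collections import Counter

    def scale_free_edges(n_genes, edges_per_new_gene=2, seed=42):
        random.seed(seed)
        targets = [0, 1]                      # seed regulators
        edges = [(1, 0)]
        for g in range(2, n_genes):
            # Regulators are drawn with probability proportional to how often
            # they already appear as edge endpoints (preferential attachment).
            chosen = set()
            while len(chosen) < edges_per_new_gene:
                chosen.add(random.choice(targets))
            for reg in chosen:
                edges.append((reg, g))        # reg regulates the new gene g
                targets.extend([reg, g])
        return edges

    edges = scale_free_edges(500)
    out_degree = Counter(src for src, _ in edges)
    print("top hub out-degrees:", sorted(out_degree.values(), reverse=True)[:5])
    ```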

  6. GRASP/Ada: Graphical Representations of Algorithms, Structures, and Processes for Ada. The development of a program analysis environment for Ada: Reverse engineering tools for Ada, task 2, phase 3

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1991-01-01

    The main objective is the investigation, formulation, and generation of graphical representations of algorithms, structures, and processes for Ada (GRASP/Ada). The presented task, in which various graphical representations that can be extracted or generated from source code are described and categorized, is focused on reverse engineering. The following subject areas are covered: the system model; control structure diagram generator; object oriented design diagram generator; user interface; and the GRASP library.

  7. A Survey of Statistical Models for Reverse Engineering Gene Regulatory Networks

    PubMed Central

    Huang, Yufei; Tienda-Luna, Isabel M.; Wang, Yufeng

    2009-01-01

    Statistical models for reverse engineering gene regulatory networks are surveyed in this article. To provide readers with a system-level view of the modeling issues in this research, a graphical modeling framework is proposed. This framework serves as the scaffolding on which the review of different models can be systematically assembled. Based on the framework, we review many existing models for many aspects of gene regulation; the pros and cons of each model are discussed. In addition, network inference algorithms are also surveyed under the graphical modeling framework by the categories of point solutions and probabilistic solutions and the connections and differences among the algorithms are provided. This survey has the potential to elucidate the development and future of reverse engineering GRNs and bring statistical signal processing closer to the core of this research. PMID:20046885

  8. GRASP/Ada (Graphical Representations of Algorithms, Structures, and Processes for Ada): The development of a program analysis environment for Ada. Reverse engineering tools for Ada, task 1, phase 2

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1990-01-01

    The study, formulation, and generation of structures for Ada (GRASP/Ada) are discussed in this second phase report of a three phase effort. Various graphical representations that can be extracted or generated from source code are described and categorized with focus on reverse engineering. The overall goal is to provide the foundation for a CASE (computer-aided software engineering) environment in which reverse engineering and forward engineering (development) are tightly coupled. Emphasis is on a subset of architectural diagrams that can be generated automatically from source code with the control structure diagram (CSD) included for completeness.

  9. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    PubMed Central

    2010-01-01

    Background: The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results: Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it converges much faster than pLSA across all optimisation conditions tested. Conclusions: Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
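
    The optimiser underneath such approaches is an evolution strategy; below is a minimal single-population (mu, lambda) sketch fitting two parameters of an invented decay model to synthetic data. The island partitioning and asynchronous migration that give piES its speed-up are deliberately omitted.

    ```python
    # Minimal (mu, lambda) evolution-strategy sketch; island/asynchronous
    # machinery omitted, model and data invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)

    def residual_error(params, data_x, data_y):
        a, b = params
        return np.mean((a * np.exp(-b * data_x) - data_y) ** 2)

    # Synthetic "expression" decay data with known parameters (2.0, 0.5).
    xs = np.linspace(0, 10, 50)
    ys = 2.0 * np.exp(-0.5 * xs) + 0.01 * rng.standard_normal(xs.size)

    mu, lam, sigma = 5, 30, 0.3
    pop = rng.uniform(0, 3, size=(mu, 2))
    for gen in range(200):
        # Each parent produces offspring by Gaussian mutation.
        parents = pop[rng.integers(0, mu, size=lam)]
        offspring = parents + sigma * rng.standard_normal((lam, 2))
        fitness = np.array([residual_error(o, xs, ys) for o in offspring])
        pop = offspring[np.argsort(fitness)[:mu]]   # comma selection: keep mu best
        sigma *= 0.99                               # simple step-size decay

    print("best parameters:", pop[0], "error:", residual_error(pop[0], xs, ys))
    ```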

  10. Reverse engineering and analysis of large genome-scale gene networks

    PubMed Central

    Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas

    2013-01-01

    Reverse engineering the whole-genome networks of complex multicellular organisms remains a challenge. While simpler models easily scale to a large number of genes and gene expression datasets, more accurate models are compute-intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) a B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing, and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software tool, Gene Network Analyzer (GeNA), for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
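
    A hedged sketch of the mutual-information-plus-permutation-testing idea follows: MI is estimated here with a simple histogram rather than TINGe's B-spline formulation, the permutation threshold stands in for its direct permutation testing, and the data are invented.

    ```python
    # Histogram-MI edge calling with a permutation threshold; TINGe's B-spline
    # estimator and parallel algorithms are not reproduced here.
    import numpy as np

    rng = np.random.default_rng(2)

    def mutual_info(x, y, bins=8):
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

    # Two correlated "genes" and one independent one, over 300 samples.
    g1 = rng.standard_normal(300)
    g2 = g1 + 0.5 * rng.standard_normal(300)
    g3 = rng.standard_normal(300)

    # Permutation testing: null distribution of MI under shuffled samples.
    null = [mutual_info(rng.permutation(g1), g2) for _ in range(200)]
    threshold = np.quantile(null, 0.99)

    for name, other in [("g1-g2", g2), ("g1-g3", g3)]:
        mi = mutual_info(g1, other)
        print(name, "MI=%.3f" % mi, "edge" if mi > threshold else "no edge")
    ```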

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oler, Kiri J.; Miller, Carl H.

    In this paper, we present a methodology for reverse engineering integrated circuits, including a mathematical verification of a scalable algorithm used to generate minimal finite state machine representations of integrated circuits.

  12. Reverse Engineering Validation using a Benchmark Synthetic Gene Circuit in Human Cells

    PubMed Central

    Kang, Taek; White, Jacob T.; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas

    2013-01-01

    Multi-component biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network. PMID:23654266

  13. Reverse engineering validation using a benchmark synthetic gene circuit in human cells.

    PubMed

    Kang, Taek; White, Jacob T; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas

    2013-05-17

    Multicomponent biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network.

  14. Reconstruction of metabolic networks from high-throughput metabolite profiling data: in silico analysis of red blood cell metabolism.

    PubMed

    Nemenman, Ilya; Escola, G Sean; Hlavacek, William S; Unkefer, Pat J; Unkefer, Clifford J; Wall, Michael E

    2007-12-01

    We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For benchmarking purposes, we generate synthetic metabolic profiles based on a well-established model for red blood cell metabolism. A variety of data sets are generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We use ARACNE, a mainstream algorithm for reverse engineering of transcriptional regulatory networks from gene expression data, to predict metabolic interactions from these data sets. We find that the performance of ARACNE on metabolic data is comparable to that on gene expression data.
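
    ARACNE's distinguishing step is pruning by the data-processing inequality: within every fully connected triplet, the weakest mutual-information edge is treated as indirect and removed. The sketch below applies that rule to made-up MI values.

    ```python
    # Sketch of ARACNE's data-processing-inequality (DPI) pruning step.
    # MI values below are invented for illustration.
    from itertools import combinations

    mi = {                      # symmetric MI estimates between metabolites
        ("A", "B"): 0.90,
        ("B", "C"): 0.80,
        ("A", "C"): 0.25,       # likely an indirect A-B-C interaction
        ("C", "D"): 0.60,
    }

    def dpi_prune(mi_edges):
        nodes = sorted({n for edge in mi_edges for n in edge})
        def get(u, v):
            return mi_edges.get((u, v), mi_edges.get((v, u)))
        removed = set()
        for a, b, c in combinations(nodes, 3):
            vals = [(get(a, b), (a, b)), (get(b, c), (b, c)), (get(a, c), (a, c))]
            if all(v is not None for v, _ in vals):
                removed.add(min(vals)[1])          # drop the weakest edge
        return {e: v for e, v in mi_edges.items()
                if e not in removed and tuple(reversed(e)) not in removed}

    print(dpi_prune(mi))        # the A-C edge is pruned as indirect
    ```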

  15. Variable neighborhood search for reverse engineering of gene regulatory networks.

    PubMed

    Nicholson, Charles; Goodwin, Leslie; Clark, Corey

    2017-01-01

    A new search heuristic, Divided Neighborhood Exploration Search, designed to be used with inference algorithms such as Bayesian networks to improve on the reverse engineering of gene regulatory networks is presented. The approach systematically moves through the search space to find topologies representative of gene regulatory networks that are more likely to explain microarray data. In empirical testing it is demonstrated that the novel method is superior to the widely employed greedy search techniques in both the quality of the inferred networks and computational time. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.

    PubMed

    Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing

    2017-01-01

    Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method of network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs the genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.
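
    The genetic-algorithm side of such an approach can be sketched as a loop that mutates candidate inputs and keeps those achieving higher coverage. The coverage function, protocol keyword, and message format below are invented stand-ins for the instrumented binary and block-based test scripts the paper uses.

    ```python
    # Toy coverage-guided genetic search; the coverage function is a stand-in
    # for real binary instrumentation, and the keyword is hypothetical.
    import random

    random.seed(0)
    TARGET_PREFIX = b"GET "            # hypothetical protocol keyword

    def coverage(message: bytes) -> int:
        # Pretend each correctly matched prefix byte unlocks another code path.
        score = 0
        for got, want in zip(message, TARGET_PREFIX):
            if got != want:
                break
            score += 1
        return score

    def mutate(message: bytes) -> bytes:
        data = bytearray(message)
        data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    population = [bytes(random.randrange(256) for _ in range(4)) for _ in range(40)]
    for generation in range(500):
        population.sort(key=coverage, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(30)]
        if coverage(population[0]) == len(TARGET_PREFIX):
            break

    population.sort(key=coverage, reverse=True)
    print("best input:", population[0], "coverage:", coverage(population[0]))
    ```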

  17. The algorithm of central axis in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang

    2017-09-01

    Reverse engineering is an important technique for product imitation and new product development. Its core technology, surface reconstruction, is an active topic of current research. Among the various surface reconstruction algorithms, reconstruction based on the central (medial) axis is an important method. This paper summarizes the medial axis algorithms used for reconstruction, points out the problems that exist in the various methods and the aspects that need improvement, and discusses future directions for axis-based surface reconstruction.

  18. Software engineering capability for Ada (GRASP/Ada Tool)

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1995-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) has successfully created and prototyped a new algorithmic level graphical representation for Ada software, the Control Structure Diagram (CSD). The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis has been on the automatic generation of the CSD from Ada PDL or source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. A new Motif compliant graphical user interface has been developed for the GRASP/Ada prototype.

  19. Update of GRASP/Ada reverse engineering tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1992-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) has successfully created and prototyped a new algorithmic level graphical representation of Ada software, the Control Structure Diagram (CSD). The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis was on the automatic generation of the CSD from Ada PDL or source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. In Phase 1 of the GRASP/Ada project, the CSD graphical constructs were created and applied manually to several small Ada programs. A prototype (Version 1) was designed and implemented using FLEX and BISON running under VMS on a VAX 11-780. In Phase 2, the prototype was improved and ported to the Sun 4 platform under UNIX. A user interface was designed and partially implemented using the HP widget toolkit and the X Windows System. In Phase 3, the user interface was extensively reworked using the Athena widget toolkit and X Windows. The prototype was applied successfully to numerous Ada programs ranging in size from several hundred to several thousand lines of source code. Following Phase 3, the prototype was evaluated by software engineering students at Auburn University and then updated with significant enhancements to the user interface including editing capabilities. Version 3.2 of the prototype was prepared for limited distribution to facilitate further evaluation. The current prototype provides the capability for the user to generate CSD's from Ada PDL or source code in a reverse engineering as well as forward engineering mode with a level of flexibility suitable for practical application.

  20. The centroidal algorithm in molecular similarity and diversity calculations on confidential datasets.

    PubMed

    Trepalin, Sergey; Osadchiy, Nikolay

    2005-01-01

    Chemical structure provides an exhaustive description of a compound, but it is often proprietary and thus an impediment in the exchange of information. For example, structure disclosure is often needed for the selection of the most similar or dissimilar compounds. The authors propose a centroidal algorithm based on structural fragments (screens) that can be efficiently used for similarity and diversity selections without disclosing structures from the reference set. For increased security, the authors recommend that such a set contain at least some tens of structures. Analysis of reverse engineering feasibility showed that the problem difficulty grows as the screen radius decreases. The algorithm is illustrated with concrete calculations on known steroidal, quinoline, and quinazoline drugs. We also investigate a problem of scaffold identification in a combinatorial library dataset. The results show that relatively small screens of radius equal to 2 bond lengths perform well in similarity sorting, while radius 4 screens yield better results in diversity sorting. The software implementation of the algorithm takes an SDF file with a reference set and generates screens of various radii, which are subsequently used for the similarity and diversity sorting of external SDFs. Since the reverse engineering of the reference set molecules from their screens has the same difficulty as the RSA asymmetric encryption algorithm, the generated screens can be stored openly without further encryption. This approach ensures an end user transfers only a set of structural fragments and no other data. Like other encryption algorithms, the centroid algorithm cannot give a 100% guarantee of protecting a chemical structure from the dataset, but the probability of identifying the initial structure is very small, on the order of 10^-40 in typical cases.
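
    The screen-based comparison can be illustrated with generic bit-set fingerprints and Tanimoto similarity: external compounds are ranked against the reference set using fragment sets alone, so the reference structures never need to be disclosed. The fragment identifiers below are arbitrary placeholders, not the authors' screen encoding.

    ```python
    # Generic screen/fingerprint similarity sketch; fragment IDs are made up.
    def tanimoto(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    # Reference set, already reduced to fragment screens (integers stand in
    # for hashed structural fragments of some radius).
    reference_screens = [{1, 4, 7, 9}, {1, 4, 8, 12}, {2, 4, 7, 11}]

    # External compounds to rank, likewise reduced to screens.
    candidates = {"cmpd_A": {1, 4, 7, 10}, "cmpd_B": {3, 5, 13, 14}}

    for name, screen in candidates.items():
        best = max(tanimoto(screen, ref) for ref in reference_screens)
        print(f"{name}: max similarity to reference set = {best:.2f}")
    ```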

  1. The centroidal algorithm in molecular similarity and diversity calculations on confidential datasets

    NASA Astrophysics Data System (ADS)

    Trepalin, Sergey; Osadchiy, Nikolay

    2005-09-01

    Chemical structure provides an exhaustive description of a compound, but it is often proprietary and thus an impediment in the exchange of information. For example, structure disclosure is often needed for the selection of the most similar or dissimilar compounds. The authors propose a centroidal algorithm based on structural fragments (screens) that can be efficiently used for similarity and diversity selections without disclosing structures from the reference set. For increased security, the authors recommend that such a set contain at least some tens of structures. Analysis of reverse engineering feasibility showed that the problem difficulty grows as the screen radius decreases. The algorithm is illustrated with concrete calculations on known steroidal, quinoline, and quinazoline drugs. We also investigate a problem of scaffold identification in a combinatorial library dataset. The results show that relatively small screens of radius equal to 2 bond lengths perform well in similarity sorting, while radius 4 screens yield better results in diversity sorting. The software implementation of the algorithm takes an SDF file with a reference set and generates screens of various radii, which are subsequently used for the similarity and diversity sorting of external SDFs. Since the reverse engineering of the reference set molecules from their screens has the same difficulty as the RSA asymmetric encryption algorithm, the generated screens can be stored openly without further encryption. This approach ensures an end user transfers only a set of structural fragments and no other data. Like other encryption algorithms, the centroid algorithm cannot give a 100% guarantee of protecting a chemical structure from the dataset, but the probability of identifying the initial structure is very small, on the order of 10^-40 in typical cases.

  2. Acoustic Longitudinal Field NIF Optic Feature Detection Map Using Time-Reversal & MUSIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S K

    2006-02-09

    We developed an ultrasonic longitudinal field time-reversal and MUltiple SIgnal Classification (MUSIC) based detection algorithm for identifying and mapping flaws in fused silica NIF optics. The algorithm requires a fully multistatic data set, that is, one with multiple, independently operated, spatially diverse transducers, each transmitter of which, in succession, launches a pulse into the optic while the scattered signal is measured and recorded at every receiver. We have successfully localized engineered "defects" larger than 1 mm in an optic. We confirmed detection and localization of 3 mm and 5 mm features in experimental data, and of a 0.5 mm feature in simulated data with sufficiently high signal-to-noise ratio. We present the theory, experimental results, and simulated results.
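
    A single-frequency sketch of MUSIC applied to a multistatic response matrix is given below: the signal subspace is taken from the SVD of the matrix, and the pseudospectrum peaks where a test point's Green's-function vector lies in that subspace. Array geometry, wavenumber, and scatterer positions are invented, and none of the optic-specific processing is reproduced.

    ```python
    # One-frequency MUSIC sketch on a synthetic multistatic response matrix.
    import numpy as np

    k = 2 * np.pi / 1.0                          # wavenumber, wavelength = 1
    array_x = np.linspace(-5, 5, 16)             # 16 transducer positions on y = 0
    scatterers = [(1.0, 4.0), (-2.0, 6.0)]       # invented flaw locations

    def steering(point):
        px, py = point
        r = np.hypot(array_x - px, py)
        return np.exp(1j * k * r) / r             # free-space Green's function

    # Multistatic response matrix under the Born approximation, plus tiny noise.
    K = sum(np.outer(steering(s), steering(s)) for s in scatterers)
    K = K + 1e-6 * np.random.default_rng(3).standard_normal(K.shape)

    # Signal subspace from the SVD (one singular vector per scatterer).
    U, s, _ = np.linalg.svd(K)
    Us = U[:, :len(scatterers)]

    # MUSIC pseudospectrum: large where the steering vector lies in the signal subspace.
    xs, ys = np.linspace(-4, 4, 81), np.linspace(2, 8, 61)
    best = max(((x, y) for x in xs for y in ys),
               key=lambda p: 1.0 / np.linalg.norm(
                   steering(p) - Us @ (Us.conj().T @ steering(p))))
    print("strongest MUSIC peak near:", best)
    ```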

  3. The development of a program analysis environment for Ada: Reverse engineering tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1991-01-01

    The Graphical Representations of Algorithms, Structures, and Processes for Ada (GRASP/Ada) project has successfully created and prototyped a new algorithmic level graphical representation for Ada software, the Control Structure Diagram (CSD). The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and thus improve reliability and reduce costs. The emphasis was on the automatic generation of the CSD from Ada source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. In Phase 1 of the GRASP/Ada project, the CSD graphical constructs were created and applied manually to several small Ada programs. A prototype (Version 1) was designed and implemented using FLEX and BISON running under the Virtual Memory System (VMS) on a VAX 11-780. In Phase 2, the prototype was improved and ported to the Sun 4 platform under UNIX. A user interface was designed and partially implemented. The prototype was applied successfully to numerous Ada programs ranging in size from several hundred to several thousand lines of source code. In Phase 3 of the project, the prototype was prepared for limited distribution (GRASP/Ada Version 3.0) to facilitate evaluation. The user interface was extensively reworked. The current prototype provides the capability for the user to generate CSDs from Ada source code in a reverse engineering mode with a level of flexibility suitable for practical application.

  4. MORE: mixed optimization for reverse engineering--an application to modeling biological networks response via sparse systems of nonlinear differential equations.

    PubMed

    Sambo, Francesco; de Oca, Marco A Montes; Di Camillo, Barbara; Toffolo, Gianna; Stützle, Thomas

    2012-01-01

    Reverse engineering is the problem of inferring the structure of a network of interactions between biological variables from a set of observations. In this paper, we propose an optimization algorithm, called MORE, for the reverse engineering of biological networks from time series data. The model inferred by MORE is a sparse system of nonlinear differential equations, complex enough to realistically describe the dynamics of a biological system. MORE tackles separately the discrete component of the problem, the determination of the biological network topology, and the continuous component of the problem, the strength of the interactions. This approach allows us both to enforce system sparsity, by globally constraining the number of edges, and to integrate a priori information about the structure of the underlying interaction network. Experimental results on simulated and real-world networks show that the mixed discrete/continuous optimization approach of MORE significantly outperforms standard continuous optimization and that MORE is competitive with the state of the art in terms of accuracy of the inferred networks.

  5. Biomimetic robots using EAP as artificial muscles - progress and challenges

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph

    2004-01-01

    Biology offers a great model for emulation in areas ranging from tools, computational algorithms, materials science, mechanisms and information technology. In recent years, the field of biomimetics, namely mimicking biology, has blossomed with significant advances enabling the reverse engineering of many animals' functions and implementation of some of these capabilities.

  6. Update of GRASP/Ada reverse engineering tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1993-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) successfully created and prototyped a new algorithmic level graphical representation for Ada software, the Control Structure Diagram (CSD). The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis was on the automatic generation of the CSD from Ada PDL or source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional pretty printed Ada source code. In Phase 1 of the GRASP/Ada project, the CSD graphical constructs were created and applied manually to several small Ada programs. A prototype CSD generator (Version 1) was designed and implemented using FLEX and BISON running under VMS on a VAX 11-780. In Phase 2, the prototype was improved and ported to the Sun 4 platform under UNIX. A user interface was designed and partially implemented using the HP widget toolkit and the X Windows System. In Phase 3, the user interface was extensively reworked using the Athena widget toolkit and X Windows. The prototype was applied successfully to numerous Ada programs ranging in size from several hundred to several thousand lines of source code. Following Phase 3, two update phases were completed. Update'92 focused on the initial analysis of evaluation data collected from software engineering students at Auburn University and the addition of significant enhancements to the user interface. Update'93 (the current update) focused on the statistical analysis of the data collected in the previous update and preparation of Version 3.4 of the prototype for limited distribution to facilitate further evaluation. The current prototype provides the capability for the user to generate CSD's from Ada PDL or source code in a reverse engineering as well as forward engineering mode with a level of flexibility suitable for practical application. An overview of the GRASP/Ada project with an emphasis on the current update is provided.

  7. Optimal Fungal Space Searching Algorithms.

    PubMed

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase substantially with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  8. Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland

    1998-01-01

    Given the immanent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
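
    A minimal REVEAL-flavoured sketch: for each element of a toy synchronous Boolean network, search for the smallest input set whose states completely determine the element's next state, which is equivalent to zero conditional entropy, the criterion REVEAL tests via mutual information. The three-element network below is invented, not taken from the paper.

    ```python
    # Find, per element, the smallest input set that fully determines its next
    # state in a complete state transition table of a toy Boolean network.
    from itertools import combinations, product

    rules = {
        0: lambda s: s[1] and s[2],
        1: lambda s: not s[0],
        2: lambda s: s[0] or s[1],
    }
    true_inputs = {0: {1, 2}, 1: {0}, 2: {0, 1}}

    # Full state transition table (8 input states for n = 3).
    table = [(s, tuple(int(rules[i](s)) for i in range(3)))
             for s in product((0, 1), repeat=3)]

    def determined_by(target, input_set):
        # True if projecting onto input_set maps each value to a single output.
        seen = {}
        for state, nxt in table:
            key = tuple(state[i] for i in sorted(input_set))
            if seen.setdefault(key, nxt[target]) != nxt[target]:
                return False
        return True

    for target in range(3):
        for size in range(0, 4):
            found = next((set(c) for c in combinations(range(3), size)
                          if determined_by(target, c)), None)
            if found is not None:
                print(f"element {target}: inferred inputs {found} "
                      f"(true: {true_inputs[target]})")
                break
    ```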

  9. File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering

    DTIC Science & Technology

    2013-03-21

    ...consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then...the rate-of-change means and standard deviations. Karresand and Shahmehri use the same distance metric for both byte value frequency and rate-of-change.
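
    The byte-value rate-of-change statistic mentioned in the excerpt can be sketched as follows: for each fixed-size window, take absolute differences between consecutive byte values and summarise them by mean and standard deviation. The window size and example data below are arbitrary.

    ```python
    # Per-window byte rate-of-change statistics, as described in the excerpt.
    import os
    import statistics

    def rate_of_change_profile(data: bytes, window: int = 512):
        """Mean and standard deviation of |b[i+1] - b[i]| for each window."""
        profile = []
        for start in range(0, len(data) - window + 1, window):
            chunk = data[start:start + window]
            diffs = [abs(chunk[i + 1] - chunk[i]) for i in range(len(chunk) - 1)]
            profile.append((statistics.mean(diffs), statistics.pstdev(diffs)))
        return profile

    # Compare a repetitive text-like region with a high-entropy one.
    text_like = b"header header header " * 100
    random_like = os.urandom(2048)
    print("text  :", rate_of_change_profile(text_like)[:1])
    print("random:", rate_of_change_profile(random_like)[:1])
    ```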

  10. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    PubMed

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.

  11. GRASP/Ada 95: Reverse Engineering Tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1996-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) has successfully created and prototyped an algorithmic level graphical representation for Ada software, the Control Structure Diagram (CSD), and a new visualization for a fine-grained complexity metric called the Complexity Profile Graph (CPG). By synchronizing the CSD and the CPG, the CSD view of control structure, nesting, and source code is directly linked to the corresponding visualization of statement level complexity in the CPG. GRASP has been integrated with GNAT, the GNU Ada 95 Translator, to provide a comprehensive graphical user interface and development environment for Ada 95. The user may view, edit, print, and compile source code as a CSD with no discernible addition to storage or computational overhead. The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis has been on the automatic generation of the CSD from Ada 95 source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. The current update has focused on the design and implementation of a new Motif compliant user interface, and a new CSD generator consisting of a tagger and renderer. The Complexity Profile Graph (CPG) is based on a set of functions that describes the context, content, and the scaling for complexity on a statement by statement basis. When combined graphically, the result is a composite profile of complexity for the program unit. Ongoing research includes the development and refinement of the associated functions, and the development of the CPG generator prototype. The current Version 5.0 prototype provides the capability for the user to generate CSDs and CPGs from Ada 95 source code in a reverse engineering as well as forward engineering mode with a level of flexibility suitable for practical application. This report provides an overview of the GRASP/Ada project with an emphasis on the current update.

  12. Reverse engineering highlights potential principles of large gene regulatory network design and learning.

    PubMed

    Carré, Clément; Mas, André; Krouk, Gabriel

    2017-01-01

    Inferring transcriptional gene regulatory networks from transcriptomic datasets is a key challenge of systems biology, with potential impacts ranging from medicine to agronomy. There are several techniques used presently to experimentally assay transcription factor-to-target relationships, defining important information about real gene regulatory network connections. These techniques include classical ChIP-seq, yeast one-hybrid, or more recently, DAP-seq or target technologies. These techniques are usually used to validate algorithm predictions. Here, we developed a reverse engineering approach based on mathematical and computer simulation to evaluate the impact that this prior knowledge on gene regulatory networks may have on training machine learning algorithms. First, we developed a gene regulatory networks-simulating engine called FRANK (Fast Randomizing Algorithm for Network Knowledge) that is able to simulate large gene regulatory networks (containing 10^4 genes) with characteristics of gene regulatory networks observed in vivo. FRANK also generates stable or oscillatory gene expression directly produced by the simulated gene regulatory networks. The development of FRANK leads to important general conclusions concerning the design of large and stable gene regulatory networks harboring scale free properties (built ex nihilo). In combination with a supervised (prior-knowledge-accepting) support vector machine algorithm we (i) address biologically oriented questions concerning our capacity to accurately reconstruct gene regulatory networks and in particular we demonstrate that prior-knowledge structure is crucial for accurate learning, and (ii) draw conclusions to inform experimental design to perform learning able to solve gene regulatory networks in the future. By demonstrating that our predictions concerning the influence of the prior-knowledge structure on support vector machine learning capacity hold true on real data (Escherichia coli K14 network reconstruction using network and transcriptomic data), we show that the formalism used to build FRANK can to some extent be a reasonable model for gene regulatory networks in real cells.

  13. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.
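
    For context, a minimal point-to-point ICP sketch (nearest-neighbour correspondences plus a Kabsch/SVD rigid alignment) is given below; the paper's 3-D line features and KinectFusion integration are not reproduced, and the point clouds and pose are synthetic.

    ```python
    # Minimal point-to-point ICP with brute-force correspondences.
    import numpy as np

    rng = np.random.default_rng(4)
    model = rng.uniform(-1, 1, size=(200, 3))                # reference cloud

    # Ground-truth pose: small rotation about z plus a translation.
    theta, t_true = 0.3, np.array([0.2, -0.1, 0.05])
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    scene = model @ R_true.T + t_true

    R_est, t_est = np.eye(3), np.zeros(3)
    src = scene.copy()
    for _ in range(30):
        # Nearest-neighbour correspondences.
        d = np.linalg.norm(src[:, None, :] - model[None, :, :], axis=2)
        matched = model[np.argmin(d, axis=1)]
        # Best rigid transform for these correspondences (Kabsch/SVD).
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_est, t_est = R @ R_est, R @ t_est + t

    print("rotation recovery error:", np.linalg.norm(R_est @ R_true - np.eye(3)))
    ```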

  14. Mass Conservation and Inference of Metabolic Networks from High-Throughput Mass Spectrometry Data

    PubMed Central

    Bandaru, Pradeep; Bansal, Mukesh

    2011-01-01

    We present a step towards the metabolome-wide computational inference of cellular metabolic reaction networks from metabolic profiling data, such as mass spectrometry. The reconstruction is based on identification of irreducible statistical interactions among the metabolite activities using the ARACNE reverse-engineering algorithm and on constraining possible metabolic transformations to satisfy the conservation of mass. The resulting algorithms are validated on synthetic data from an abridged computational model of Escherichia coli metabolism. Precision rates upwards of 50% are routinely observed for identification of full metabolic reactions, and recalls upwards of 20% are also seen. PMID:21314454

  15. The tradition algorithm approach underestimates the prevalence of serodiagnosis of syphilis in HIV-infected individuals.

    PubMed

    Chen, Bin; Peng, Xiuming; Xie, Tiansheng; Jin, Changzhong; Liu, Fumin; Wu, Nanping

    2017-07-01

    Currently, there are three algorithms for syphilis screening: the traditional algorithm, the reverse algorithm, and the European Centre for Disease Prevention and Control (ECDC) algorithm. To date, there is no generally recognized diagnostic algorithm. When syphilis meets HIV, the situation is even more complex. To evaluate their screening performance and impact on the seroprevalence of syphilis in HIV-infected individuals, we conducted a cross-sectional study that included 865 serum samples from HIV-infected patients in a tertiary hospital. Every sample (one per patient) was tested with the toluidine red unheated serum test (TRUST), the T. pallidum particle agglutination assay (TPPA), and the Treponema pallidum enzyme immunoassay (TP-EIA) according to the manufacturer's instructions. The results of syphilis serological testing were interpreted following each of the different algorithms. We directly compared the traditional syphilis screening algorithm with the reverse syphilis screening algorithm in this unique population. The reverse algorithm yielded a remarkably higher seroprevalence of syphilis than the traditional algorithm (24.9% vs. 14.2%, p < 0.0001). Compared to the reverse algorithm, the traditional algorithm also had a missed serodiagnosis rate of 42.8%. The total percentages of agreement and corresponding kappa values of the traditional and ECDC algorithms compared with the reverse algorithm were as follows: 89.4%, 0.668 and 99.8%, 0.994, respectively. There was a very good strength of agreement between the reverse and the ECDC algorithms. Our results supported the reverse (or ECDC) algorithm in screening of syphilis in HIV-infected populations. In addition, our study demonstrated that screening of HIV-infected populations using different algorithms may result in a statistically different seroprevalence of syphilis.
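
    The difference between the two screening orders can be reduced to a small piece of decision logic, sketched below with the three assays named in the abstract (TRUST, TP-EIA, TPPA). This is a simplification for illustration, not a clinical rule, and the ECDC variant's details are omitted.

    ```python
    # Simplified interpretation logic for the two screening orders; not a
    # clinical decision rule.
    def traditional_algorithm(trust: bool, tppa: bool) -> bool:
        # Screen with the non-treponemal TRUST first; only reactive samples
        # are confirmed with the treponemal TPPA.
        return trust and tppa

    def reverse_algorithm(tp_eia: bool, trust: bool, tppa: bool) -> bool:
        # Screen with the treponemal TP-EIA first; a non-reactive TRUST is
        # resolved with TPPA as the second treponemal test.
        if not tp_eia:
            return False
        return True if trust else tppa

    # A sample that is TP-EIA and TPPA reactive but TRUST non-reactive (e.g. a
    # previously treated or latent infection) is missed by the traditional order.
    sample = dict(tp_eia=True, trust=False, tppa=True)
    print("traditional:", traditional_algorithm(sample["trust"], sample["tppa"]))
    print("reverse    :", reverse_algorithm(**sample))
    ```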

  16. Statistical Inference and Reverse Engineering of Gene Regulatory Networks from Observational Expression Data

    PubMed Central

    Emmert-Streib, Frank; Glazko, Galina V.; Altay, Gökmen; de Matos Simoes, Ricardo

    2012-01-01

    In this paper, we present a systematic and conceptual overview of methods for inferring gene regulatory networks from observational gene expression data. Further, we discuss two classic approaches to infer causal structures and compare them with contemporary methods by providing a conceptual categorization thereof. We complement the above by surveying global and local evaluation measures for assessing the performance of inference algorithms. PMID:22408642

  17. Evolutionary Algorithm Based Automated Reverse Engineering and Defect Discovery

    DTIC Science & Technology

    2007-09-21

    a previous application of a GP as a data mining function to evolve fuzzy decision trees symbolically [3-5], the terminal set consisted of fuzzy...of input and output information is required. In the case of fuzzy decision trees, the database represented a collection of scenarios about which the...fuzzy decision tree to be evolved would make decisions. The database also had entries created by experts representing decisions about the scenarios.

  18. Reverse-engineering of gene networks for regulating early blood development from single-cell measurements.

    PubMed

    Wei, Jiangyong; Hu, Xiaohua; Zou, Xiufen; Tian, Tianhai

    2017-12-28

    Recent advances in omics technologies have raised great opportunities to study large-scale regulatory networks inside the cell. In addition, single-cell experiments have measured the gene and protein activities in a large number of cells under the same experimental conditions. However, a significant challenge in computational biology and bioinformatics is how to derive quantitative information from the single-cell observations and how to develop sophisticated mathematical models to describe the dynamic properties of regulatory networks using the derived quantitative information. This work designs an integrated approach to reverse-engineer gene networks for regulating early blood development based on single-cell experimental observations. The wanderlust algorithm is initially used to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression data in the developed pseudo-trajectory show large fluctuations, we then use Gaussian process regression methods to smooth the gene expression data in order to obtain pseudo-trajectories with much smaller fluctuations. The proposed integrated framework consists of both bioinformatics algorithms to reconstruct the regulatory network and mathematical models using differential equations to describe the dynamics of gene expression. The developed approach is applied to study the network regulating early blood cell development. A graphic model is constructed for a regulatory network with forty genes and a dynamic model using differential equations is developed for a network of nine genes. Numerical results suggest that the proposed model is able to match experimental data very well. We also examine the networks with more regulatory relations and numerical results show that more regulations may exist. We test the possibility of auto-regulation but numerical simulations do not support positive auto-regulation. In addition, robustness is used as an important additional criterion to select candidate networks. The research results in this work show that the developed approach is an efficient and effective method to reverse-engineer gene networks using single-cell experimental observations.
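
    The smoothing step can be sketched with a plain Gaussian-process posterior mean over a synthetic pseudo-trajectory; the RBF kernel, noise level, and sigmoidal expression profile below are assumptions, and the wanderlust ordering is taken as given.

    ```python
    # GP-regression smoothing of noisy expression along a synthetic pseudotime.
    import numpy as np

    rng = np.random.default_rng(5)

    def rbf(a, b, length=0.1):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    # Pseudotime in [0, 1] and a noisy sigmoidal expression profile along it.
    t = np.sort(rng.uniform(0, 1, 120))
    expr = 1.0 / (1.0 + np.exp(-12 * (t - 0.5))) + 0.15 * rng.standard_normal(t.size)

    # GP posterior mean on a regular grid (assumed noise variance 0.15**2).
    grid = np.linspace(0, 1, 50)
    K = rbf(t, t) + 0.15 ** 2 * np.eye(t.size)
    smoothed = rbf(grid, t) @ np.linalg.solve(K, expr)

    print("smoothed values near pseudotime 0, 0.5, 1:",
          np.round(smoothed[[0, 24, 49]], 2))
    ```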

  19. Transmission mode acoustic time-reversal imaging for nondestructive evaluation

    NASA Astrophysics Data System (ADS)

    Lehman, Sean K.; Devaney, Anthony J.

    2002-11-01

    In previous ASA meetings and JASA papers, the extended and formalized theory of transmission mode time reversal in which the transceivers are noncoincident was presented. When combined with the subspace concepts of a generalized MUltiple SIgnal Classification (MUSIC) algorithm, this theory is used to form super-resolution images of scatterers buried in a medium. These techniques are now applied to ultrasonic nondestructive evaluation (NDE) of parts, and shallow subsurface seismic imaging. Results are presented of NDE experiments on metal and epoxy blocks using data collected from an adaptive ultrasonic array, that is, a "time-reversal machine," at Lawrence Livermore National Laboratory. Also presented are the results of seismo-acoustic subsurface probing of buried hazardous waste pits at the Idaho National Engineering and Environmental Laboratory. [Work performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.] [Work supported in part by CenSSIS, the Center for Subsurface Sensing and Imaging Systems, under the Engineering Research Centers Program of the NSF (award number EEC-9986821) as well as from Air Force Contracts No. F41624-99-D6002 and No. F49620-99-C0013.]

  20. Intelligent Engine Systems: Adaptive Control

    NASA Technical Reports Server (NTRS)

    Gibson, Nathan

    2008-01-01

    We have studied the application of the baseline Model Predictive Control (MPC) algorithm to the control of main fuel flow rate (WF36), variable bleed valve (AE24) and variable stator vane (STP25) control of a simulated high-bypass turbofan engine. Using reference trajectories for thrust and turbine inlet temperature (T41) generated by a simulated new engine, we have examined MPC for tracking these two reference outputs while controlling a deteriorated engine. We have examined the results of MPC control for six different transients: two idle-to-takeoff transients at sea level static (SLS) conditions, one takeoff-to-idle transient at SLS, a Bode power command and reverse Bode power command at 20,000 ft/Mach 0.5, and a reverse Bode transient at 35,000 ft/Mach 0.84. For all cases, our primary focus was on the computational effort required by MPC for varying MPC update rates, control horizons, and prediction horizons. We have also considered the effects of these MPC parameters on the performance of the control, with special emphasis on the thrust tracking error, the peak T41, and the sizes of violations of the constraints on the problem, primarily the booster stall margin limit, which for most cases is the lone constraint that is violated with any frequency.

  1. Genetic Network Inference: From Co-Expression Clustering to Reverse Engineering

    NASA Technical Reports Server (NTRS)

    Dhaeseleer, Patrik; Liang, Shoudan; Somogyi, Roland

    2000-01-01

    Advances in molecular biological, analytical, and computational technologies are enabling us to systematically investigate the complex molecular processes underlying biological systems. In particular, using high-throughput gene expression assays, we are able to measure the output of the gene regulatory network. We aim here to review data mining and modeling approaches for conceptualizing and unraveling the functional relationships implicit in these datasets. Clustering of co-expression profiles allows us to infer shared regulatory inputs and functional pathways. We discuss various aspects of clustering, ranging from distance measures to clustering algorithms and multiple-cluster memberships. More advanced analysis aims to infer causal connections between genes directly, i.e., who is regulating whom and how. We discuss several approaches to the problem of reverse engineering of genetic networks, from discrete Boolean networks, to continuous linear and non-linear models. We conclude that the combination of predictive modeling with systematic experimental verification will be required to gain a deeper insight into living organisms, therapeutic targeting, and bioengineering.

  2. ARACNe-AP: Gene Network Reverse Engineering through Adaptive Partitioning inference of Mutual Information.

    Cancer.gov

    The accurate reconstruction of gene regulatory networks from large scale molecular profile datasets represents one of the grand challenges of Systems Biology. The Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNe) represents one of the most effective tools to accomplish this goal. However, the initial Fixed Bandwidth (FB) implementation is both inefficient and unable to deal with sample sets providing largely uneven coverage of the probability density space.
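
    A hedged sketch of the basic ARACNe idea (pairwise mutual information followed by data-processing-inequality pruning) is given below; a simple histogram estimator stands in for the adaptive-partitioning estimator, and the data, bin count, and threshold are illustrative.

    ```python
    # Sketch of ARACNe-style network pruning: pairwise mutual information plus the
    # data processing inequality (DPI). A histogram MI estimator stands in for the
    # adaptive-partitioning estimator; data and thresholds are hypothetical.
    import numpy as np

    def mutual_information(x, y, bins=8):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

    rng = np.random.default_rng(1)
    n_samples, n_genes = 500, 5
    expr = rng.normal(size=(n_samples, n_genes))
    expr[:, 1] += 0.8 * expr[:, 0]            # gene 0 regulates gene 1
    expr[:, 2] += 0.8 * expr[:, 1]            # gene 1 regulates gene 2 (0 -> 2 is indirect)

    mi = np.zeros((n_genes, n_genes))
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            mi[i, j] = mi[j, i] = mutual_information(expr[:, i], expr[:, j])

    edges = {(i, j) for i in range(n_genes) for j in range(i + 1, n_genes) if mi[i, j] > 0.1}
    for i, j in list(edges):
        for k in range(n_genes):
            if k in (i, j):
                continue
            # DPI: drop the weakest edge of any triangle, removing likely indirect links.
            if mi[i, j] < min(mi[i, k], mi[j, k]):
                edges.discard((i, j))
                break
    ```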

  3. Protein Engineering for Nicotinamide Coenzyme Specificity in Oxidoreductases: Attempts and Challenges.

    PubMed

    Chánique, Andrea M; Parra, Loreto P

    2018-01-01

    Oxidoreductases are ubiquitous enzymes that catalyze an extensive range of chemical reactions with great specificity, efficiency, and selectivity. Most oxidoreductases are nicotinamide cofactor-dependent enzymes with a strong preference for NADP or NAD. Because these coenzymes differ in stability, bioavailability, and cost, the enzyme preference for a specific coenzyme is an important issue for practical applications. Different approaches for the manipulation of coenzyme specificity have been reported, with different degrees of success. Here we present various attempts for the switching of nicotinamide coenzyme preference in oxidoreductases by protein engineering. This review covers 103 enzyme engineering studies from 82 articles and evaluates the accomplishments in terms of coenzyme specificity and catalytic efficiency compared to wild-type enzymes of different classes. We analyzed different protein engineering strategies and related them to the degree of success in inverting the cofactor specificity. In general, catalytic activity is compromised when coenzyme specificity is reversed; however, when switching from NAD to NADP, better results are obtained. In most cases, rational strategies were used, with loop exchange predominantly generating the best results. In general, removing acidic residues and incorporating basic residues is the strategy of choice when trying to change specificity from NAD to NADP, and vice versa. Computational strategies and algorithms are also covered as helpful tools to guide protein engineering strategies. This mini review aims to give a general introduction to the topic, giving an overview of tools and information to work in protein engineering for the reversal of coenzyme specificity.

  4. Reverse engineering gene regulatory networks from measurement with missing values.

    PubMed

    Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong

    2016-12-01

    Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: for either the expression values of some genes at some time points or the entire expression values of a single time point or some sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule, and the central difference rule for computing the related posteriors. The proposed algorithm is evaluated with satisfactory results for synthetic networks, in silico networks released as a part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces a better inference of the model parameters and hence a more accurate prediction of the underlying GRN compared to conventional Gaussian approximation (GA) filters that ignore the missing data points.
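
    One of the quadrature rules named above, the unscented transform, can be sketched in a few lines; the Hill-type nonlinearity and the Gaussian state used here are illustrative stand-ins for the gene-expression model, not the authors' formulation.

    ```python
    # Minimal sketch of the unscented transform, one of the quadrature rules a
    # point-based Gaussian approximation filter can use to propagate a Gaussian
    # through a nonlinear map; the nonlinearity and parameters are illustrative.
    import numpy as np

    def unscented_transform(mean, cov, f, kappa=None):
        n = mean.size
        if kappa is None:
            kappa = 3.0 - n                                   # a common default choice
        S = np.linalg.cholesky((n + kappa) * cov)
        sigma = np.vstack([mean, mean + S.T, mean - S.T])     # 2n+1 sigma points
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        Y = np.array([f(s) for s in sigma])                   # push sigma points through f
        y_mean = w @ Y
        diffs = Y - y_mean
        y_cov = (w[:, None] * diffs).T @ diffs
        return y_mean, y_cov

    # Example: a Hill-type activation nonlinearity acting on a 2-D Gaussian state.
    hill = lambda x: x ** 2 / (1.0 + x ** 2)
    m, P = np.array([0.5, 1.0]), np.diag([0.04, 0.09])
    ym, yP = unscented_transform(m, P, hill)
    ```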

  5. Comparison of Traditional and Reverse Syphilis Screening Algorithms in Medical Health Checkups.

    PubMed

    Nah, Eun Hee; Cho, Seon; Kim, Suyoung; Cho, Han Ik; Chai, Jong Yil

    2017-11-01

    The syphilis diagnostic algorithms applied in different countries vary significantly depending on the local syphilis epidemiology and other considerations, including the expected workload, the need for automation in the laboratory, and budget factors. This study was performed to investigate the efficacy of traditional and reverse syphilis diagnostic algorithms during general health checkups. In total, 1,000 blood specimens were obtained from 908 men and 92 women during their regular health checkups. Traditional screening and reverse screening were applied to the same specimens using automatic rapid plasma reagin (RPR) and Treponema pallidum latex agglutination (TPLA) tests, respectively. Specimens that were reverse-algorithm (TPLA) reactive were subjected to a second treponemal test performed using the chemiluminescent microparticle immunoassay (CMIA). Of the 1,000 specimens tested, 68 (6.8%) were reactive by reverse screening (TPLA) compared with 11 (1.1%) by traditional screening (RPR). The traditional algorithm failed to detect 48 specimens [TPLA(+)/RPR(-)/CMIA(+)]. The median TPLA cutoff index (COI) was higher in CMIA-reactive cases than in CMIA-nonreactive cases (90.5 vs 12.5 U). The reverse screening algorithm could detect subjects with possible latent syphilis who were not detected by the traditional algorithm. Those individuals could be provided with opportunities for evaluating syphilis during their health checkups. The COI values of the initial TPLA test may be helpful in excluding false-positive TPLA results in the reverse algorithm. © The Korean Society for Laboratory Medicine
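
    The two screening flows can be summarized as simple decision functions, sketched below purely for illustration; the result labels are informal and this is not clinical guidance.

    ```python
    # Hedged sketch of the two screening flows described above, written as simple
    # decision functions; labels are illustrative, not a clinical recommendation.
    def traditional_screen(rpr_reactive, tpla_reactive=None):
        """Nontreponemal (RPR) first; reactive screens are confirmed with a treponemal test."""
        if not rpr_reactive:
            return "negative screen"
        return "syphilis (confirmed)" if tpla_reactive else "possible false-positive RPR"

    def reverse_screen(tpla_reactive, rpr_reactive=None, cmia_reactive=None):
        """Treponemal (TPLA) first; discordant results go to a second treponemal test (CMIA)."""
        if not tpla_reactive:
            return "negative screen"
        if rpr_reactive:
            return "syphilis (current or recent)"
        return "possible latent or past syphilis" if cmia_reactive else "likely false-positive TPLA"
    ```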

  6. 14 CFR 25.934 - Turbojet engine thrust reverser system tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 14, Aeronautics and Space: Turbojet engine thrust reverser system... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES, Powerplant, General, § 25.934 Turbojet engine thrust reverser system tests. Thrust reversers installed on turbojet engines must meet the...

  7. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    PubMed

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  8. Protein Engineering for Nicotinamide Coenzyme Specificity in Oxidoreductases: Attempts and Challenges

    PubMed Central

    Chánique, Andrea M.; Parra, Loreto P.

    2018-01-01

    Oxidoreductases are ubiquitous enzymes that catalyze an extensive range of chemical reactions with great specificity, efficiency, and selectivity. Most oxidoreductases are nicotinamide cofactor-dependent enzymes with a strong preference for NADP or NAD. Because these coenzymes differ in stability, bioavailability, and cost, the enzyme preference for a specific coenzyme is an important issue for practical applications. Different approaches for the manipulation of coenzyme specificity have been reported, with different degrees of success. Here we present various attempts for the switching of nicotinamide coenzyme preference in oxidoreductases by protein engineering. This review covers 103 enzyme engineering studies from 82 articles and evaluates the accomplishments in terms of coenzyme specificity and catalytic efficiency compared to wild-type enzymes of different classes. We analyzed different protein engineering strategies and related them to the degree of success in inverting the cofactor specificity. In general, catalytic activity is compromised when coenzyme specificity is reversed; however, when switching from NAD to NADP, better results are obtained. In most cases, rational strategies were used, with loop exchange predominantly generating the best results. In general, removing acidic residues and incorporating basic residues is the strategy of choice when trying to change specificity from NAD to NADP, and vice versa. Computational strategies and algorithms are also covered as helpful tools to guide protein engineering strategies. This mini review aims to give a general introduction to the topic, giving an overview of tools and information to work in protein engineering for the reversal of coenzyme specificity. PMID:29491854

  9. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  10. Learning Biological Networks via Bootstrapping with Optimized GO-based Gene Similarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Ronald C.; Sanfilippo, Antonio P.; McDermott, Jason E.

    2010-08-02

    Microarray gene expression data provide a unique information resource for learning biological networks using "reverse engineering" methods. However, there are a variety of cases in which we know which genes are involved in a given pathology of interest, but we do not have enough experimental evidence to support the use of fully-supervised/reverse-engineering learning methods. In this paper, we explore a novel semi-supervised approach in which biological networks are learned from a reference list of genes and a partial set of links for these genes extracted automatically from PubMed abstracts, using a knowledge-driven bootstrapping algorithm. We show how new relevant links across genes can be iteratively derived using a gene similarity measure based on the Gene Ontology that is optimized on the input network at each iteration. We describe an application of this approach to the TGFB pathway as a case study and show how the ensuing results prove the feasibility of the approach as an alternate or complementary technique to fully supervised methods.

  11. Reverse Engineering a Signaling Network Using Alternative Inputs

    PubMed Central

    Tanaka, Hiromasa; Yi, Tau-Mu

    2009-01-01

    One of the goals of systems biology is to reverse engineer in a comprehensive fashion the arrow diagrams of signal transduction systems. An important tool for ordering pathway components is genetic epistasis analysis, and here we present a strategy termed Alternative Inputs (AIs) to perform systematic epistasis analysis. An alternative input is defined as any genetic manipulation that can activate the signaling pathway instead of the natural input. We introduced the concept of an “AIs-Deletions matrix” that summarizes the outputs of all combinations of alternative inputs and deletions. We developed the theory and algorithms to construct a pairwise relationship graph from the AIs-Deletions matrix capturing both functional ordering (upstream, downstream) and logical relationships (AND, OR), and then interpreting these relationships into a standard arrow diagram. As a proof-of-principle, we applied this methodology to a subset of genes involved in yeast mating signaling. This experimental pilot study highlights the robustness of the approach and important technical challenges. In summary, this research formalizes and extends classical epistasis analysis from linear pathways to more complex networks, facilitating computational analysis and reconstruction of signaling arrow diagrams. PMID:19898612

  12. Sampling solution traces for the problem of sorting permutations by signed reversals

    PubMed Central

    2012-01-01

    Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of big permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists in a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results show that, for testable-sized permutations, the algorithms DFALT and SWA produce distributions which approximate the reversal length distributions observed with a complete enumeration of the set of traces. PMID:22704580

  13. Reverse time migration: A seismic processing application on the connection machine

    NASA Technical Reports Server (NTRS)

    Fiebrich, Rolf-Dieter

    1987-01-01

    The implementation of a reverse time migration algorithm on the Connection Machine, a massively parallel computer is described. Essential architectural features of this machine as well as programming concepts are presented. The data structures and parallel operations for the implementation of the reverse time migration algorithm are described. The algorithm matches the Connection Machine architecture closely and executes almost at the peak performance of this machine.

  14. Prevalence of Traditional and Reverse-Algorithm Syphilis Screening in Laboratory Practice: A Survey of Participants in the College of American Pathologists Syphilis Serology Proficiency Testing Program.

    PubMed

    Rhoads, Daniel D; Genzen, Jonathan R; Bashleben, Christine P; Faix, James D; Ansari, M Qasim

    2017-01-01

    Syphilis serology screening in laboratory practice is evolving. Traditionally, the syphilis screening algorithm begins with a nontreponemal immunoassay, which is manually performed by a laboratory technologist. In contrast, the reverse algorithm begins with a treponemal immunoassay, which can be automated. The Centers for Disease Control and Prevention has recognized both approaches, but little is known about the current state of laboratory practice, which could impact test utilization and interpretation. To assess the current state of laboratory practice for syphilis serologic screening, a voluntary questionnaire was sent in August 2015 to the 2360 laboratories that subscribe to the College of American Pathologists syphilis serology proficiency survey. Of the laboratories surveyed, 98% (2316 of 2360) returned the questionnaire, and about 83% (1911 of 2316) responded to at least some questions. Twenty-eight percent (378 of 1364) reported revision of their syphilis screening algorithm within the past 2 years, and 9% (170 of 1905) of laboratories anticipated changing their screening algorithm in the coming year. Sixty-three percent (1205 of 1911) reported using the traditional algorithm, 16% (304 of 1911) reported using the reverse algorithm, and 2.5% (47 of 1911) reported using both algorithms, whereas 9% (169 of 1911) reported not performing a reflex confirmation test. Of those performing the reverse algorithm, 74% (282 of 380) implemented a new testing platform when introducing the new algorithm. The majority of laboratories still perform the traditional algorithm, but a significant minority have implemented the reverse-screening algorithm. Although the nontreponemal immunologic response typically wanes after cure and becomes undetectable, treponemal immunoassays typically remain positive for life, and it is important for laboratorians and clinicians to consider these assay differences when implementing, using, and interpreting serologic syphilis screening algorithms.

  15. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempts to establish an optimized decision model for production planning and distribution of a multi-phase, multi-product reverse supply chain, which addresses defects returned to original manufacturers, and, in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. Through a case study of a multi-phase, multi-product reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods.

  16. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempts to establish an optimized decision model for production planning and distribution of a multi-phase, multi-product reverse supply chain, which addresses defects returned to original manufacturers, and, in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. Through a case study of a multi-phase, multi-product reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057

  17. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    PubMed

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms to compute the minimal reversal distance have been proposed, reaching the currently best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first one uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm; the second one improves the previous algorithm by applying the heuristic of two-breakpoint elimination, leading to a hybrid approach. Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. Results of the experiments showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA was shown to improve the results of OBMA for permutations of length greater than or equal to 60. The applicability of our proposed algorithms was checked by processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
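
    For context, the sketch below shows a naive baseline that places one element per reversal, together with the breakpoint count that heuristics such as breakpoint elimination aim to drive to zero; the memetic OBMA/Hybrid-OBMA algorithms search for far shorter reversal sequences than this baseline produces.

    ```python
    # Hedged sketch: a naive baseline for sorting an unsigned permutation of 1..n
    # by reversals, plus the breakpoint count used by breakpoint-based heuristics.
    # This is not the OBMA/Hybrid-OBMA memetic algorithm, only a reference point.
    def breakpoints(perm):
        ext = [0] + list(perm) + [len(perm) + 1]
        return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

    def baseline_sort_by_reversals(perm):
        perm, reversals = list(perm), []
        for i in range(len(perm)):
            j = perm.index(i + 1)                     # locate the value that belongs at position i
            if j != i:
                perm[i:j + 1] = perm[i:j + 1][::-1]   # one reversal places it
                reversals.append((i, j))
        return reversals

    print(breakpoints([3, 1, 2, 5, 4]), baseline_sort_by_reversals([3, 1, 2, 5, 4]))
    ```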

  18. Integration of On-Line and Off-Line Diagnostic Algorithms for Aircraft Engine Health Management

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2007-01-01

    This paper investigates the integration of on-line and off-line diagnostic algorithms for aircraft gas turbine engines. The on-line diagnostic algorithm is designed for in-flight fault detection. It continuously monitors engine outputs for anomalous signatures induced by faults. The off-line diagnostic algorithm is designed to track engine health degradation over the lifetime of an engine. It estimates engine health degradation periodically over the course of the engine's life. The estimate generated by the off-line algorithm is used to update the on-line algorithm. Through this integration, the on-line algorithm becomes aware of engine health degradation, and its effectiveness to detect faults can be maintained while the engine continues to degrade. The benefit of this integration is investigated in a simulation environment using a nonlinear engine model.

  19. 14 CFR 23.934 - Turbojet and turbofan engine thrust reverser systems tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 14, Aeronautics and Space: Turbojet and turbofan engine thrust... CATEGORY AIRPLANES, Powerplant, General, § 23.934 Turbojet and turbofan engine thrust reverser systems tests. Thrust reverser systems of turbojet or turbofan engines must meet the requirements of § 33.97 of this...

  20. 14 CFR 23.1155 - Turbine engine reverse thrust and propeller pitch settings below the flight regime.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 14, Aeronautics and Space: Turbine engine reverse thrust and propeller... COMMUTER CATEGORY AIRPLANES, Powerplant, Powerplant Controls and Accessories, § 23.1155 Turbine engine reverse thrust and propeller pitch settings below the flight regime. For turbine engine installations, each...

  1. 14 CFR 23.1155 - Turbine engine reverse thrust and propeller pitch settings below the flight regime.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Title 14, Aeronautics and Space: Turbine engine reverse thrust and propeller... COMMUTER CATEGORY AIRPLANES, Powerplant, Powerplant Controls and Accessories, § 23.1155 Turbine engine reverse thrust and propeller pitch settings below the flight regime. For turbine engine installations, each...

  2. 14 CFR 23.1155 - Turbine engine reverse thrust and propeller pitch settings below the flight regime.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Title 14, Aeronautics and Space: Turbine engine reverse thrust and propeller... COMMUTER CATEGORY AIRPLANES, Powerplant, Powerplant Controls and Accessories, § 23.1155 Turbine engine reverse thrust and propeller pitch settings below the flight regime. For turbine engine installations, each...

  3. 14 CFR 23.1155 - Turbine engine reverse thrust and propeller pitch settings below the flight regime.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    Title 14, Aeronautics and Space: Turbine engine reverse thrust and propeller... COMMUTER CATEGORY AIRPLANES, Powerplant, Powerplant Controls and Accessories, § 23.1155 Turbine engine reverse thrust and propeller pitch settings below the flight regime. For turbine engine installations, each...

  4. 14 CFR 23.1155 - Turbine engine reverse thrust and propeller pitch settings below the flight regime.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Title 14, Aeronautics and Space: Turbine engine reverse thrust and propeller... COMMUTER CATEGORY AIRPLANES, Powerplant, Powerplant Controls and Accessories, § 23.1155 Turbine engine reverse thrust and propeller pitch settings below the flight regime. For turbine engine installations, each...

  5. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    PubMed Central

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach consists of two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack–Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463

  6. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT.

    PubMed

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A; Pan, Xiaochuan

    2010-01-01

    Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. The proposed approach consists of two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  7. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  8. Extending the boundaries of reverse engineering

    NASA Astrophysics Data System (ADS)

    Lawrie, Chris

    2002-04-01

    In today's market place the potential of Reverse Engineering as a time compression tool is commonly lost under its traditional definition. The term Reverse Engineering was coined way back at the advent of CMM machines and 3D CAD systems to describe the process of fitting surfaces to captured point data. Since these early beginnings, downstream hardware scanning and digitising systems have evolved in parallel with an upstream demand, greatly increasing the potential of a point cloud data set within engineering design and manufacturing processes. The paper will discuss the issues surrounding Reverse Engineering at the turn of the millennium.

  9. Reversible Data Hiding Based on DNA Computing

    PubMed Central

    Xie, Yingjie

    2017-01-01

    Biocomputing, especially DNA computing, has seen great development. It is widely used in information security. In this paper, a novel algorithm of reversible data hiding based on DNA computing is proposed. Inspired by the algorithm of histogram modification, which is a classical algorithm for reversible data hiding, we combine it with DNA computing to realize this algorithm based on biological technology. Compared with previous results, our experimental results show a significantly improved ER (embedding rate). Furthermore, the PSNR (peak signal-to-noise ratio) values of some test images are also improved. Experimental results show that the algorithm is suitable for protecting the copyright of the cover image in DNA-based information security. PMID:28280504
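
    The histogram-modification idea the algorithm builds on can be sketched as classical histogram shifting; the snippet below assumes an 8-bit grayscale image with an empty bin above the histogram peak and omits the overflow bookkeeping and DNA-encoding stages of the actual method.

    ```python
    # Sketch of classical histogram-shifting reversible data hiding (the histogram
    # modification idea referenced above); assumes peak < 255 and an empty bin above it.
    import numpy as np

    def embed(image, bits):
        img = image.astype(np.int32)
        hist = np.bincount(img.ravel(), minlength=256)
        peak = int(hist.argmax())                           # most frequent gray level
        zero = int(hist[peak + 1:].argmin()) + peak + 1     # empty (or rarest) level above it
        img[(img > peak) & (img < zero)] += 1               # shift to free the bin peak+1
        flat = img.ravel()
        idx = np.flatnonzero(flat == peak)[:len(bits)]      # capacity = count of peak pixels
        flat[idx] += np.asarray(bits[:idx.size], dtype=np.int32)  # peak = bit 0, peak+1 = bit 1
        return img, peak, zero

    def extract(stego, peak, zero, n_bits):
        img = stego.astype(np.int32)
        flat = img.ravel()
        carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
        bits = (flat[carriers] == peak + 1).astype(int).tolist()
        flat[carriers] = peak                                # undo embedding
        img[(img > peak) & (img <= zero)] -= 1               # undo the shift, restoring the cover
        return bits, img
    ```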

  10. Large-Scale Wind-Tunnel Tests of Exhaust Ingestion Due to Thrust Reversal on a Four-Engine Jet Transport during Ground Roll

    NASA Technical Reports Server (NTRS)

    Tolhurst, William H., Jr.; Hickey, David H.; Aoyagi, Kiyoshi

    1961-01-01

    Wind-tunnel tests have been conducted on a large-scale model of a swept-wing jet transport type airplane to study the factors affecting exhaust gas ingestion into the engine inlets when thrust reversal is used during ground roll. The model was equipped with four small jet engines mounted in nacelles beneath the wing. The tests included studies of both cascade and target type reversers. The data obtained included the free-stream velocity at the occurrence of exhaust gas ingestion in the outboard engine and the increment of drag due to thrust reversal for various modifications of thrust reverser configuration. Motion picture films of smoke flow studies were also obtained to supplement the data. The results show that the free-stream velocity at which ingestion occurred in the outboard engines could be reduced considerably, by simple modifications to the reversers, without reducing the effective drag due to reversed thrust.

  11. Enhanced encrypted reversible data hiding algorithm with minimum distortion through homomorphic encryption

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Rupali

    2018-03-01

    Reversible data hiding means embedding a secret message in a cover image in such a manner that, during extraction of the secret message, both the cover image and the secret message are recovered with no error. The goal of most reversible data hiding algorithms is an improved embedding rate and enhanced visual quality of the stego image. An improved encrypted-domain-based reversible data hiding algorithm that embeds two binary bits in each gray pixel of the original cover image with minimum distortion of stego-pixels is employed in this paper. Highlights of the proposed algorithm are minimum distortion of pixel values, elimination of the underflow and overflow problem, and equivalence of the stego image and cover image with a PSNR of ∞ (for the Lena, Goldhill, and Barbara images). The experimental outcomes reveal that, in terms of average PSNR and embedding rate, for natural images the proposed algorithm performed better than other conventional ones.

  12. Reverse thrust performance of the QCSEE variable pitch turbofan engine

    NASA Technical Reports Server (NTRS)

    Samanich, N. E.; Reemsnyder, D. C.; Blodmer, H. E.

    1980-01-01

    Results of steady state reverse and forward to reverse thrust transient performance tests are presented. The original quiet, clean, short haul, experimental engine four segment variable fan nozzle was retested in reverse and compared with a continuous, 30 deg half angle conical exlet. Data indicated that the significantly more stable, higher pressure recovery flow with the fixed 30 deg exlet resulted in lower engine vibrations, lower fan blade stress, and approximately a 20 percent improvement in reverse thrust. Objective reverse thrust of 35 percent of takeoff thrust was reached. Thrust response of less than 1.5 sec was achieved for the approach and the takeoff to reverse thrust transients.

  13. MoCha: Molecular Characterization of Unknown Pathways.

    PubMed

    Lobo, Daniel; Hammelman, Jennifer; Levin, Michael

    2016-04-01

    Automated methods for the reverse-engineering of complex regulatory networks are paving the way for the inference of mechanistic comprehensive models directly from experimental data. These novel methods can infer not only the relations and parameters of the known molecules defined in their input datasets, but also unknown components and pathways identified as necessary by the automated algorithms. Identifying the molecular nature of these unknown components is a crucial step for making testable predictions and experimentally validating the models, yet no specific and efficient tools exist to aid in this process. To this end, we present here MoCha (Molecular Characterization), a tool optimized for the search of unknown proteins and their pathways from a given set of known interacting proteins. MoCha uses the comprehensive dataset of protein-protein interactions provided by the STRING database, which currently includes more than a billion interactions from over 2,000 organisms. MoCha is highly optimized, performing typical searches within seconds. We demonstrate the use of MoCha with the characterization of unknown components from reverse-engineered models from the literature. MoCha is useful for working on network models by hand or as a downstream step of a model inference engine workflow and represents a valuable and efficient tool for the characterization of unknown pathways using known data from thousands of organisms. MoCha and its source code are freely available online under the GPLv3 license.

  14. 14 CFR 23.933 - Reversing systems.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... analysis and testing completed by the engine and propeller manufacturers. [Doc. No. 26344, 58 FR 18971, Apr... only must be designed so that, during any reversal in flight, the engine will produce no more than... engine from producing more than idle thrust when the reversing system malfunctions; except that it may...

  15. 14 CFR 23.933 - Reversing systems.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... analysis and testing completed by the engine and propeller manufacturers. [Doc. No. 26344, 58 FR 18971, Apr... only must be designed so that, during any reversal in flight, the engine will produce no more than... engine from producing more than idle thrust when the reversing system malfunctions; except that it may...

  16. 14 CFR 23.933 - Reversing systems.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... analysis and testing completed by the engine and propeller manufacturers. [Doc. No. 26344, 58 FR 18971, Apr... only must be designed so that, during any reversal in flight, the engine will produce no more than... engine from producing more than idle thrust when the reversing system malfunctions; except that it may...

  17. 14 CFR 23.933 - Reversing systems.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... analysis and testing completed by the engine and propeller manufacturers. [Doc. No. 26344, 58 FR 18971, Apr... only must be designed so that, during any reversal in flight, the engine will produce no more than... engine from producing more than idle thrust when the reversing system malfunctions; except that it may...

  18. 14 CFR 23.933 - Reversing systems.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... analysis and testing completed by the engine and propeller manufacturers. [Doc. No. 26344, 58 FR 18971, Apr... only must be designed so that, during any reversal in flight, the engine will produce no more than... engine from producing more than idle thrust when the reversing system malfunctions; except that it may...

  19. Versatile digital micromirror device-based method for the recording of multilevel optical diffractive elements in photosensitive chalcogenide layers (AMTIR-1).

    PubMed

    Joerg, Alexandre; Vignaux, Mael; Lumeau, Julien

    2016-08-01

    A new alternative and versatile method for the production of diffractive optical elements (DOEs) with up to four phase levels in AMTIR-1 (Ge33As12Se55) layers is demonstrated. The developed method proposes the use of the photosensitive properties of the layers and a specific in situ optical monitoring coupled with a reverse engineering algorithm to control the trigger points of the writing of the different diffractive patterns. Examples of various volume DOEs are presented.

  20. Improving the quantum cost of reversible Boolean functions using reorder algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Taghreed; Younes, Ahmed; Elsayed, Ashraf

    2018-05-01

    This paper introduces a novel algorithm to synthesize low-cost reversible circuits for any Boolean function with n inputs represented as a Positive Polarity Reed-Muller expansion. The proposed algorithm applies predefined rules to reorder the terms in the function so as to minimize the repeated calculation of common parts of the Boolean function and thereby decrease the quantum cost of the reversible circuit. The paper achieves a decrease in the quantum cost and/or the circuit length, on average, when compared with relevant work in the literature.

  1. Edge Pushing is Equivalent to Vertex Elimination for Computing Hessians

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Mu; Pothen, Alex; Hovland, Paul

    We prove the equivalence of two different Hessian evaluation algorithms in AD. The first is the Edge Pushing algorithm of Gower and Mello, which may be viewed as a second order Reverse mode algorithm for computing the Hessian. In earlier work, we have derived the Edge Pushing algorithm by exploiting a Reverse mode invariant based on the concept of live variables in compiler theory. The second algorithm is based on eliminating vertices in a computational graph of the gradient, in which intermediate variables are successively eliminated from the graph, and the weights of the edges are updated suitably. We prove that if the vertices are eliminated in a reverse topological order while preserving symmetry in the computational graph of the gradient, then the Vertex Elimination algorithm and the Edge Pushing algorithm perform identical computations. In this sense, the two algorithms are equivalent. This insight that unifies two seemingly disparate approaches to Hessian computations could lead to improved algorithms and implementations for computing Hessians.

  2. A New Efficient Algorithm for the All Sorting Reversals Problem with No Bad Components.

    PubMed

    Wang, Biing-Feng

    2016-01-01

    The problem of finding all reversals that take a permutation one step closer to a target permutation is called the all sorting reversals problem (the ASR problem). For this problem, Siepel had an O(n^3)-time algorithm. Most complications of his algorithm stem from some peculiar structures called bad components. Since bad components are very rare in both real and simulated data, it is practical to study the ASR problem with no bad components. For the ASR problem with no bad components, Swenson et al. gave an O(n^2)-time algorithm. Very recently, Swenson found that their algorithm does not always work. In this paper, a new algorithm is presented for the ASR problem with no bad components. The time complexity is O(n^2) in the worst case and is linear in the size of the input and output in practice.

  3. A novel image encryption algorithm using chaos and reversible cellular automata

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luan, Dapeng

    2013-11-01

    In this paper, a novel image encryption scheme is proposed based on reversible cellular automata (RCA) combined with chaos. In this algorithm, an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata are used. We split each pixel of the image into units of 4 bits, then adopt a pseudorandom key stream generated by the intertwining logistic map to permute these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are applied and iterated for many rounds to achieve diffusion at the bit level, in which we only consider the higher 4 bits of a pixel because the higher 4 bits carry almost all the information of an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and shows good performance against common attacks such as differential and statistical attacks. This algorithm belongs to the class of symmetric systems.
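
    A simplified sketch of the confusion stage is shown below: a plain logistic map stands in for the intertwining logistic map, its keystream drives a permutation of 4-bit units, and the reversible-cellular-automata diffusion stage is omitted; parameters are illustrative.

    ```python
    # Simplified sketch of a chaos-driven confusion stage: a logistic-map keystream
    # (standing in for the intertwining logistic map) permutes 4-bit units of an
    # image. The reversible-cellular-automata diffusion stage is not shown.
    import numpy as np

    def logistic_keystream(x0, r, n, burn_in=1000):
        x, seq = x0, np.empty(n)
        for i in range(n + burn_in):
            x = r * x * (1.0 - x)
            if i >= burn_in:
                seq[i - burn_in] = x
        return seq

    def permute_nibbles(image, x0=0.3701, r=3.99):
        pixels = image.astype(np.uint8).ravel()
        high, low = pixels >> 4, pixels & 0x0F               # split each pixel into 4-bit units
        units = np.concatenate([high, low])
        order = np.argsort(logistic_keystream(x0, r, units.size))  # key-driven permutation
        scrambled = units[order]
        h, l = np.split(scrambled, 2)
        return ((h << 4) | l).reshape(image.shape), order

    def unpermute_nibbles(cipher, order):
        pixels = cipher.astype(np.uint8).ravel()
        high, low = pixels >> 4, pixels & 0x0F
        scrambled = np.concatenate([high, low])
        units = np.empty_like(scrambled)
        units[order] = scrambled                             # invert the permutation
        h, l = np.split(units, 2)
        return ((h << 4) | l).reshape(cipher.shape)
    ```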

  4. On the estimation algorithm used in adaptive performance optimization of turbofan engines

    NASA Technical Reports Server (NTRS)

    Espana, Martin D.; Gilyard, Glenn B.

    1993-01-01

    The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor has a necessary impact on the overall performance seeking control scheme exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases over the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An equivalence relation between the biases and engine deviation parameters stems from an observability study; therefore, it is undecided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.

  5. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.

    2010-01-15

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach consists of two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  6. 14 CFR 25.933 - Reversing systems.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reversal in flight the engine will produce no more than flight idle thrust. In addition, it must be shown... kind of failure is extremely remote. (3) Each system must have means to prevent the engine from... alone, under the most critical reversing condition expected in operation. (b) For propeller reversing...

  7. 14 CFR 25.933 - Reversing systems.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reversal in flight the engine will produce no more than flight idle thrust. In addition, it must be shown... kind of failure is extremely remote. (3) Each system must have means to prevent the engine from... alone, under the most critical reversing condition expected in operation. (b) For propeller reversing...

  8. 14 CFR 25.933 - Reversing systems.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... reversal in flight the engine will produce no more than flight idle thrust. In addition, it must be shown... kind of failure is extremely remote. (3) Each system must have means to prevent the engine from... alone, under the most critical reversing condition expected in operation. (b) For propeller reversing...

  9. Two-Microphone Spatial Filtering Improves Speech Reception for Cochlear-Implant Users in Reverberant Conditions With Multiple Noise Sources

    PubMed Central

    2014-01-01

    This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772
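
    The core phase-difference idea can be sketched as a time-frequency mask; the snippet below assumes the target arrives with near-zero inter-microphone delay and uses illustrative spacing, sampling-rate, and threshold values, so it is a simplification of the algorithm evaluated in the study.

    ```python
    # Hedged sketch of two-microphone phase-difference filtering: attenuate
    # time-frequency bins whose inter-microphone phase implies an off-axis source.
    # Mic spacing, sampling rate, and threshold are illustrative values.
    import numpy as np
    from scipy.signal import stft, istft

    fs, d, c = 16000, 0.01, 343.0               # sample rate, 1 cm spacing, speed of sound

    def phase_filter(front, rear, max_delay_frac=0.25):
        f, t, X1 = stft(front, fs=fs, nperseg=512)
        _, _, X2 = stft(rear, fs=fs, nperseg=512)
        phase_diff = np.angle(X1 * np.conj(X2))
        # Assume the target arrives with near-zero inter-mic delay; allow a fraction
        # of the maximum physically possible delay d/c before attenuating a bin.
        max_phase = 2 * np.pi * f[:, None] * (d / c)
        mask = (np.abs(phase_diff) <= max_delay_frac * max_phase + 1e-6).astype(float)
        mask = 0.1 + 0.9 * mask                 # floor the mask to limit musical-noise artifacts
        _, y = istft(X1 * mask, fs=fs, nperseg=512)
        return y
    ```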

  10. A community effort to assess and improve drug sensitivity prediction algorithms

    PubMed Central

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2015-01-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods. PMID:24880487

  11. A community effort to assess and improve drug sensitivity prediction algorithms.

    PubMed

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2014-12-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods.

  12. A Novel and Simple Spike Sorting Implementation.

    PubMed

    Petrantonakis, Panagiotis C; Poirazi, Panayiota

    2017-04-01

    Monitoring the activity of multiple, individual neurons that fire spikes in the vicinity of an electrode, namely performing a Spike Sorting (SS) procedure, comprises one of the most important tools for contemporary neuroscience in order to reverse-engineer the brain. As recording electrode technology rapidly evolves by integrating thousands of electrodes in a confined spatial setting, the algorithms that are used to monitor individual neurons from recorded signals have to become even more reliable and computationally efficient. In this work, we propose a novel framework for the SS approach in which a single-step processing of the raw (unfiltered) extracellular signal is sufficient for both the detection and sorting of the activity of individual neurons. Despite its simplicity, the proposed approach exhibits performance comparable with state-of-the-art approaches, especially for spike detection in noisy signals, and paves the way for a new family of SS algorithms with the potential for multi-recording, fast, on-chip implementations.

  13. Hybrid grammar-based approach to nonlinear dynamical system identification from biological time series

    NASA Astrophysics Data System (ADS)

    McKinney, B. A.; Crowe, J. E., Jr.; Voss, H. U.; Crooke, P. S.; Barney, N.; Moore, J. H.

    2006-02-01

    We introduce a grammar-based hybrid approach to reverse engineering nonlinear ordinary differential equation models from observed time series. This hybrid approach combines a genetic algorithm to search the space of model architectures with a Kalman filter to estimate the model parameters. Domain-specific knowledge is used in a context-free grammar to restrict the search space for the functional form of the target model. We find that the hybrid approach outperforms a pure evolutionary algorithm method, and we observe features in the evolution of the dynamical models that correspond with the emergence of favorable model components. We apply the hybrid method to both artificially generated time series and experimentally observed protein levels from subjects who received the smallpox vaccine. From the observed data, we infer a cytokine protein interaction network for an individual’s response to the smallpox vaccine.
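
    A much-simplified sketch of the structure-search idea only: a genetic algorithm over a grammar-restricted library of candidate right-hand-side terms, with the paper's Kalman-filter parameter estimation replaced by a least-squares fit to finite-difference derivatives. All names and settings are illustrative.

      import numpy as np

      def fit_structure(t, x, term_library, pop=40, gens=60, penalty=1e-3, seed=0):
          """Toy structure search for dx/dt = sum_i c_i * term_i(x).
          Replaces the paper's Kalman-filter parameter estimation with a
          least-squares fit to finite-difference derivatives (a simplification)."""
          rng = np.random.default_rng(seed)
          dxdt = np.gradient(x, t)                      # crude derivative estimate
          terms = np.column_stack([f(x) for f in term_library])
          n = terms.shape[1]

          def fitness(mask):
              if not mask.any():
                  return np.inf
              A = terms[:, mask]
              coef, *_ = np.linalg.lstsq(A, dxdt, rcond=None)
              resid = np.mean((A @ coef - dxdt) ** 2)
              return resid + penalty * mask.sum()       # parsimony pressure

          popn = rng.random((pop, n)) < 0.5
          for _ in range(gens):
              scores = np.array([fitness(m) for m in popn])
              popn = popn[np.argsort(scores)]
              children = []
              while len(children) < pop // 2:
                  a, b = popn[rng.integers(0, pop // 2, size=2)]
                  cut = rng.integers(1, n)
                  child = np.concatenate([a[:cut], b[cut:]])
                  flip = rng.random(n) < 0.05           # mutation
                  children.append(child ^ flip)
              popn = np.vstack([popn[: pop - len(children)], children])
          best = min(popn, key=fitness)
          return best, fitness(best)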

  14. General Multimechanism Reversible-Irreversible Time-Dependent Constitutive Deformation Model Being Developed

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Arnold, Steven M.

    2001-01-01

    Since most advanced material systems (for example metallic-, polymer-, and ceramic-based systems) being currently researched and evaluated are for high-temperature airframe and propulsion system applications, the required constitutive models must account for both reversible and irreversible time-dependent deformations. Furthermore, since an integral part of continuum-based computational methodologies (be they microscale- or macroscale-based) is an accurate and computationally efficient constitutive model to describe the deformation behavior of the materials of interest, extensive research efforts have been made over the years on the phenomenological representations of constitutive material behavior in the inelastic analysis of structures. From a more recent and comprehensive perspective, the NASA Glenn Research Center in conjunction with the University of Akron has emphasized concurrently addressing three important and related areas: that is, 1) Mathematical formulation; 2) Algorithmic developments for updating (integrating) the external (e.g., stress) and internal state variables; 3) Parameter estimation for characterizing the model. This concurrent perspective to constitutive modeling has enabled the overcoming of the two major obstacles to fully utilizing these sophisticated time-dependent (hereditary) constitutive models in practical engineering analysis. These obstacles are: 1) Lack of efficient and robust integration algorithms; 2) Difficulties associated with characterizing the large number of required material parameters, particularly when many of these parameters lack obvious or direct physical interpretations.

  15. DIC-CAM recipe for reverse engineering

    NASA Astrophysics Data System (ADS)

    Romero-Carrillo, P.; Lopez-Alba, E.; Dorado, R.; Diaz-Garrido, F. A.

    2012-04-01

    Reverse engineering (RE) tries to model and manufacture an object from measurements of a reference object. Modern optical measurement systems and computer aided engineering software have improved reverse engineering procedures. We detail the main RE steps from 3D digitization by Digital Image Correlation to manufacturing. This description is complemented with an application example, which portrays the performance of RE. The differences between original and manufactured objects are less than 2 mm (close to the tool radius).

  16. Flight Measurements of the Effect of a Controllable Thrust Reverser on the Flight Characteristics of a Single-Engine Jet Airplane

    NASA Technical Reports Server (NTRS)

    Anderson, Seth B.; Cooper, George E.; Faye, Alan E., Jr.

    1959-01-01

    A flight investigation was undertaken to determine the effect of a fully controllable thrust reverser on the flight characteristics of a single-engine jet airplane. Tests were made using a cylindrical target-type reverser actuated by a hydraulic cylinder through a "beep-type" cockpit control mounted at the base of the throttle. The thrust reverser was evaluated as an in-flight decelerating device, as a flight path control and airspeed control in landing approach, and as a braking device during the ground roll. Full deflection of the reverser for one reverser configuration resulted in a reverse thrust ratio of as much as 85 percent, which at maximum engine power corresponded to a reversed thrust of 5100 pounds. Use of the reverser in landing approach made possible a wide selection of approach angles, a large reduction in approach speed at steep approach angles, improved control of flight path angle, and more accuracy in hitting a given touchdown point. The use of the reverser as a speed brake at lower airspeeds was compromised by a longitudinal trim change. At the lower airspeeds and higher engine powers there was insufficient elevator power to overcome the nose-down trim change at full reverser deflection.

  17. An improved reversible data hiding algorithm based on modification of prediction errors

    NASA Astrophysics Data System (ADS)

    Jafar, Iyad F.; Hiary, Sawsan A.; Darabkh, Khalid A.

    2014-04-01

    Reversible data hiding algorithms are concerned with the ability to hide data and recover the original digital image upon extraction. This issue is of interest in medical and military imaging applications. One particular class of such algorithms relies on the idea of histogram shifting of prediction errors. In this paper, we propose an improvement over one popular algorithm in this class. The improvement is achieved by employing a different predictor, using more bins in the prediction error histogram, and applying multilevel embedding. The proposed extension shows significant improvement over the original algorithm and its variations.
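
    A minimal sketch of the underlying idea (single-level histogram shifting of left-neighbor prediction errors), not the paper's improved predictor or multilevel embedding; overflow handling and the location map are omitted, and the capacity equals the number of zero prediction errors.

      import numpy as np

      def embed(img, bits):
          """Single-level histogram shifting of left-neighbor prediction errors.
          Assumes no pixel value of 255 outside the first column (overflow
          handling and the location map are omitted for brevity)."""
          x = img.astype(np.int32)
          out = x.copy()
          err = x[:, 1:] - x[:, :-1]          # errors w.r.t. ORIGINAL left neighbours
          k = 0
          for i in range(x.shape[0]):
              for j in range(1, x.shape[1]):
                  e = err[i, j - 1]
                  if e >= 1:
                      out[i, j] += 1          # shift the positive half of the histogram
                  elif e == 0 and k < len(bits):
                      out[i, j] += bits[k]    # peak bin carries one payload bit
                      k += 1
          return out.astype(np.uint8), k

      def extract(stego, nbits):
          y = stego.astype(np.int32)
          rec = y.copy()
          bits = []
          for i in range(y.shape[0]):
              for j in range(1, y.shape[1]):
                  e = rec[i, j] - rec[i, j - 1]   # rec[:, j-1] is already restored
                  if e == 1 and len(bits) < nbits:
                      bits.append(1); rec[i, j] -= 1
                  elif e == 0 and len(bits) < nbits:
                      bits.append(0)
                  elif e >= 1:
                      rec[i, j] -= 1              # undo the shift
          return rec.astype(np.uint8), bits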

  18. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.

  19. An O([Formula: see text]) algorithm for sorting signed genomes by reversals, transpositions, transreversals and block-interchanges.

    PubMed

    Yu, Shuzhi; Hao, Fanchang; Leong, Hon Wai

    2016-02-01

    We consider the problem of sorting signed permutations by reversals, transpositions, transreversals, and block-interchanges. The problem arises in the study of species evolution via large-scale genome rearrangement operations. Recently, Hao et al. gave a 2-approximation scheme called genome sorting by bridges (GSB) for solving this problem. Their result extended and unified the results of (i) He and Chen - a 2-approximation algorithm allowing reversals, transpositions, and block-interchanges (by also allowing transreversals) and (ii) Hartman and Sharan - a 1.5-approximation algorithm allowing reversals, transpositions, and transreversals (by also allowing block-interchanges). The GSB result is based on the introduction of three bridge structures in the breakpoint graph, the L-bridge, T-bridge, and X-bridge, which model a good reversal, transposition/transreversal, and block-interchange, respectively. However, the paper by Hao et al. focused on proving the 2-approximation GSB scheme and only mentioned a straightforward [Formula: see text] algorithm. In this paper, we give an [Formula: see text] algorithm for implementing the GSB scheme. The key idea behind our faster GSB algorithm is to represent cycles in the breakpoint graph by their canonical sequences, which greatly simplifies the search for these bridge structures. We also give some comparison results (running time and computed distances) against the original GSB implementation.

  20. Engineering Encounters: Reverse Engineering

    ERIC Educational Resources Information Center

    McGowan, Veronica Cassone; Ventura, Marcia; Bell, Philip

    2017-01-01

    This column presents ideas and techniques to enhance your science teaching. This month's issue shares information on how students' everyday experiences can support science learning through engineering design. In this article, the authors outline a reverse-engineering model of instruction and describe one example of how it looked in our fifth-grade…

  1. Performance Characteristics of the Reverse Syphilis Screening Algorithm in a Population With a Moderately High Prevalence of Syphilis.

    PubMed

    Rourk, Angela R; Nolte, Frederick S; Litwin, Christine M

    2016-11-01

    With the recent introduction of automated treponemal tests, a new reverse syphilis algorithm has been proposed and now used by many clinical laboratories. We analyzed the impact of instituting the reverse screening syphilis algorithm in a laboratory that serves a geographic area with a moderately high prevalence of syphilis infection. Serum samples sent for syphilis testing were tested using a treponemal enzyme immunoassay (EIA) as the screening assay. EIA reactive samples were tested by rapid plasma reagin (RPR) and titered to end point if reactive. RPR nonreactive samples were analyzed by the Treponema pallidum particle agglutination test (TP-PA). Pertinent medical records were reviewed for false-reactive screens and samples with evidence of past syphilis infection. Among 10,060 patients tested, 502 (5%) were reactive on the initial EIA screen. The RPR was reactive in 150 (1.5%). TP-PA testing determined that 103 (1.0%) were falsely reactive on initial EIA screen. The reverse screening algorithm, however, identified 242 (2.4%) with evidence of latent, secondary, or past syphilis, 21 of whom had no or unknown prior treatment with antibiotics. Despite a 1.0% false-reactive rate, the reverse syphilis algorithm detected 21 patients with possible latent syphilis that may have gone undetected by traditional syphilis screening.
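
    A small sketch of the reverse-sequence decision flow as described in the abstract (treponemal EIA screen, reflex RPR, TP-PA for EIA-reactive/RPR-nonreactive samples); the interpretation strings are illustrative only, not clinical guidance.

      def reverse_syphilis_screen(eia_reactive, rpr_reactive=None, tppa_reactive=None):
          """Reverse-sequence interpretation per the workflow in the abstract:
          treponemal EIA screen -> reflex RPR -> TP-PA for discordant results.
          Labels are illustrative only, not clinical guidance."""
          if not eia_reactive:
              return "nonreactive screen: no serologic evidence of syphilis"
          if rpr_reactive:
              return "EIA+/RPR+: consistent with current or recently treated syphilis"
          if tppa_reactive:
              return "EIA+/RPR-/TP-PA+: possible latent or previously treated syphilis"
          return "EIA+/RPR-/TP-PA-: likely false-reactive EIA screen"

      # Example: the discordant pattern highlighted in the study
      print(reverse_syphilis_screen(True, rpr_reactive=False, tppa_reactive=True))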

  2. An improved clustering algorithm based on reverse learning in intelligent transportation

    NASA Astrophysics Data System (ADS)

    Qiu, Guoqing; Kou, Qianqian; Niu, Ting

    2017-05-01

    With the development of artificial intelligence and data mining technology, big data has gradually come into focus. Clustering is an important method for processing such large data sets. We introduce a reverse learning step into the clustering process of the PAM clustering algorithm to address the limitations of one-pass clustering in unsupervised learning and to increase the diversity of the resulting clusters, thereby improving clustering quality. Algorithm analysis and experimental results show that the algorithm is feasible.

  3. Towards a Rigorous Assessment of Systems Biology Models: The DREAM3 Challenges

    PubMed Central

    Prill, Robert J.; Marbach, Daniel; Saez-Rodriguez, Julio; Sorger, Peter K.; Alexopoulos, Leonidas G.; Xue, Xiaowei; Clarke, Neil D.; Altan-Bonnet, Gregoire; Stolovitzky, Gustavo

    2010-01-01

    Background Systems biology has embraced computational modeling in response to the quantitative nature and increasing scale of contemporary data sets. The onslaught of data is accelerating as molecular profiling technology evolves. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) is a community effort to catalyze discussion about the design, application, and assessment of systems biology models through annual reverse-engineering challenges. Methodology and Principal Findings We describe our assessments of the four challenges associated with the third DREAM conference which came to be known as the DREAM3 challenges: signaling cascade identification, signaling response prediction, gene expression prediction, and the DREAM3 in silico network challenge. The challenges, based on anonymized data sets, tested participants in network inference and prediction of measurements. Forty teams submitted 413 predicted networks and measurement test sets. Overall, a handful of best-performer teams were identified, while a majority of teams made predictions that were equivalent to random. Counterintuitively, combining the predictions of multiple teams (including the weaker teams) can in some cases improve predictive power beyond that of any single method. Conclusions DREAM provides valuable feedback to practitioners of systems biology modeling. Lessons learned from the predictions of the community provide much-needed context for interpreting claims of efficacy of algorithms described in the scientific literature. PMID:20186320

  4. An Evaluation of Active Learning Causal Discovery Methods for Reverse-Engineering Local Causal Pathways of Gene Regulation

    PubMed Central

    Ma, Sisi; Kemmeren, Patrick; Aliferis, Constantin F.; Statnikov, Alexander

    2016-01-01

    Reverse-engineering of causal pathways that implicate diseases and vital cellular functions is a fundamental problem in biomedicine. Discovery of the local causal pathway of a target variable (that consists of its direct causes and direct effects) is essential for effective intervention and can facilitate accurate diagnosis and prognosis. Recent research has provided several active learning methods that can leverage passively observed high-throughput data to draft causal pathways and then refine the inferred relations with a limited number of experiments. The current study provides a comprehensive evaluation of the performance of active learning methods for local causal pathway discovery in real biological data. Specifically, 54 active learning methods/variants from 3 families of algorithms were applied for local causal pathways reconstruction of gene regulation for 5 transcription factors in S. cerevisiae. Four aspects of the methods’ performance were assessed, including adjacency discovery quality, edge orientation accuracy, complete pathway discovery quality, and experimental cost. The results of this study show that some methods provide significant performance benefits over others and therefore should be routinely used for local causal pathway discovery tasks. This study also demonstrates the feasibility of local causal pathway reconstruction in real biological systems with significant quality and low experimental cost. PMID:26939894

  5. Un-Building Blocks: A Model of Reverse Engineering and Applicable Heuristics

    DTIC Science & Technology

    2015-12-01

    [Report excerpt] Epigraph: "The machine does not isolate man from the great problems of nature but plunges him more deeply into them." (Antoine de Saint-Exupery, Wind ...). Abstract fragment: Reverse engineering is the problem-solving activity that ensues when one takes a...

  6. 14 CFR 33.97 - Thrust reversers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Thrust reversers. 33.97 Section 33.97 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.97 Thrust reversers. (a) If the...

  7. 14 CFR 33.97 - Thrust reversers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Thrust reversers. 33.97 Section 33.97 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.97 Thrust reversers. (a) If the...

  8. 14 CFR 33.97 - Thrust reversers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Thrust reversers. 33.97 Section 33.97 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.97 Thrust reversers. (a) If the...

  9. 14 CFR 33.97 - Thrust reversers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.97 Thrust reversers. (a) If the... this subpart must be run with the reverser installed. In complying with this section, the power control... regimes of control operations are incorporated necessitating scheduling of the power-control lever motion...

  10. 14 CFR 33.97 - Thrust reversers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.97 Thrust reversers. (a) If the... this subpart must be run with the reverser installed. In complying with this section, the power control... regimes of control operations are incorporated necessitating scheduling of the power-control lever motion...

  11. A Comparative Study of Optimization Algorithms for Engineering Synthesis.

    DTIC Science & Technology

    1983-03-01

    The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first...

  12. Reverse Core Engine with Thrust Reverser

    NASA Technical Reports Server (NTRS)

    Chandler, Jesse M. (Inventor); Suciu, Gabriel L. (Inventor)

    2017-01-01

    An engine system has a gas generator, a bi-fi wall surrounding at least a portion of the gas generator, a casing surrounding a fan, and the casing having first and second thrust reverser doors which in a deployed position abut each other and the bi-fi wall.

  13. Analysis of Ten Reverse Engineering Tools

    NASA Astrophysics Data System (ADS)

    Koskinen, Jussi; Lehmonen, Tero

    Reverse engineering tools can be used in satisfying the information needs of software maintainers. Especially in case of maintaining large-scale legacy systems tool support is essential. Reverse engineering tools provide various kinds of capabilities to provide the needed information to the tool user. In this paper we analyze the provided capabilities in terms of four aspects: provided data structures, visualization mechanisms, information request specification mechanisms, and navigation features. We provide a compact analysis of ten representative reverse engineering tools for supporting C, C++ or Java: Eclipse Java Development Tools, Wind River Workbench (for C and C++), Understand (for C++), Imagix 4D, Creole, Javadoc, Javasrc, Source Navigator, Doxygen, and HyperSoft. The results of the study supplement the earlier findings in this important area.

  14. Authenticity preservation with histogram-based reversible data hiding and quadtree concepts.

    PubMed

    Huang, Hsiang-Cheh; Fang, Wai-Chi

    2011-01-01

    With the widespread use of identification systems, establishing authenticity with sensors has become an important research issue. Among the schemes for making authenticity verification based on information security possible, reversible data hiding has attracted much attention during the past few years. With its characteristics of reversibility, the scheme is required to fulfill the goals from two aspects. On the one hand, at the encoder, the secret information needs to be embedded into the original image by some algorithms, such that the output image will resemble the input one as much as possible. On the other hand, at the decoder, both the secret information and the original image must be correctly extracted and recovered, and they should be identical to their embedding counterparts. Under the requirement of reversibility, for evaluating the performance of the data hiding algorithm, the output image quality, named imperceptibility, and the number of bits for embedding, called capacity, are the two key factors to assess the effectiveness of the algorithm. In addition, the size of the side information required to make decoding possible should also be evaluated. Here we consider using the characteristics of original images for developing our method with better performance. In this paper, we propose an algorithm that has the ability to provide more capacity than conventional algorithms, with similar output image quality after embedding, and comparable side information produced. Simulation results demonstrate the applicability and better performance of our algorithm.

  15. Predictive minimum description length principle approach to inferring gene regulatory networks.

    PubMed

    Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping

    2011-01-01

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved the precision when compared to the existing MDL algorithm.
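
    A toy sketch of the information-theoretic backbone: pairwise mutual information estimated from binned expression profiles and thresholded to propose edges. The PMDL-based threshold selection and the conditional-mutual-information pruning described in the abstract are replaced here by a fixed, user-chosen threshold.

      import numpy as np

      def mutual_information(x, y, bins=8):
          """Plug-in MI estimate from a 2-D histogram of two expression profiles."""
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      def infer_edges(expr, threshold=0.5, bins=8):
          """expr: genes x timepoints matrix. Returns gene pairs whose MI exceeds
          a fixed threshold (a stand-in for the PMDL-selected threshold)."""
          genes = expr.shape[0]
          edges = []
          for i in range(genes):
              for j in range(i + 1, genes):
                  if mutual_information(expr[i], expr[j], bins) > threshold:
                      edges.append((i, j))
          return edges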

  16. Contact mechanics of reverse engineered distal humeral hemiarthroplasty implants.

    PubMed

    Willing, Ryan; King, Graham J W; Johnson, James A

    2015-11-26

    Erosion of articular cartilage is a concern following distal humeral hemiarthroplasty, because native cartilage surfaces are placed in contact with stiff metallic implant components, which causes decreases in contact area and increases in contact stresses. Recently, reverse engineered implants have been proposed which are intended to promote more natural contact mechanics by reproducing the native bone or cartilage shape. In this study, finite element modeling is used in order to calculate changes in cartilage contact areas and stresses following distal humeral hemiarthroplasty with commercially available and reverse engineered implant designs. At the ulna, decreases in contact area were -34±3% (p=0.002), -27±1% (p<0.001) and -14±2% (p=0.008) using commercially available, bone reverse engineered and cartilage reverse engineered designs, respectively. Peak contact stresses increased by 461±57% (p=0.008), 387±127% (p=0.229) and 165±16% (p=0.003). At the radius, decreases in contact area were -21±3% (p=0.013), -13±2% (p<0.006) and -6±1% (p=0.020), and peak contact stresses increased by 75±52% (p>0.999), 241±32% (p=0.010) and 61±10% (p=0.021). Between the three different implant designs, the cartilage reverse engineered design yielded the largest contact areas and lowest contact stresses, but was still unable to reproduce the contact mechanics of the native joint. These findings align with a growing body of evidence indicating that although reverse engineered hemiarthroplasty implants can provide small improvements in contact mechanics when compared with commercially available designs, further optimization of shape and material properties is required in order to reproduce native joint contact mechanics.

  17. Analysis of temporal gene expression profiles: clustering by simulated annealing and determining the optimal number of clusters.

    PubMed

    Lukashin, A V; Fuchs, R

    2001-05-01

    Cluster analysis of genome-wide expression data from DNA microarray hybridization studies has proved to be a useful tool for identifying biologically relevant groupings of genes and samples. In the present paper, we focus on several important issues related to clustering algorithms that have not yet been fully studied. We describe a simple and robust algorithm for the clustering of temporal gene expression profiles that is based on the simulated annealing procedure. In general, this algorithm is guaranteed to eventually find the globally optimal distribution of genes over clusters. We introduce an iterative scheme that serves to evaluate quantitatively the optimal number of clusters for each specific data set. The scheme is based on standard approaches used in regular statistical tests. The basic idea is to organize the search for the optimal number of clusters simultaneously with the optimization of the distribution of genes over clusters. The efficiency of the proposed algorithm has been evaluated by means of a reverse engineering experiment, that is, a situation in which the correct distribution of genes over clusters is known a priori. The employment of this statistically rigorous test has shown that our algorithm places greater than 90% of genes into correct clusters. Finally, the algorithm has been tested on real gene expression data (expression changes during yeast cell cycle) for which the fundamental patterns of gene expression and the assignment of genes to clusters are well understood from numerous previous studies.
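
    A minimal simulated-annealing clustering sketch in the spirit of the abstract: the state is the assignment of profiles to k clusters, a move reassigns a single profile, and the cost is the total within-cluster squared distance. The paper's iterative scheme for choosing the optimal number of clusters is not shown, and all settings are illustrative.

      import numpy as np

      def sa_cluster(profiles, k, steps=20000, t0=1.0, cooling=0.9995, seed=0):
          """Simulated-annealing assignment of expression profiles to k clusters.
          Cost = total squared distance of each profile to its cluster centroid."""
          rng = np.random.default_rng(seed)
          n = profiles.shape[0]
          assign = rng.integers(0, k, size=n)

          def cost(a):
              total = 0.0
              for c in range(k):
                  members = profiles[a == c]
                  if len(members):
                      total += np.sum((members - members.mean(axis=0)) ** 2)
              return total

          cur = cost(assign)
          temp = t0
          for _ in range(steps):
              g = rng.integers(n)
              old, new = assign[g], rng.integers(k)
              if new == old:
                  continue
              assign[g] = new
              cand = cost(assign)
              # Metropolis acceptance: always keep improvements, sometimes keep worse moves
              if cand <= cur or rng.random() < np.exp((cur - cand) / temp):
                  cur = cand
              else:
                  assign[g] = old
              temp *= cooling
          return assign, cur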

  18. The 727 airplane target thrust reverser static performance model test for refanned JT8D engines

    NASA Technical Reports Server (NTRS)

    Chow, C. T. P.; Atkey, E. N.

    1974-01-01

    The results of a scale model static performance test of target thrust reverser configurations for the Pratt and Whitney Aircraft JT8D-100 series engine are presented. The objective of the test was to select a series of suitable candidate reverser configurations for the subsequent airplane model wind tunnel ingestion and flight controls tests. Test results indicate that adequate reverse thrust performance with compatible engine airflow match is achievable for the selected configurations. Tapering of the lips results in loss of performance and only minimal flow directivity. Door pressure surveys were conducted on a selected number of lip and fence configurations to obtain data to support the design of the thrust reverser system.

  19. 14 CFR 25.1305 - Powerplant instruments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... reverse pitch, for each reversing propeller. (c) For turbine engine-powered airplanes. In addition to the... required: (1) A gas temperature indicator for each engine. (2) A fuel flowmeter indicator for each engine... operated continuously but that is neither designed for continuous operation nor designed to prevent hazard...

  20. 14 CFR 25.1305 - Powerplant instruments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reverse pitch, for each reversing propeller. (c) For turbine engine-powered airplanes. In addition to the... required: (1) A gas temperature indicator for each engine. (2) A fuel flowmeter indicator for each engine... operated continuously but that is neither designed for continuous operation nor designed to prevent hazard...

  1. 14 CFR 25.1305 - Powerplant instruments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reverse pitch, for each reversing propeller. (c) For turbine engine-powered airplanes. In addition to the... required: (1) A gas temperature indicator for each engine. (2) A fuel flowmeter indicator for each engine... operated continuously but that is neither designed for continuous operation nor designed to prevent hazard...

  2. 14 CFR 25.1305 - Powerplant instruments.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... reverse pitch, for each reversing propeller. (c) For turbine engine-powered airplanes. In addition to the... required: (1) A gas temperature indicator for each engine. (2) A fuel flowmeter indicator for each engine... operated continuously but that is neither designed for continuous operation nor designed to prevent hazard...

  3. 14 CFR 25.1305 - Powerplant instruments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... reverse pitch, for each reversing propeller. (c) For turbine engine-powered airplanes. In addition to the... required: (1) A gas temperature indicator for each engine. (2) A fuel flowmeter indicator for each engine... operated continuously but that is neither designed for continuous operation nor designed to prevent hazard...

  4. Reverse engineering and identification in systems biology: strategies, perspectives and challenges.

    PubMed

    Villaverde, Alejandro F; Banga, Julio R

    2014-02-06

    The interplay of mathematical modelling with experiments is one of the central elements in systems biology. The aim of reverse engineering is to infer, analyse and understand, through this interplay, the functional and regulatory mechanisms of biological systems. Reverse engineering is not exclusive of systems biology and has been studied in different areas, such as inverse problem theory, machine learning, nonlinear physics, (bio)chemical kinetics, control theory and optimization, among others. However, it seems that many of these areas have been relatively closed to outsiders. In this contribution, we aim to compare and highlight the different perspectives and contributions from these fields, with emphasis on two key questions: (i) why are reverse engineering problems so hard to solve, and (ii) what methods are available for the particular problems arising from systems biology?

  5. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    PubMed

    Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  6. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  7. Project Whitefeather

    NASA Astrophysics Data System (ADS)

    Four primary tasks have been carried out in this program. Upon request of LANL, the Eloranta paper was reviewed. It was determined that the correlation solution presented was too computationally complex to execute in the allocated 1 second update time. An alternative algorithm approach was undertaken using a simulation baseline. A simulation was developed and applied to generate synthetic LIDAR data from randomized aerosol patches drifting with the wind. Algorithms have been designed and implemented in the simulation to reduce the data and apply it to obtain wind estimates. A substantial effort was completed in reverse engineering the EVIEW data format structure of the supplied data. Finally, the collected LIDAR data has been examined to obtain an assessment of the prospects for successful wind estimation. Unfortunately, the data examination has not shown good prospects for a successful outcome. It is recommended that future data be taken with the procedure previously outlined. Hercules believes that if lidar data is collected using this procedure, wind estimation from the collected data will be as successful as it was in simulation.

  8. Hamilton Standard Q-fan demonstrator dynamic pitch change test program, volume 1

    NASA Technical Reports Server (NTRS)

    Demers, W. J.; Nelson, D. J.; Wainauski, H. S.

    1975-01-01

    Tests of a full scale variable pitch fan engine to obtain data on the structural characteristics, response times, and fan/core engine compatibility during transient changes in blade angle, fan rpm, and engine power are reported. Steady state reverse thrust tests with a take off nozzle configuration were also conducted. The 1.4 meter diameter, 13 bladed controllable pitch fan was driven by a T55 L 11A engine with power and blade angle coordinated by a digital computer. The tests demonstrated an ability to change from full forward thrust to reverse thrust in less than one (1) second. Reverse thrust was effected through feather and through flat pitch; structural characteristics and engine/fan compatibility were within satisfactory limits.

  9. Reverse engineering and identification in systems biology: strategies, perspectives and challenges

    PubMed Central

    Villaverde, Alejandro F.; Banga, Julio R.

    2014-01-01

    The interplay of mathematical modelling with experiments is one of the central elements in systems biology. The aim of reverse engineering is to infer, analyse and understand, through this interplay, the functional and regulatory mechanisms of biological systems. Reverse engineering is not exclusive of systems biology and has been studied in different areas, such as inverse problem theory, machine learning, nonlinear physics, (bio)chemical kinetics, control theory and optimization, among others. However, it seems that many of these areas have been relatively closed to outsiders. In this contribution, we aim to compare and highlight the different perspectives and contributions from these fields, with emphasis on two key questions: (i) why are reverse engineering problems so hard to solve, and (ii) what methods are available for the particular problems arising from systems biology? PMID:24307566

  10. Time reversal acoustics for small targets using decomposition of the time reversal operator

    NASA Astrophysics Data System (ADS)

    Simko, Peter C.

    The method of time reversal acoustics has been the focus of considerable interest over the last twenty years. Time reversal imaging methods have made consistent progress as effective methods for signal processing since the initial demonstration that physical time reversal methods can be used to form convergent wave fields on a localized target, even under conditions of severe multipathing. Computational time reversal methods rely on the properties of the so-called 'time reversal operator' in order to extract information about the target medium. Applications for which time reversal imaging has previously been explored include medical imaging, non-destructive evaluation, and mine detection. Emphasis in this paper will fall on two topics within the general field of computational time reversal imaging. First, we will examine previous work on developing a time reversal imaging algorithm based on the MUltiple SIgnal Classification (MUSIC) algorithm. MUSIC, though computationally very intensive, has demonstrated early promise in simulations using array-based methods applicable to true volumetric (three-dimensional) imaging. We will provide a simple algorithm through which the rank of the time reversal operator subspaces can be properly quantified so that the rank of the associated null subspace can be accurately estimated near the central pulse wavelength in broadband imaging. Second, we will focus on the scattering from small acoustically rigid two dimensional cylindrical targets of elliptical cross section. Analysis of the time reversal operator eigenmodes has been well-studied for symmetric response matrices associated with symmetric systems of scattering targets. We will expand these previous results to include more general scattering systems leading to asymmetric response matrices, for which the analytical complexity increases but the physical interpretation of the time reversal operator remains unchanged. For asymmetric responses, the qualitative properties of the time reversal operator eigenmodes remain consistent with those obtained from the more tightly constrained systems.

  11. Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, S.; Lindtjorn, O.

    2017-08-15

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.

  12. Designing a multistage supply chain in cross-stage reverse logistics environments: application of particle swarm optimization algorithms.

    PubMed

    Chiang, Tzu-An; Che, Z H; Cui, Zhihua

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V(Max) method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.

  13. Designing a Multistage Supply Chain in Cross-Stage Reverse Logistics Environments: Application of Particle Swarm Optimization Algorithms

    PubMed Central

    Chiang, Tzu-An; Che, Z. H.

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V Max method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did. PMID:24772026
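
    A generic sketch of the inertia-weight PSO variant (the PSOA_IWM update rule) applied to an arbitrary objective; the supply-chain cost model and the other PSO variants are not reproduced, and all parameter values and the toy objective are illustrative.

      import numpy as np

      def pso_iwm(objective, bounds, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Inertia-weight particle swarm optimization (minimization).
          bounds: sequence of (low, high) pairs, one per dimension."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          dim = len(lo)
          x = rng.uniform(lo, hi, size=(particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
          g = pbest[np.argmin(pbest_val)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia-weight update
              x = np.clip(x + v, lo, hi)
              vals = np.array([objective(p) for p in x])
              better = vals < pbest_val
              pbest[better], pbest_val[better] = x[better], vals[better]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, pbest_val.min()

      # Example on a toy quadratic objective
      best, val = pso_iwm(lambda p: np.sum((p - 3) ** 2), bounds=[(-10, 10)] * 4)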

  14. Static Performance of a Wing-Mounted Thrust Reverser Concept

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.; Yetter, Jeffrey A.

    1998-01-01

    An experimental investigation was conducted in the Jet-Exit Test Facility at NASA Langley Research Center to study the static aerodynamic performance of a wing-mounted thrust reverser concept applicable to subsonic transport aircraft. This innovative engine-powered thrust reverser system is designed to utilize wing-mounted flow deflectors to produce aircraft deceleration forces. Testing was conducted using a 7.9%-scale exhaust system model with a fan-to-core bypass ratio of approximately 9.0, a supercritical left-hand wing section attached via a pylon, and wing-mounted flow deflectors attached to the wing section. Geometric variations of key design parameters investigated for the wing-mounted thrust reverser concept included flow deflector angle and chord length, deflector edge fences, and the yaw mount angle of the deflector system (normal to the engine centerline or parallel to the wing trailing edge). All tests were conducted with no external flow and high pressure air was used to simulate core and fan engine exhaust flows. Test results indicate that the wing-mounted thrust reverser concept can achieve overall thrust reverser effectiveness levels competitive with (parallel mount), or better than (normal mount) a conventional cascade thrust reverser system. By removing the thrust reverser system from the nacelle, the wing-mounted concept offers the nacelle designer more options for improving nacelle aerodynamics and propulsion-airframe integration, simplifying nacelle structural designs, reducing nacelle weight, and improving engine maintenance access.

  15. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Chengguang; Drinkwater, Bruce W.

    In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
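
    For reference, a minimal delay-and-sum sketch of the total focusing method applied to full-matrix-capture array data; the array geometry, variable names, and the use of raw (rather than enveloped analytic) signals are simplifying assumptions.

      import numpy as np

      def tfm_image(fmc, elem_x, fs, c, xs, zs):
          """Total focusing method on full-matrix-capture data.
          fmc[tx, rx, t]: time traces for every transmit/receive element pair.
          elem_x: element x-positions (array at z = 0); xs, zs: image grid axes."""
          img = np.zeros((len(zs), len(xs)))
          t_axis = np.arange(fmc.shape[2]) / fs
          for iz, z in enumerate(zs):
              for ix, x in enumerate(xs):
                  # one-way times of flight from each element to the focal point (x, z)
                  tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c
                  acc = 0.0
                  for tx in range(len(elem_x)):
                      for rx in range(len(elem_x)):
                          acc += np.interp(tof[tx] + tof[rx], t_axis, fmc[tx, rx])
                  img[iz, ix] = abs(acc)
          return img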

  16. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data can achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control capability. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize prediction coefficients and then apply prediction error expansion to output stego data. Second, in order to reduce the location map bit length, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the location map bit length, and enhances capacity control capability.
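
    A minimal sketch of prediction-error expansion with shifting for a 1-D audio signal (previous-sample predictor, fixed threshold T); the differential-evolution predictor optimization, histogram shifting refinement, and location map from the paper are not reproduced, and overflow handling is omitted.

      import numpy as np

      def pee_embed(x, bits, T=4):
          """Prediction-error expansion on integer audio samples: errors with |e| < T
          are expanded to 2e + bit, larger errors are shifted by +/-T. The payload is
          zero-padded so every expandable position is used."""
          x = x.astype(np.int64)
          y = x.copy()
          queue = list(bits)
          for i in range(1, len(x)):
              e = x[i] - x[i - 1]            # predictor = ORIGINAL previous sample
              if -T < e < T:
                  b = queue.pop(0) if queue else 0
                  y[i] = x[i - 1] + 2 * e + b
              elif e >= T:
                  y[i] = x[i] + T
              else:
                  y[i] = x[i] - T
          return y

      def pee_extract(y, T=4):
          rec = y.astype(np.int64).copy()
          bits = []
          for i in range(1, len(rec)):
              e = rec[i] - rec[i - 1]        # rec[i-1] has already been restored
              if -2 * T < e < 2 * T:
                  bits.append(int(e % 2))
                  rec[i] = rec[i - 1] + (e - e % 2) // 2
              elif e >= 2 * T:
                  rec[i] -= T
              else:
                  rec[i] += T
          return rec, bits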

  17. Reverse time migration by Krylov subspace reduced order modeling

    NASA Astrophysics Data System (ADS)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the overall time consumption, and the memory allocation required by the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of these factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. Synthetic model results show that the suggested method can decrease the computational cost of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.

  18. PageRank and rank-reversal dependence on the damping factor

    NASA Astrophysics Data System (ADS)

    Son, S.-W.; Christensen, C.; Grassberger, P.; Paczuski, M.

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is in gathering relevant information and in the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d0=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d0.
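
    A minimal power-iteration sketch (not the paper's implementation) showing where the damping factor d enters, followed by a Kendall rank correlation between the rankings at d = 0.85 and d = 0.65; the random test graph is illustrative.

      import numpy as np
      from scipy.stats import kendalltau

      def pagerank(adj, d=0.85, tol=1e-10):
          """Power iteration for PageRank on a column-stochastic transition matrix
          built from adjacency matrix adj (adj[i, j] = 1 for a link i -> j)."""
          n = adj.shape[0]
          out_deg = adj.sum(axis=1)
          # Dangling nodes (no out-links) are spread uniformly over all pages
          M = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
          r = np.full(n, 1.0 / n)
          while True:
              r_new = (1 - d) / n + d * M @ r
              if np.abs(r_new - r).sum() < tol:
                  return r_new
              r = r_new

      # Rank reversal as the damping factor moves away from d0 = 0.85
      adj = (np.random.default_rng(1).random((50, 50)) < 0.1).astype(float)
      tau, _ = kendalltau(pagerank(adj, 0.85), pagerank(adj, 0.65))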

  19. Exact Synthesis of Reversible Circuits Using A* Algorithm

    NASA Astrophysics Data System (ADS)

    Datta, K.; Rathi, G. K.; Sengupta, I.; Rahaman, H.

    2015-06-01

    With the growing emphasis on low-power design methodologies, and the result that theoretical zero power dissipation is possible only if computations are information lossless, design and synthesis of reversible logic circuits have become very important in recent years. Reversible logic circuits are also important in the context of quantum computing, where the basic operations are reversible in nature. Several synthesis methodologies for reversible circuits have been reported. Some of these methods are termed exact, where the motivation is to get the minimum-gate realization for a given reversible function. These methods are computationally very intensive, and are able to synthesize only very small functions. There are other methods based on function transformations or higher-level representations of functions, such as binary decision diagrams or exclusive-or sums of products, which are able to handle much larger circuits without any guarantee of optimality or near-optimality. Design of exact synthesis algorithms is interesting in this context, because they set benchmarks against which other methods can be compared. This paper proposes an exact synthesis approach based on an iterative deepening version of the A* algorithm using the multiple-control Toffoli gate library. Experimental results are presented with comparisons with other exact and some heuristic-based synthesis approaches.
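
    A much-simplified sketch of exact synthesis by iterative deepening over the multiple-control Toffoli (MCT) gate library. It uses plain iterative-deepening depth-first search (i.e., A* with a zero heuristic rather than the informed heuristic of the paper), so it only scales to very small functions; the gate and function representations are illustrative.

      from itertools import combinations

      def mct_gates(n):
          """All multiple-control Toffoli gates on n lines: (target, controls)."""
          gates = []
          for t in range(n):
              others = [q for q in range(n) if q != t]
              for r in range(len(others) + 1):
                  for ctrl in combinations(others, r):
                      gates.append((t, ctrl))
          return gates

      def apply(mapping, gate):
          """Append one gate to a circuit given as an input->output mapping."""
          t, ctrl = gate
          out = []
          for y in mapping:
              if all((y >> c) & 1 for c in ctrl):
                  y ^= 1 << t
              out.append(y)
          return tuple(out)

      def synthesize(spec, n, max_depth=6):
          """Iterative-deepening exact synthesis over the MCT gate library
          (plain IDDFS, i.e. A* with a zero heuristic); returns a minimal circuit."""
          gates = mct_gates(n)
          identity = tuple(range(2 ** n))

          def dfs(current, depth, path):
              if current == spec:
                  return path
              if depth == 0:
                  return None
              for g in gates:
                  found = dfs(apply(current, g), depth - 1, path + [g])
                  if found is not None:
                      return found
              return None

          for depth in range(max_depth + 1):
              circuit = dfs(identity, depth, [])
              if circuit is not None:
                  return circuit
          return None

      # Example: a CNOT-like spec on 2 lines (line 1 flipped when line 0 is set)
      spec = tuple(x ^ ((x & 1) << 1) for x in range(4))
      print(synthesize(spec, 2))   # e.g. [(1, (0,))]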

  20. Reverse Flow Engine Core Having a Ducted Fan with Integrated Secondary Flow Blades

    NASA Technical Reports Server (NTRS)

    Kisska, Michael K. (Inventor); Princen, Norman H. (Inventor); Kuehn, Mark S. (Inventor); Cosentino, Gary B. (Inventor)

    2014-01-01

    Secondary air flow is provided for a ducted fan having a reverse flow turbine engine core driving a fan blisk. The fan blisk incorporates a set of thrust fan blades extending from an outer hub and a set of integral secondary flow blades extending intermediate an inner hub and the outer hub. A nacelle provides an outer flow duct for the thrust fan blades and a secondary flow duct carries flow from the integral secondary flow blades as cooling air for components of the reverse flow turbine engine.

  1. Application of reverse engineering in the medical industry.

    NASA Astrophysics Data System (ADS)

    Kaleev, A. A.; Kashapov, L. N.; Kashapov, N. F.; Kashapov, R. N.

    2017-09-01

    The purpose of this research is to develop, on the basis of existing analogs, a new design of ophthalmologic microsurgical tweezers using reverse engineering techniques. A virtual model was obtained using a three-dimensional scanning system, the Solutionix Rexcan 450 MP. The Geomagic Studio program was used to remove defects and inaccuracies from the obtained parametric model. A prototype of the finished model was made on a Projet 6000 laser stereolithography system. The total time from the reverse engineering procedure to 3D printing of the prototype was 16 hours.

  2. System for Anomaly and Failure Detection (SAFD) system development

    NASA Technical Reports Server (NTRS)

    Oreilly, D.

    1992-01-01

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units; one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognizes the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.

  3. A real time microcomputer implementation of sensor failure detection for turbofan engines

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1989-01-01

    An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real-time on a microprocessor-based controls computer which includes parallel processing and high order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm which resulted in the successful completion of the algorithm engine demonstration, is described.

  4. Expanding Metabolic Engineering Algorithms Using Feasible Space and Shadow Price Constraint Modules

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2014-01-01

    While numerous computational methods have been developed that use genome-scale models to propose mutants for the purpose of metabolic engineering, they generally compare mutants based on a single criterion (e.g., production rate at a mutant’s maximum growth rate). As such, these approaches remain limited in their ability to include multiple complex engineering constraints. To address this shortcoming, we have developed feasible space and shadow price constraint (FaceCon and ShadowCon) modules that can be added to existing mixed integer linear adaptive evolution metabolic engineering algorithms, such as OptKnock and OptORF. These modules allow strain designs to be identified amongst a set of multiple metabolic engineering algorithm solutions that are capable of high chemical production while also satisfying additional design criteria. We describe the various module implementations and their potential applications to the field of metabolic engineering. We then incorporated these modules into the OptORF metabolic engineering algorithm. Using an Escherichia coli genome-scale model (iJO1366), we generated different strain designs for the anaerobic production of ethanol from glucose, thus demonstrating the tractability and potential utility of these modules in metabolic engineering algorithms. PMID:25478320

  5. Theory and Applications of Computational Time-Reversal Imaging

    DTIC Science & Technology

    2007-05-03

    Report excerpt (extraction fragments): experimental data collected by a research team from Carnegie Mellon University illustrate the use of the algorithms developed in the project; early sections of the report cover initial results from the CMU experimental data, including basic time-reversal imaging.

  6. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    NASA Technical Reports Server (NTRS)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
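
    The sketch below illustrates the approach described in the abstract under stated assumptions: a hypothetical second-order calibration polynomial is inverted with Newton-Raphson, seeded by a reverse linear interpolation between the EU limits. The paper's implementation is in FORTRAN; this Python version only mirrors the general idea, and the coefficients and count range are made up for illustration.

      # Sketch: numerically invert a counts-to-EU calibration polynomial with
      # Newton-Raphson, seeded by reverse linear interpolation between the EU
      # limits. Coefficients and count range are hypothetical.
      import numpy as np

      def eu_to_counts(eu, coeffs, count_min=0, count_max=4095, iters=6):
          """coeffs: polynomial c[0] + c[1]*x + ... giving EU as a function of counts."""
          p = np.polynomial.Polynomial(coeffs)
          dp = p.deriv()
          eu_min, eu_max = p(count_min), p(count_max)
          # Reverse linear interpolation between the EU limits -> initial count guess.
          x = count_min + (eu - eu_min) / (eu_max - eu_min) * (count_max - count_min)
          for _ in range(iters):                 # Newton-Raphson refinement
              x -= (p(x) - eu) / dp(x)
          return int(round(min(max(x, count_min), count_max)))

      # Hypothetical second-order calibration: EU = -10 + 0.05*counts + 1e-6*counts**2
      coeffs = [-10.0, 0.05, 1.0e-6]
      counts = eu_to_counts(120.0, coeffs)
      print(counts, np.polynomial.Polynomial(coeffs)(counts))  # counts map back to ~120 EU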

  7. Annealed Importance Sampling Reversible Jump MCMC algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
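
    To make the annealed importance sampling ingredient concrete, the sketch below estimates a ratio of normalizing constants between two one-dimensional unnormalized densities by annealing through geometric intermediates with a random-walk Metropolis kernel. It illustrates only the AIS building block, not the aisRJ algorithm itself; the densities and tuning parameters are hypothetical.

      # Minimal annealed importance sampling (AIS) sketch: estimate Z1/Z0 for
      # two unnormalized 1-D densities by annealing through geometric
      # intermediates with random-walk Metropolis moves. This is only the AIS
      # ingredient, not the aisRJ algorithm of the paper.
      import numpy as np

      rng = np.random.default_rng(0)

      def log_f0(x):                  # unnormalized N(0, 1): Z0 = sqrt(2*pi)
          return -0.5 * x**2

      def log_f1(x):                  # unnormalized N(3, 0.5^2): Z1 = sqrt(2*pi)*0.5
          return -0.5 * ((x - 3.0) / 0.5) ** 2

      def ais_log_weight(n_temps=100, n_mcmc=5, step=0.5):
          betas = np.linspace(0.0, 1.0, n_temps + 1)
          log_fk = lambda x, b: (1.0 - b) * log_f0(x) + b * log_f1(x)
          x = rng.normal(0.0, 1.0)    # exact draw from the beta = 0 distribution
          log_w = 0.0
          for k in range(1, n_temps + 1):
              log_w += log_fk(x, betas[k]) - log_fk(x, betas[k - 1])
              for _ in range(n_mcmc):  # Metropolis moves targeting f_{beta_k}
                  prop = x + step * rng.normal()
                  if np.log(rng.uniform()) < log_fk(prop, betas[k]) - log_fk(x, betas[k]):
                      x = prop
          return log_w

      log_ws = np.array([ais_log_weight() for _ in range(200)])
      ratio = np.exp(log_ws - log_ws.max()).mean() * np.exp(log_ws.max())
      print("estimated Z1/Z0:", ratio, " true value:", 0.5)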

  8. Algorithm Engineering: Concepts and Practice

    NASA Astrophysics Data System (ADS)

    Chimani, Markus; Klein, Karsten

    Over the last years the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: it starts with the design of the algorithm, followed by analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretical performance guarantees, etc. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues such as I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories, shortest path problems and text search, where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.

  9. Reverse and forward engineering of protein pattern formation.

    PubMed

    Kretschmer, Simon; Harrington, Leon; Schwille, Petra

    2018-05-26

    Living systems employ protein pattern formation to regulate important life processes in space and time. Although pattern-forming protein networks have been identified in various prokaryotes and eukaryotes, their systematic experimental characterization is challenging owing to the complex environment of living cells. In turn, cell-free systems are ideally suited for this goal, as they offer defined molecular environments that can be precisely controlled and manipulated. Towards revealing the molecular basis of protein pattern formation, we outline two complementary approaches: the biochemical reverse engineering of reconstituted networks and the de novo design, or forward engineering, of artificial self-organizing systems. We first illustrate the reverse engineering approach by the example of the Escherichia coli Min system, a model system for protein self-organization based on the reversible and energy-dependent interaction of the ATPase MinD and its activating protein MinE with a lipid membrane. By reconstituting MinE mutants impaired in ATPase stimulation, we demonstrate how large-scale Min protein patterns are modulated by MinE activity and concentration. We then provide a perspective on the de novo design of self-organizing protein networks. Tightly integrated reverse and forward engineering approaches will be key to understanding and engineering the intriguing phenomenon of protein pattern formation.This article is part of the theme issue 'Self-organization in cell biology'. © 2018 The Author(s).

  10. A real-time simulation evaluation of an advanced detection, isolation, and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  11. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  12. An algorithm to estimate aircraft cruise black carbon emissions for use in developing a cruise emissions inventory.

    PubMed

    Peck, Jay; Oluwole, Oluwayemisi O; Wong, Hsi-Wu; Miake-Lye, Richard C

    2013-03-01

    To provide accurate input parameters to the large-scale global climate simulation models, an algorithm was developed to estimate the black carbon (BC) mass emission index for engines in the commercial fleet at cruise. Using a high-dimensional model representation (HDMR) global sensitivity analysis, relevant engine specification/operation parameters were ranked, and the most important parameters were selected. Simple algebraic formulas were then constructed based on those important parameters. The algorithm takes the cruise power (alternatively, fuel flow rate), altitude, and Mach number as inputs, and calculates BC emission index for a given engine/airframe combination using the engine property parameters, such as the smoke number, available in the International Civil Aviation Organization (ICAO) engine certification databank. The algorithm can be interfaced with state-of-the-art aircraft emissions inventory development tools, and will greatly improve the global climate simulations that currently use a single fleet average value for all airplanes. An algorithm to estimate the cruise condition black carbon emission index for commercial aircraft engines was developed. Using the ICAO certification data, the algorithm can evaluate the black carbon emission at given cruise altitude and speed.

  13. Efficient Green's Function Reaction Dynamics (GFRD) simulations for diffusion-limited, reversible reactions

    NASA Astrophysics Data System (ADS)

    Bashardanesh, Zahedeh; Lötstedt, Per

    2018-03-01

    In diffusion controlled reversible bimolecular reactions in three dimensions, a dissociation step is typically followed by multiple, rapid re-association steps slowing down the simulations of such systems. In order to improve the efficiency, we first derive an exact Green's function describing the rate at which an isolated pair of particles undergoing reversible bimolecular reactions and unimolecular decay separates beyond an arbitrarily chosen distance. Then the Green's function is used in an algorithm for particle-based stochastic reaction-diffusion simulations for prediction of the dynamics of biochemical networks. The accuracy and efficiency of the algorithm are evaluated using a reversible reaction and a push-pull chemical network. The computational work is independent of the rates of the re-associations.

  14. A Reversible Logical Circuit Synthesis Algorithm Based on Decomposition of Cycle Representations of Permutations

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Li, Zhiqiang; Zhang, Gaoman; Pan, Suhan; Zhang, Wei

    2018-05-01

    A reversible function is isomorphic to a permutation, and an arbitrary permutation can be represented by a series of cycles. A new synthesis algorithm for 3-qubit reversible circuits is presented. It consists of two parts: the first part uses the Number of Different Bits (NDB) of the reversible function to decide whether a NOT gate should be added to decrease the Hamming distance between the input and output vectors; the second part exploits properties of the cycle representation of permutations, decomposing the cycles to bring the permutation closer to the identity permutation until it finally becomes the identity, realized using totally controlled Toffoli gates with positive and negative controls.
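
    Two ingredients of such a cycle-based synthesis approach are easy to make concrete: extracting the cycle representation of the permutation defined by a reversible function's truth table, and counting the total number of different bits between inputs and outputs. The sketch below shows both for a 3-qubit example; the gate-selection logic of the paper is not reproduced.

      # Sketch of two ingredients of cycle-based reversible-circuit synthesis:
      # the cycle representation of a permutation (a reversible truth table)
      # and the total number of different bits between inputs and outputs.
      def cycles(perm):
          """Return the cycle representation of a permutation given as a list."""
          seen, result = set(), []
          for start in range(len(perm)):
              if start in seen:
                  continue
              cycle, i = [], start
              while i not in seen:
                  seen.add(i)
                  cycle.append(i)
                  i = perm[i]
              if len(cycle) > 1:      # fixed points are omitted
                  result.append(cycle)
          return result

      def total_different_bits(perm):
          """Sum of Hamming distances between each input index and its image."""
          return sum(bin(i ^ perm[i]).count("1") for i in range(len(perm)))

      # Example: a 3-qubit reversible function as a permutation of 0..7
      # (the permutation realized by a Toffoli gate with target on the low bit).
      toffoli = [0, 1, 2, 3, 4, 5, 7, 6]
      print(cycles(toffoli))                 # [[6, 7]]
      print(total_different_bits(toffoli))   # 2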

  15. Dynamics of high-bypass-engine thrust reversal using a variable-pitch fan

    NASA Technical Reports Server (NTRS)

    Schaefer, J. W.; Sagerser, D. R.; Stakolich, E. G.

    1977-01-01

    The test program demonstrated that successful and rapid forward-to-reverse-thrust transients can be performed without any significant engine operational limitations for fan blade pitch changes through either feather pitch or flat pitch. For through-feather-pitch operation with a flight inlet, fan stall problems were encountered, and a fan blade overshoot technique was used to establish reverse thrust.

  16. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    Results are presented from the evaluation of the performance seeking control (PSC) optimization algorithm developed by Smith et al. (1990) for F-15 aircraft, which optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. Comparisons are presented between the load cell measurements, PSC onboard model thrust calculations, and posttest state variable model computations. Actual performance improvements using the PSC algorithm are presented for its various modes. The results of using the PSC algorithm are compared with similar test case results using the HIDEC algorithm.

  17. Building of Reusable Reverse Logistics Model and its Optimization Considering the Decision of Backorder or Next Arrival of Goods

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu; Lee, Hee-Hyol

    This paper deals with building a reusable reverse logistics model that considers the decision between a backorder and the next arrival of goods. An optimization method is proposed that minimizes the transportation cost and the volume of backorders or next arrivals of goods arising from Just-in-Time delivery at the final delivery stage between the manufacturer and the processing center. Using a priority-based genetic algorithm and a hybrid genetic algorithm, sub-optimal delivery routes are determined. Based on a case study of a distilling and sales company in Busan, Korea, a new reusable reverse logistics model for empty bottles is built and the effectiveness of the proposed method is verified.

  18. Reverse engineering biomolecular systems using -omic data: challenges, progress and opportunities.

    PubMed

    Quo, Chang F; Kaddi, Chanchala; Phan, John H; Zollanvari, Amin; Xu, Mingqing; Wang, May D; Alterovitz, Gil

    2012-07-01

    Recent advances in high-throughput biotechnologies have led to rapidly growing research interest in the reverse engineering of biomolecular systems (REBMS). 'Data-driven' approaches, i.e. data mining, can be used to extract patterns from large volumes of biochemical data at molecular-level resolution, while 'design-driven' approaches, i.e. systems modeling, can be used to simulate emergent system properties. Consequently, both data- and design-driven approaches applied to -omic data may lead to novel insights in reverse engineering biological systems that could not be expected before using low-throughput platforms. However, there exist several challenges in this fast-growing field of reverse engineering biomolecular systems: (i) to integrate heterogeneous biochemical data for data mining, (ii) to combine top-down and bottom-up approaches for systems modeling and (iii) to validate system models experimentally. In addition to reviewing progress made by the community and opportunities encountered in addressing these challenges, we explore the emerging field of synthetic biology, which is an exciting approach to validate and analyze theoretical system models directly through experimental synthesis, i.e. analysis-by-synthesis. The ultimate goal is to address the present and future challenges in reverse engineering biomolecular systems (REBMS) using an integrated workflow of data mining, systems modeling and synthetic biology.

  19. Computational metabolic engineering strategies for growth-coupled biofuel production by Synechocystis.

    PubMed

    Shabestary, Kiyan; Hudson, Elton P

    2016-12-01

    Chemical and fuel production by photosynthetic cyanobacteria is a promising technology but to date has not reached competitive rates and titers. Genome-scale metabolic modeling can reveal limitations in cyanobacteria metabolism and guide genetic engineering strategies to increase chemical production. Here, we used constraint-based modeling and optimization algorithms on a genome-scale model of Synechocystis PCC6803 to find ways to improve productivity of fermentative, fatty-acid, and terpene-derived fuels. OptGene and MOMA were used to find heuristics for knockout strategies that could increase biofuel productivity. OptKnock was used to find a set of knockouts that led to coupling between biofuel and growth. Our results show that high productivity of fermentation or reversed beta-oxidation derived alcohols such as 1-butanol requires elimination of NADH sinks, while terpenes and fatty-acid based fuels require creating imbalances in intracellular ATP and NADPH production and consumption. The FBA-predicted productivities of these fuels are at least 10-fold higher than those reported so far in the literature. We also discuss the physiological and practical feasibility of implementing these knockouts. This work gives insight into how cyanobacteria could be engineered to reach competitive biofuel productivities.
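
    To give a sense of the constraint-based step that OptKnock-style methods build upon, the sketch below runs flux balance analysis on a deliberately tiny, hypothetical four-reaction network with scipy's linear programming routine, first maximizing a "biomass" flux and then querying product secretion at a required growth rate. It is not the Synechocystis or iJO1366 model.

      # Toy flux balance analysis (FBA) sketch: maximize a "biomass" flux
      # subject to steady-state mass balance S @ v = 0 and flux bounds, via
      # linear programming. The four-reaction network is hypothetical and only
      # illustrates the constraint-based step that OptKnock-style methods extend.
      import numpy as np
      from scipy.optimize import linprog

      # Metabolites: A, B.  Reactions: v0 uptake (-> A), v1 (A -> B),
      # v2 product secretion (B ->), v3 biomass drain (B ->).
      S = np.array([
          [1, -1,  0,  0],   # mass balance on A
          [0,  1, -1, -1],   # mass balance on B
      ])
      bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]  # uptake capped at 10

      c = np.zeros(4)
      c[3] = -1.0                              # linprog minimizes, so maximize v3
      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      mu_max = res.x[3]
      print("maximum biomass flux:", mu_max)   # 10.0

      # Maximum product secretion (v2) while demanding at least 50% of maximum
      # growth: the kind of query that strain-design methods build on.
      c2 = np.zeros(4)
      c2[2] = -1.0
      A_ub = np.array([[0.0, 0.0, 0.0, -1.0]])  # encodes v3 >= 0.5 * mu_max
      b_ub = np.array([-0.5 * mu_max])
      res2 = linprog(c2, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print("max product flux at >=50% growth:", res2.x[2])   # 5.0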

  20. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  1. 28. MESTA STEAM ENGINE, INSTALLED BY THE CORRIGAN, McKINNEY COMPANY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MESTA STEAM ENGINE, INSTALLED BY THE CORRIGAN, McKINNEY COMPANY IN 1916, STILL DRIVES THE 44-INCH REVERSING BLOOMING MILL. THE TWIN TANDEM, COMPOUND CONDENSING, REVERSING STEAM ENGINE HAS A RATED CAPACITY OF 35,000 H.P. IT WAS BUILT BY THE MESTA MACHINE COMPANY OF PITTSBURGH. - Corrigan, McKinney Steel Company, 3100 East Forty-fifth Street, Cleveland, Cuyahoga County, OH

  2. Navigating a ship with a broken compass: evaluating standard algorithms to measure patient safety.

    PubMed

    Hefner, Jennifer L; Huerta, Timothy R; McAlearney, Ann Scheck; Barash, Barbara; Latimer, Tina; Moffatt-Bruce, Susan D

    2017-03-01

    Agency for Healthcare Research and Quality (AHRQ) software applies standardized algorithms to hospital administrative data to identify patient safety indicators (PSIs). The objective of this study was to assess the validity of PSI flags and report reasons for invalid flagging. At a 6-hospital academic medical center, a retrospective analysis was conducted of all PSIs flagged in fiscal year 2014. A multidisciplinary PSI Quality Team reviewed each flagged PSI based on quarterly reports. The positive predictive value (PPV, the percent of clinically validated cases) was calculated for 12 PSI categories. The documentation for each reversed case was reviewed to determine the reasons for PSI reversal. Of 657 PSI flags, 185 were reversed. Seven PSI categories had a PPV below 75%. Four broad categories of reasons for reversal were AHRQ algorithm limitations (38%), coding misinterpretations (45%), present upon admission (10%), and documentation insufficiency (7%). AHRQ algorithm limitations included 2 subcategories: an "incident" was inherent to the procedure, or highly likely (eg, vascular tumor bleed), or an "incident" was nonsignificant, easily controlled, and/or no intervention was needed. These findings support previous research highlighting administrative data problems. Additionally, AHRQ algorithm limitations was an emergent category not considered in previous research. Herein we present potential solutions to address these issues. If, despite poor validity, US policy continues to rely on PSIs for incentive and penalty programs, improvements are needed in the quality of administrative data and the standardized PSI algorithms. These solutions require national motivation, research attention, and dissemination support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
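
    The positive predictive value used in the study is simply the fraction of flagged cases that survive clinical review. The sketch below computes it per category and overall; the per-category counts are hypothetical, and only the study-level totals (657 flagged, 185 reversed) come from the abstract.

      # Sketch of the positive predictive value (PPV) calculation: the fraction
      # of flagged PSI cases that survive clinical review. The per-category
      # counts are hypothetical; only the overall totals come from the abstract.
      def ppv(flagged, reversed_cases):
          validated = flagged - reversed_cases
          return validated / flagged

      categories = {            # category: (flagged, reversed) -- hypothetical counts
          "PSI-03 pressure ulcer": (40, 18),
          "PSI-06 iatrogenic pneumothorax": (55, 9),
          "PSI-15 accidental puncture/laceration": (120, 35),
      }
      for name, (flagged, rev) in categories.items():
          print(f"{name}: PPV = {ppv(flagged, rev):.0%}")

      print(f"overall: PPV = {ppv(657, 185):.0%}")   # about 72% across all 12 categories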

  3. Efficient hybrid non-equilibrium molecular dynamics--Monte Carlo simulations with symmetric momentum reversal.

    PubMed

    Chen, Yunjie; Roux, Benoît

    2014-09-21

    Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
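
    For orientation, the sketch below shows the equilibrium limit of such hybrid schemes: standard hybrid (Hamiltonian) Monte Carlo on a one-dimensional double-well potential, with momenta freshly drawn each cycle and a Metropolis test on the total-energy change. The non-equilibrium driven trajectories and the symmetric two-ends momentum reversal prescription studied in the paper are not reproduced; the potential and parameters are arbitrary.

      # Minimal sketch of the equilibrium limit of hybrid MD-MC: standard
      # hybrid (Hamiltonian) Monte Carlo on a 1-D double-well potential, with
      # fresh momenta each cycle and a Metropolis test on the energy change.
      # The driven neMD trajectories and the symmetric two-ends momentum
      # reversal of the paper are not reproduced here.
      import numpy as np

      rng = np.random.default_rng(1)
      beta = 1.0
      U  = lambda x: (x**2 - 1.0) ** 2            # double-well potential
      dU = lambda x: 4.0 * x * (x**2 - 1.0)

      def hmc_step(x, n_steps=20, dt=0.1):
          p = rng.normal()                        # resample momentum (unit mass)
          x_new, p_new = x, p
          for _ in range(n_steps):                # velocity-Verlet (leapfrog) integration
              p_new -= 0.5 * dt * dU(x_new)
              x_new += dt * p_new
              p_new -= 0.5 * dt * dU(x_new)
          dH = (U(x_new) + 0.5 * p_new**2) - (U(x) + 0.5 * p**2)
          if np.log(rng.uniform()) < -beta * dH:  # Metropolis test on total energy change
              return x_new
          return x                                # reject: keep old configuration

      x, samples = 1.0, []
      for _ in range(5000):
          x = hmc_step(x)
          samples.append(x)
      # Roughly 0.5 by symmetry if the chain mixes between the two wells.
      print("fraction of samples in the left well:", np.mean(np.array(samples) < 0))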

  4. Efficient hybrid non-equilibrium molecular dynamics - Monte Carlo simulations with symmetric momentum reversal

    NASA Astrophysics Data System (ADS)

    Chen, Yunjie; Roux, Benoît

    2014-09-01

    Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.

  5. Analysis and documentation of QCSEE (Quiet Clean Short-haul Experimental Engine) over-the-wing exhaust system development

    NASA Technical Reports Server (NTRS)

    Ammer, R. C.; Kutney, J. T.

    1977-01-01

    A static scale model test program was conducted in the static test area of the NASA-Langley 9.14- by 18.29 m(30- by 60-ft) Full-Scale Wind Tunnel Facility to develop an over-the-wing (OTW) nozzle and reverser configuration for the Quiet Clean Short-Haul Experimental Engine (QCSEE). Three nozzles and one basic reverser configuration were tested over the QCSEE takeoff and approach power nozzle pressure ratio range between 1.1 and 1.3. The models were scaled to 8.53% of QCSEE engine size and tested behind two 13.97-cm (5.5-in.) diameter tip-turbine-driven fan simulators coupled in tandem. An OTW nozzle and reverser configuration was identified which satisfies the QCSEE experimental engine requirements in terms of nozzle cycle area variation capability and reverse thrust level, and provides good jet flow spreading over a wing upper surface for achievement of high propulsive lift performance.

  6. Quantum engine efficiency bound beyond the second law of thermodynamics.

    PubMed

    Niedenzu, Wolfgang; Mukherjee, Victor; Ghosh, Arnab; Kofman, Abraham G; Kurizki, Gershon

    2018-01-11

    According to the second law, the efficiency of cyclic heat engines is limited by the Carnot bound that is attained by engines that operate between two thermal baths under the reversibility condition whereby the total entropy does not increase. Quantum engines operating between a thermal and a squeezed-thermal bath have been shown to surpass this bound. Yet, their maximum efficiency cannot be determined by the reversibility condition, which may yield an unachievable efficiency bound above unity. Here we identify the fraction of the exchanged energy between a quantum system and a bath that necessarily causes an entropy change and derive an inequality for this change. This inequality reveals an efficiency bound for quantum engines energised by a non-thermal bath. This bound does not imply reversibility, unless the two baths are thermal. It cannot be solely deduced from the laws of thermodynamics.

  7. Numerical Prediction of the Influence of Thrust Reverser on Aeroengine's Aerodynamic Stability

    NASA Astrophysics Data System (ADS)

    Zhiqiang, Wang; Xigang, Shen; Jun, Hu; Xiang, Gao; Liping, Liu

    2017-11-01

    A numerical method was developed to predict the aerodynamic stability of a high-bypass-ratio turbofan engine with the thrust reverser deployed, at the landing stage of a large transport aircraft. A 3D CFD simulation and a 2D aeroengine aerodynamic stability analysis code were used in this work; the former provides the distortion coefficients required for the engine stability analysis. The 3D CFD simulation was divided into two steps: the single-engine calculation and the integrated aircraft-and-engine calculation. Results of the CFD simulation show that as the relative wind Mach number decreases, the engine inlet suffers more severe flow distortion. The total pressure and total temperature distortion coefficients at the inlet of the engines were obtained from the results of the numerical simulation. An aeroengine aerodynamic stability analysis program was then used to quantitatively analyze the aerodynamic stability of the high-bypass-ratio turbofan engine. The results of the stability analysis show that the engine can work stably when the reverser flow is re-ingested, but the distortion tolerance of the booster is weaker than that of the fan and the high-pressure compressor, making the booster a weak link in engine stability.

  8. An efficient reversible privacy-preserving data mining technology over data streams.

    PubMed

    Lin, Chen-Yi; Kao, Yuan-Hung; Lee, Wei-Bin; Chen, Rong-Chang

    2016-01-01

    With the popularity of smart handheld devices and the emergence of cloud computing, users and companies can save various data, which may contain private data, to the cloud. Topics relating to data security have therefore received much attention. This study focuses on data stream environments and uses the concept of a sliding window to design a reversible privacy-preserving technology to process continuous data in real time, known as a continuous reversible privacy-preserving (CRP) algorithm. Data with CRP algorithm protection can be accurately recovered through a data recovery process. In addition, by using an embedded watermark, the integrity of the data can be verified. The results from the experiments show that, compared to existing algorithms, CRP is better at preserving knowledge and is more effective in terms of reducing information loss and privacy disclosure risk. In addition, it takes far less time for CRP to process continuous data than existing algorithms. As a result, CRP is confirmed as suitable for data stream environments and fulfills the requirements of being lightweight and energy-efficient for smart handheld devices.

  9. Wind tunnel test of model target thrust reversers for the Pratt and Whitney aircraft JT8D-100 series engines installed on a 727-200 airplane

    NASA Technical Reports Server (NTRS)

    Hambly, D.

    1974-01-01

    The results of a low speed wind tunnel test of 0.046 scale model target thrust reversers installed on a 727-200 model airplane are presented. The full airplane model was mounted on a force balance, except for the nacelles and thrust reversers, which were independently mounted and isolated from it. The installation had the capability of simulating the inlet airflows and of supplying the correct proportions of primary and secondary air to the nozzles. The objectives of the test were to assess the compatibility of the thrust reversers target door design with the engine and airplane. The following measurements were made: hot gas ingestion at the nacelle inlets; model lift, drag, and pitching moment; hot gas impingement on the airplane structure; and qualitative assessment of the rudder effectiveness. The major parameters controlling hot gas ingestion were found to be thrust reverser orientation, engine power setting, and the lip height of the bottom thrust reverser doors on the side nacelles. The thrust reversers tended to increase the model lift, decrease the drag, and decrease the pitching moment.

  10. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high quality solutions can be obtained within relatively short time. This integrated approach is a promising way for inferring large networks.
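
    The parallel ingredient can be sketched without the Hadoop stack: below, a toy evolutionary loop evaluates the population's fitness in parallel with Python's multiprocessing pool, a stand-in for the MapReduce map step. The hybrid GA-PSO operators and the gene-network fitness model of the paper are not reproduced; the quadratic objective is a placeholder.

      # Sketch of parallel population fitness evaluation inside a toy
      # evolutionary loop. multiprocessing.Pool stands in for the Hadoop
      # MapReduce layer; the hybrid GA-PSO operators and the gene-network
      # fitness model of the paper are not reproduced.
      import numpy as np
      from multiprocessing import Pool

      def fitness(params):
          """Placeholder objective: distance of candidate parameters from a target."""
          target = np.linspace(-1.0, 1.0, params.size)
          return float(np.sum((params - target) ** 2))

      def evolve(pop_size=40, dim=8, generations=50, seed=0):
          rng = np.random.default_rng(seed)
          pop = rng.normal(size=(pop_size, dim))
          with Pool() as pool:                       # "map" step: parallel evaluation
              for _ in range(generations):
                  scores = pool.map(fitness, list(pop))
                  order = np.argsort(scores)         # "reduce" step: rank and select
                  parents = pop[order[: pop_size // 2]]
                  children = parents + 0.1 * rng.normal(size=parents.shape)
                  pop = np.vstack([parents, children])
          return pop[np.argmin([fitness(p) for p in pop])]

      if __name__ == "__main__":
          best = evolve()
          print("best candidate:", np.round(best, 2))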

  11. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high quality solutions can be obtained within relatively short time. This integrated approach is a promising way for inferring large networks. PMID:24428926

  12. Reverse engineering systems models of regulation: discovery, prediction and mechanisms.

    PubMed

    Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S

    2012-08-01

    Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Development of a simple algorithm to guide the effective management of traumatic cardiac arrest.

    PubMed

    Lockey, David J; Lyon, Richard M; Davies, Gareth E

    2013-06-01

    Major trauma is the leading worldwide cause of death in young adults. Mortality from traumatic cardiac arrest remains high, but survival with good neurological outcome after cardiopulmonary arrest following major trauma has been regularly reported. Rapid, effective intervention is required to address the potentially reversible causes of traumatic cardiac arrest if the victim is to survive. Current ILCOR guidelines do not contain a standard algorithm for the management of traumatic cardiac arrest. We present a simple algorithm to manage the major trauma patient in actual or imminent cardiac arrest. We reviewed the published English-language literature on traumatic cardiac arrest and major trauma management. A treatment algorithm was developed based on this review and on the experience of treating more than a thousand traumatic cardiac arrests by a physician-paramedic pre-hospital trauma service. The algorithm addresses the need to treat the potentially reversible causes of traumatic cardiac arrest, including immediate resuscitative thoracotomy in cases of penetrating chest trauma, airway management, optimising oxygenation, correction of hypovolaemia, and chest decompression to exclude tension pneumothorax. The requirement to rapidly address a number of potentially reversible pathologies in a short time period makes the management of traumatic cardiac arrest well suited to a simple treatment algorithm. A standardised approach may prevent delays in diagnosis and treatment and improve current poor survival rates. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. An algorithm to compute the sequency ordered Walsh transform

    NASA Technical Reports Server (NTRS)

    Larsen, H.

    1976-01-01

    A fast algorithm for computing the sequency-ordered Walsh transform is presented. It is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminates Gray code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it operates in place and is its own inverse), while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering and computing logical autocorrelations, and selective bit reversing.
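
    For reference, the sketch below implements the basic in-place fast Walsh-Hadamard butterfly in natural (Hadamard) order, the structure that sequency-ordered variants such as the one described here modify. The decimation-in-time, bit-reversed sequency ordering of the paper's algorithm is not reproduced.

      # In-place fast Walsh-Hadamard transform in natural (Hadamard) order.
      # This is the basic butterfly structure that sequency-ordered variants
      # modify; the bit-reversed sequency ordering itself is not shown.
      def fwht(a):
          """Transform a list whose length is a power of two, in place."""
          n = len(a)
          h = 1
          while h < n:
              for i in range(0, n, h * 2):
                  for j in range(i, i + h):
                      x, y = a[j], a[j + h]
                      a[j], a[j + h] = x + y, x - y   # butterfly
              h *= 2
          return a

      spectrum = fwht([1, 0, 1, 0, 0, 1, 1, 0])
      print(spectrum)                          # unnormalized Walsh-Hadamard coefficients
      print([c // 8 for c in fwht(spectrum)])  # applying it twice recovers n * input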

  15. Laser vibrometry exploitation for vehicle identification

    NASA Astrophysics Data System (ADS)

    Nolan, Adam; Lingg, Andrew; Goley, Steve; Sigmund, Kevin; Kangas, Scott

    2014-06-01

    Vibration signatures sensed from distant vehicles using laser vibrometry systems provide valuable information that may be used to help identify key vehicle features such as engine type, engine speed, and number of cylinders. Through the use of physics models of the vibration phenomenology, features are chosen to support classification algorithms. Various individual exploitation algorithms were developed using these models to classify vibration signatures into engine type (piston vs. turbine), engine configuration (Inline 4 vs. Inline 6 vs. V6 vs. V8 vs. V12) and vehicle type. The results of these algorithms will be presented for an 8 class problem. Finally, the benefits of using a factor graph representation to link these independent algorithms together will be presented which constructs a classification hierarchy for the vibration exploitation problem.

  16. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm.

    PubMed

    Ulusoy, Melda; Sipahi, Rifat

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left to right, research shows that even fluent reading requires right-to-left movements of the finger, known as "reversal". Reversals are crucial as they not only enhance stimulation of nerves for correctly reading the letters, but also allow one to re-read letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to automatically adapt to the reversal gestures of one's finger. Through Braille-reading-analogous tasks, the algorithm was tested with thirty sighted subjects who volunteered for the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of the FMAA, subjects' performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects' Braille reading accuracy with the FMAA was more favorable than when the FMAA was turned off. Utilization of the FMAA in future-generation Braille reading devices thus holds strong promise.

  17. Algorithmic Complexity. Volume II.

    DTIC Science & Technology

    1982-06-01

    Report excerpt (extraction fragments): ... on digital computers, this improvement will go unnoticed if only a few complex products are to be taken; however, it can become increasingly important as ... computed in the reverse order. If the products are formed moving from the top of the tree downward, then the divisions are performed going from the ... in the reverse order, going up the tree. (r = a mod m means that r is the remainder when a is divided by m.) The overall running time of the algorithm is ...

  18. Inventing the Invented for STEM Understanding

    ERIC Educational Resources Information Center

    Stansell, Alicia; Tyler-Wood, Tandra; Stansell, Christina

    2016-01-01

    The reverse engineering of simple inventions that were of historic significance is now possible in a classroom by using digital models provided by places like the Smithsonian. The digital models can facilitate the mastery of students' STEM learning by utilizing digital fabrication in maker spaces to provide an opportunity for reverse engineer and…

  19. Blade Sections in Streamwise Oscillations into Reverse Flow

    DTIC Science & Technology

    2015-05-07

    Subject terms: Reverse Flow, Oscillating Airfoils, Oscillating Freestream. Report excerpt (extraction fragments): ... plate or bluff body rather than an airfoil. Reverse flow operation requires investigation and quantification to accurately capture these ... airfoil integrated quantities (lift, drag, moment) in reverse flow, and developed new algorithms for comprehensive codes, reducing errors from 30%-50 ...

  20. Advanced detection, isolation, and accommodation of sensor failures in turbofan engines: Real-time microcomputer implementation

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1990-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.

  1. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  2. Computer-aided dental prostheses construction using reverse engineering.

    PubMed

    Solaberrieta, E; Minguez, R; Barrenetxea, L; Sierra, E; Etxaniz, O

    2014-01-01

    The implementation of computer-aided design/computer-aided manufacturing (CAD/CAM) systems with virtual articulators, which take into account the kinematics, constitutes a breakthrough in the construction of customised dental prostheses. This paper presents a multidisciplinary protocol involving CAM techniques to produce dental prostheses. This protocol includes a step-by-step procedure using innovative reverse engineering technologies to transform completely virtual design processes into customised prostheses. A special emphasis is placed on a novel method that permits a virtual location of the models. The complete workflow includes the optical scanning of the patient, the use of reverse engineering software and, if necessary, the use of rapid prototyping to produce CAD temporary prostheses.

  3. NASA Tech Briefs, June 2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Topics covered include: COTS MEMS Flow-Measurement Probes; Measurement of an Evaporating Drop on a Reflective Substrate; Airplane Ice Detector Based on a Microwave Transmission Line; Microwave/Sonic Apparatus Measures Flow and Density in Pipe; Reducing Errors by Use of Redundancy in Gravity Measurements; Membrane-Based Water Evaporator for a Space Suit; Compact Microscope Imaging System with Intelligent Controls; Chirped-Superlattice, Blocked-Intersubband QWIP; Charge-Dissipative Electrical Cables; Deep-Sea Video Cameras Without Pressure Housings; RFID and Memory Devices Fabricated Integrally on Substrates; Analyzing Dynamics of Cooperating Spacecraft; Spacecraft Attitude Maneuver Planning Using Genetic Algorithms; Forensic Analysis of Compromised Computers; Document Concurrence System; Managing an Archive of Images; MPT Prediction of Aircraft-Engine Fan Noise; Improving Control of Two Motor Controllers; Electro-deionization Using Micro-separated Bipolar Membranes; Safer Electrolytes for Lithium-Ion Cells; Rotating Reverse-Osmosis for Water Purification; Making Precise Resonators for Mesoscale Vibratory Gyroscopes; Robotic End Effectors for Hard-Rock Climbing; Improved Nutation Damper for a Spin-Stabilized Spacecraft; Exhaust Nozzle for a Multitube Detonative Combustion Engine; Arc-Second Pointer for Balloon-Borne Astronomical Instrument; Compact, Automated Centrifugal Slide-Staining System; Two-Armed, Mobile, Sensate Research Robot; Compensating for Effects of Humidity on Electronic Noses; Brush/Fin Thermal Interfaces; Multispectral Scanner for Monitoring Plants; Coding for Communication Channels with Dead-Time Constraints; System for Better Spacing of Airplanes En Route; Algorithm for Training a Recurrent Multilayer Perceptron; Orbiter Interface Unit and Early Communication System; White-Light Nulling Interferometers for Detecting Planets; and Development of Methodology for Programming Autonomous Agents.

  4. Quiet Clean Short-Haul Experimental Engine (QCSEE) acoustic and aerodynamic tests on a scale model over-the-wing thrust reverser and forward thrust nozzle

    NASA Technical Reports Server (NTRS)

    Stimpert, D. L.

    1978-01-01

    An acoustic and aerodynamic test program was conducted on a 1/6.25 scale model of the Quiet, Clean, Short-Haul Experimental Engine (QCSEE) forward thrust over-the-wing (OTW) nozzle and OTW thrust reverser. In reverse thrust, the effect of reverser geometry was studied by parametric variations in blocker spacing, blocker height, lip angle, and lip length. Forward thrust nozzle tests determined the jet noise levels of the cruise and takeoff nozzles, the effect of opening side doors to achieve takeoff thrust, and scrubbing noise of the cruise and takeoff jet on a simulated wing surface. Velocity profiles are presented for both forward and reverse thrust nozzles. An estimate of the reverse thrust was made utilizing the measured centerline turning angle.

  5. 26 CFR 1.861-18 - Classification of transactions involving computer programs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... on a single disk for a one-time payment with restrictions on transfer and reverse engineering, which... license. The license is stated to be perpetual. Under the license no reverse engineering, decompilation... fee, on a World Wide Web home page on the Internet. P, the Country Z resident, in return for payment...

  6. 26 CFR 1.861-18 - Classification of transactions involving computer programs.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... on a single disk for a one-time payment with restrictions on transfer and reverse engineering, which... license. The license is stated to be perpetual. Under the license no reverse engineering, decompilation... fee, on a World Wide Web home page on the Internet. P, the Country Z resident, in return for payment...

  7. Recognition vs Reverse Engineering in Boolean Concepts Learning

    ERIC Educational Resources Information Center

    Shafat, Gabriel; Levin, Ilya

    2012-01-01

    This paper deals with two types of logical problems--recognition problems and reverse engineering problems, and with the interrelations between these types of problems. The recognition problems are modeled in the form of a visual representation of various objects in a common pattern, with a composition of represented objects in the pattern.…

  8. Reverse Engineering Course at Philadelphia University in Jordan

    ERIC Educational Resources Information Center

    Younis, M. Bani; Tutunji, T.

    2012-01-01

    Reverse engineering (RE) is the process of testing and analysing a system or a device in order to identify, understand and document its functionality. RE is an efficient tool in industrial benchmarking where competitors' products are dissected and evaluated for performance and costs. RE can play an important role in the re-configuration and…

  9. Teach CAD and Measuring Skills through Reverse Engineering

    ERIC Educational Resources Information Center

    Board, Keith

    2012-01-01

    This article describes a reverse engineering activity that gives students hands-on, minds-on experience with measuring tools, machine parts, and CAD. The author developed this activity to give students an abundance of practical experience with measuring tools. Equally important, it provides a good interface between the virtual world of CAD 3D…

  10. A Predictive Approach to Network Reverse-Engineering

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2005-03-01

    A central challenge of systems biology is the "reverse engineering" of transcriptional networks: inferring which genes exert regulatory control over which other genes. Attempting such inference at the genomic scale has only recently become feasible, via data-intensive biological innovations such as DNA microarrays ("DNA chips") and the sequencing of whole genomes. In this talk we present a predictive approach to network reverse-engineering, in which we integrate DNA chip data and sequence data to build a model of the transcriptional network of the yeast S. cerevisiae capable of predicting the response of genes in unseen experiments. The technique can also be used to extract "motifs," sequence elements which act as binding sites for regulatory proteins. We validate with a number of approaches and present a comparison of theoretical predictions vs. experimental data, along with biological interpretations of the resulting model. En route, we will illustrate some basic notions in statistical learning theory (fitting vs. over-fitting; cross-validation; assessing statistical significance), highlighting ways in which physicists can make a unique contribution in data-driven approaches to reverse engineering.

  11. Online Normalization Algorithm for Engine Turbofan Monitoring

    DTIC Science & Technology

    2014-10-02

    Online Normalization Algorithm for Engine Turbofan Monitoring. Jérôme Lacaille (Snecma, 77550 Moissy-Cramayel, France), Anastasios Bellas. ...understand the behavior of a turbofan engine, one first needs to deal with the variety of data acquisition contexts. Each time a set of measurements is...it auto-adapts itself with piecewise linear models. ...Turbofan engine abnormality diagnosis uses three steps: reduction of...

  12. Numerical Roll Reversal Predictor Corrector Aerocapture and Precision Landing Guidance Algorithms for the Mars Surveyor Program 2001 Missions

    NASA Technical Reports Server (NTRS)

    Powell, Richard W.

    1998-01-01

    This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.

  13. A preliminary evaluation of an F100 engine parameter estimation process using flight data

    NASA Technical Reports Server (NTRS)

    Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.

    1990-01-01

    The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.
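
    The two-step structure described in this abstract lends itself to a minimal sketch: a linear Kalman filter tracks a small vector of deterioration parameters from noisy measurements, and the estimates are then appended to the control vector of a simplified steady-state model. All matrices, dimensions, and numbers below are illustrative assumptions, not the actual F100 or CEM formulation.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle of a linear Kalman filter."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    rng = np.random.default_rng(0)
    n_params, n_meas = 5, 6                      # five deterioration parameters, six sensors (assumed)
    F = np.eye(n_params)                         # parameters assumed to vary slowly
    H = rng.normal(size=(n_meas, n_params))      # assumed sensitivity of sensors to the parameters
    Q, R = 1e-4 * np.eye(n_params), 1e-2 * np.eye(n_meas)

    true_theta = np.array([0.02, -0.01, 0.03, 0.0, -0.02])
    x, P = np.zeros(n_params), np.eye(n_params)
    for _ in range(200):
        z = H @ true_theta + 0.1 * rng.normal(size=n_meas)   # simulated sensor residuals
        x, P = kalman_step(x, P, z, F, H, Q, R)

    # Step 2 (conceptually): augment the steady-state model's control vector with the
    # estimated parameters, e.g. u_aug = np.concatenate([u, x]).
    print("estimated deterioration parameters:", np.round(x, 3))
    ```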

  14. A preliminary evaluation of an F100 engine parameter estimation process using flight data

    NASA Technical Reports Server (NTRS)

    Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.

    1990-01-01

    The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the 'compact engine model' (CEM). In this step the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion-control-law development.

  15. UAV Mission Optimization through Hybrid-Electric Propulsion

    NASA Astrophysics Data System (ADS)

    Blackwelder, Philip Scott

    A hybrid-electric powertrain combines the superior range of petrol-based systems with the quiet, emission-free benefits of electric propulsion. The major caveat to a hybrid-electric powertrain in an airplane is that it is inherently heavier than a conventional petroleum powertrain, due mostly to the low energy density of battery technology. The first goal of this research is to develop mission planning code to match powertrain components for a small-scale unmanned aerial vehicle (UAV) to complete a standard surveillance mission within a set of user input parameters. The second goal is to promote low-acoustic-profile loitering through mid-flight engine starting. Mid-mission engine starting is addressed by two means: reverse thrust from the propeller, and a servo-actuated gear to couple and decouple the engine and motor. The mission planning code calculates the power required to complete a mission and assists the user in sourcing powertrain components, including the propeller, motor, battery, motor controller, engine and fuel. Reverse thrust engine starting involves characterizing an off-the-shelf variable-pitch propeller and using its torque coefficient to calculate the advance ratio required to provide sufficient torque and speed to start an engine. Geared engine starting works like the starter in a conventional automobile: a servo-actuated gear couples the motor to the engine to start it and decouples once the engine has started. Reverse thrust engine starting was unsuccessful due to limitations of available off-the-shelf variable-pitch propellers; however, it could be realized with a custom, larger-diameter propeller. Geared engine starting was a success, though the system was unable to run fully as intended. Due to the counter-clockwise crank rotation of the engine and the right-hand threads on the crankshaft, cranking the engine caused the nut securing the starter gear to back off. A second nut was added to secure the starter gear, but at the expense of removing the engine drive pulley. Removing the engine pulley meant that the starter gear had to remain engaged to transmit torque to the propeller shaft in place of the engine pulley. This issue could be resolved with different hardware; however, changing the mounting hardware would have required additional modifications to the associated components, which time did not permit. Though battery technology still proves to be the main constraint of electrified powertrains, careful design and mission planning can help minimize the weight penalties incurred. The mission planning code complements previous research by comparing the weight penalties of a blended climb versus an engine-only climb and selecting the lightest option. Though reverse thrust engine starting proved unsuccessful, the success of geared engine starting now allows the engine to be shut off during loiter, reducing both the acoustic profile and fuel consumption.

  16. Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM

    NASA Astrophysics Data System (ADS)

    Sheng, Hanlin; Zhang, Tianhong

    2017-08-01

    Direct thrust control of an aircraft engine requires a highly precise and reliable thrust estimator. Based on support vector regression (SVR), the least squares support vector machine (LSSVM), and a recent optimization algorithm, the gravitational search algorithm (GSA), a GSA-LSSVM-based thrust estimator design is proposed through integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters more effectively and yields a model with better prediction and generalization ability. The model predicts aircraft engine thrust more accurately and thus meets the needs of direct thrust control of aircraft engines.
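
    As a rough illustration of how a gravitational search algorithm explores a parameter space, the sketch below minimizes a generic objective with GSA; the objective stands in for an LSSVM validation loss, and the population size, decay schedule, and bounds are assumptions rather than the settings used in the paper.

    ```python
    import numpy as np

    def gsa_minimize(obj, bounds, n_agents=30, n_iter=100, G0=100.0, alpha=20.0, seed=0):
        """Minimal gravitational search algorithm (GSA) sketch for minimization."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        dim = lo.size
        X = rng.uniform(lo, hi, size=(n_agents, dim))
        V = np.zeros_like(X)
        best_x, best_f = None, np.inf
        for t in range(n_iter):
            fit = np.array([obj(x) for x in X])
            if fit.min() < best_f:
                best_f, best_x = fit.min(), X[fit.argmin()].copy()
            # Masses: better (lower) fitness -> larger mass
            worst, best = fit.max(), fit.min()
            m = (worst - fit) / (worst - best + 1e-12)
            M = m / (m.sum() + 1e-12)
            G = G0 * np.exp(-alpha * t / n_iter)          # decaying gravitational constant
            acc = np.zeros_like(X)
            for i in range(n_agents):                     # gravitational pull of every other agent
                for j in range(n_agents):
                    if i == j:
                        continue
                    diff = X[j] - X[i]
                    dist = np.linalg.norm(diff) + 1e-12
                    acc[i] += rng.random() * G * M[j] * diff / dist
            V = rng.random(size=X.shape) * V + acc
            X = np.clip(X + V, lo, hi)
        return best_x, best_f

    # Example: tune two assumed LSSVM hyper-parameters (regularization, kernel width) against a
    # stand-in validation loss; the quadratic bowl is a placeholder for the real model error.
    loss = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.5) ** 2
    x_best, f_best = gsa_minimize(loss, (np.array([0.0, 0.01]), np.array([10.0, 5.0])))
    print("best hyper-parameters:", np.round(x_best, 3), "loss:", round(f_best, 4))
    ```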

  17. Microseismic reverse time migration with a multi-cross-correlation staining algorithm for fracture imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Congcong; Jia, Xiaofeng; Liu, Shishuo; Zhang, Jie

    2018-02-01

    Accurate characterization of hydraulic fracturing zones is currently becoming increasingly important in production optimization, since hydraulic fracturing may increase the porosity and permeability of the reservoir significantly. Recently, the feasibility of the reverse time migration (RTM) method has been studied for the application in imaging fractures during borehole microseismic monitoring. However, strong low-frequency migration noise, poorly illuminated areas, and low signal-to-noise ratio (SNR) data can degrade the imaging results. To improve the quality of the images, we propose a multi-cross-correlation staining algorithm to incorporate into microseismic reverse time migration for imaging fractures using scattered data. The modified RTM method yields two images: one is the improved RTM image using the multi-cross-correlation condition, and the other is an image of the target region using the generalized staining algorithm. The numerical examples show that, compared with conventional RTM, our method can significantly improve the spatial resolution of the images, especially for the image of the target region.

  18. Towards the Engineering of Dependable P2P-Based Network Control — The Case of Timely Routing Control Messages

    NASA Astrophysics Data System (ADS)

    Tutschku, Kurt; Nakao, Akihiro

    This paper introduces a methodology for engineering best-effort P2P algorithms into dependable P2P-based network control mechanisms. The proposed method is built upon an iterative approach consisting of improving the original P2P algorithm by appropriate mechanisms and of thorough performance assessment with respect to dependability measures. The potential of the methodology is outlined by the example of timely routing control for vertical handover in B3G wireless networks. In detail, the well-known Pastry and CAN algorithms are enhanced to include locality. By showing how to combine algorithmic enhancements with performance indicators, this case study paves the way for future engineering of dependable network control mechanisms through P2P algorithms.

  19. Reversible Quantum Brownian Heat Engines for Electrons

    NASA Astrophysics Data System (ADS)

    Humphrey, T. E.; Newbury, R.; Taylor, R. P.; Linke, H.

    2002-08-01

    Brownian heat engines use local temperature gradients in asymmetric potentials to move particles against an external force. The energy efficiency of such machines is generally limited by irreversible heat flow carried by particles that make contact with different heat baths. Here we show that, by using a suitably chosen energy filter, electrons can be transferred reversibly between reservoirs that have different temperatures and electrochemical potentials. We apply this result to propose heat engines based on mesoscopic semiconductor ratchets, which can quasistatically operate arbitrarily close to Carnot efficiency.

  20. Reversible quantum heat engines for electrons

    NASA Astrophysics Data System (ADS)

    Linke, Heiner; Humphrey, Tammy E.; Newbury, Richard; Taylor, Richard P.

    2002-03-01

    Brownian heat engines use local temperature gradients in asymmetric potentials to move particles against an external force. The energy efficiency of such machines is generally limited by irreversible heat flow carried by particles that make contact with different heat baths. Here we show that, by using a suitably chosen energy filter, electrons can be transferred reversibly between reservoirs that have different temperatures and electrochemical potentials. We apply this result to propose heat engines based on quantum ratchets, which can quasistatically operate at Carnot efficiency.

  1. Tissue-Engineered Skeletal Muscle Organoids for Reversible Gene Therapy

    NASA Technical Reports Server (NTRS)

    Vandenburgh, Herman; DelTatto, Michael; Shansky, Janet; Lemaire, Julie; Chang, Albert; Payumo, Francis; Lee, Peter; Goodyear, Amy; Raven, Latasha

    1996-01-01

    Genetically modified murine skeletal myoblasts were tissue engineered in vitro into organ-like structures (organoids) containing only postmitotic myofibers secreting pharmacological levels of recombinant human growth hormone (rhGH). Subcutaneous organoid implantation under tension led to the rapid and stable appearance of physiological sera levels of rhGH for up to 12 weeks, whereas surgical removal led to its rapid disappearance. Reversible delivery of bioactive compounds from postmitotic cells in tissue engineered organs has several advantages over other forms of muscle gene therapy.

  2. Tissue-Engineered Skeletal Muscle Organoids for Reversible Gene Therapy

    NASA Technical Reports Server (NTRS)

    Vandenburgh, Herman; DelTatto, Michael; Shansky, Janet; Lemaire, Julie; Chang, Albert; Payumo, Francis; Lee, Peter; Goodyear, Amy; Raven, Latasha

    1996-01-01

    Genetically modified murine skeletal myoblasts were tissue engineered in vitro into organ-like structures (organoids) containing only postmitotic myofibers secreting pharmacological levels of recombinant human growth hormone (rhGH). Subcutaneous organoid implantation under tension led to the rapid and stable appearance of physiological sera levels of rhGH for up to 12 weeks, whereas surgical removal led to its rapid disappearance. Reversible delivery of bioactive compounds from postmitotic cells in tissue engineered organs has several advantages over other forms of muscle gene therapy.

  3. REAR DETAIL OF RIGHT ENGINE AND WING. THRUST REVERSER REMAINS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    REAR DETAIL OF RIGHT ENGINE AND WING. THRUST REVERSER REMAINS OPEN. MECHANICS JONI BAINE (R) AND BILL THEODORE (L) OPEN FLAP CARRIAGE ACCESS WITH AN IMPACT GUN. THEY WILL CHECK TRANSMISSION FLUID AND OIL THE JACK SCREW. AT FAR LEFT UTILITY MECHANICS BEGIN BODY POLISHING. - Greater Buffalo International Airport, Maintenance Hangar, Buffalo, Erie County, NY

  4. The Use of Reverse Engineering to Analyse Student Computer Programs.

    ERIC Educational Resources Information Center

    Vanneste, Philip; And Others

    1996-01-01

    Discusses how the reverse engineering approach can generate feedback on computer programs without the user having any prior knowledge of what the program was designed to do. This approach uses the cognitive model of programming knowledge to interpret both context independent and dependent errors in the same words and concepts as human programmers.…

  5. An Application of Reverse Engineering to Automatic Item Generation: A Proof of Concept Using Automatically Generated Figures

    ERIC Educational Resources Information Center

    Lorié, William A.

    2013-01-01

    A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…

  6. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing the fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method where projection data were generated from reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
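
    The maximum likelihood-expectation maximization update that OS-EM accelerates can be sketched in a few lines. The toy system matrix below stands in for the CT forward projector, and ordered subsets are formed by simply splitting the projection rows; this is an illustration of the update rule, not the authors' reconstruction system.

    ```python
    import numpy as np

    def os_em(A, y, n_subsets=4, n_iter=10):
        """Toy ordered-subset expectation-maximization (OS-EM) reconstruction.

        A : (n_rays, n_pixels) nonnegative system matrix (stand-in for the CT projector)
        y : (n_rays,) measured projection data
        """
        n_rays, n_pix = A.shape
        x = np.ones(n_pix)                                   # uniform initial image
        subsets = np.array_split(np.arange(n_rays), n_subsets)
        for _ in range(n_iter):
            for rows in subsets:                             # one EM update per subset
                As, ys = A[rows], y[rows]
                fwd = As @ x + 1e-12                         # forward projection
                x *= (As.T @ (ys / fwd)) / (As.T @ np.ones(len(rows)) + 1e-12)
        return x

    # Small synthetic example: recover a 1-D "image" from noisy ray sums.
    rng = np.random.default_rng(1)
    A = rng.random((64, 16))
    x_true = rng.random(16)
    y = rng.poisson(A @ x_true * 50) / 50.0                  # Poisson-like measurement noise
    x_rec = os_em(A, y)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```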

  7. Over-the-wing model thrust reverser noise tests

    NASA Technical Reports Server (NTRS)

    Goodykoontz, J.; Gutierrez, O.

    1977-01-01

    Static acoustic tests were conducted on a 1/12 scale model over-the-wing target type thrust reverser. The model configuration simulates a design that is applicable to the over-the-wing short-haul advanced technology engine. Aerodynamic screening tests of a variety of reverser designs identified configurations that satisfied a reverse thrust requirement of 35 percent of forward thrust at a nozzle pressure ratio of 1.29. The variations in the reverser configuration included blocker door angle, blocker door lip angle and shape, and side skirt shape. Acoustic data are presented and compared for the various configurations. The model data scaled to a single full size engine show that peak free field perceived noise (PN) levels at a 152.4 meter sideline distance range from 98 to 104 PNdB.

  8. A novel gene network inference algorithm using predictive minimum description length approach.

    PubMed

    Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang

    2010-05-28

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold which defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we proposed a new inference algorithm which incorporated mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision, as compared to the existing algorithm. For further analysis, the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and that eliminates the need for a fine-tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the PMDL principle is effective in determining the MI threshold and the developed algorithm improves the precision of gene regulatory network inference. Based on the sensitivity analysis of all tested cases, an optimal CMI threshold value has been identified. Finally, it was observed that the performance of the algorithms saturates at a certain threshold of data size.
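
    A bare-bones version of the information-theoretic step (without the PMDL-based threshold selection) can be sketched as follows: expression profiles are binned, pairwise mutual information is estimated from 2-D histograms, and edges above a threshold are kept. The binning, threshold, and data below are illustrative assumptions; in the paper the threshold is chosen by the PMDL principle and refined with CMI.

    ```python
    import numpy as np

    def mutual_information(x, y, bins=4):
        """Histogram estimate of mutual information (in bits) between two expression profiles."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    def infer_network(expr, threshold=0.3):
        """Return gene pairs whose pairwise MI exceeds a fixed threshold.

        expr : (n_genes, n_timepoints) expression matrix.
        The fixed threshold here replaces the PMDL-based selection for illustration only.
        """
        n_genes = expr.shape[0]
        edges = []
        for i in range(n_genes):
            for j in range(i + 1, n_genes):
                mi = mutual_information(expr[i], expr[j])
                if mi > threshold:
                    edges.append((i, j, round(mi, 3)))
        return edges

    rng = np.random.default_rng(2)
    expr = rng.normal(size=(6, 200))
    expr[1] = expr[0] + 0.3 * rng.normal(size=200)           # gene 1 driven by gene 0
    print(infer_network(expr))
    ```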

  9. [Evaluation of three methods for constructing craniofacial mid-sagittal plane based on the cone beam computed tomography].

    PubMed

    Wang, S W; Li, M; Yang, H F; Zhao, Y J; Wang, Y; Liu, Y

    2016-04-18

    To compare the accuracy of the iterative closest point (ICP) algorithm, the Procrustes analysis (PA) algorithm, and a landmark-dependent method for constructing the mid-sagittal plane (MSP) from cone beam computed tomography (CBCT), and to provide a theoretical basis for establishing a coordinate system for CBCT images and for symmetry analysis. Ten patients were selected and scanned by CBCT before orthodontic treatment. The scan data were imported into Mimics 10.0 to reconstruct three-dimensional skulls, and the MSP of each skull was generated by the ICP algorithm, the PA algorithm, and the landmark-dependent method. MSP extraction by the ICP or PA algorithm involved three steps. First, the 3D skull was mirrored using the reverse engineering software Geomagic Studio 2012. Then, the original skull and its mirror were registered, by the ICP algorithm in Geomagic Studio 2012 or by the PA algorithm in NX Imageware 11.0. Finally, the registered data were merged with the original data to calculate the MSP of the original data in Geomagic Studio 2012. For the traditional landmark-dependent method, the mid-sagittal plane was determined by sella (S), nasion (N) and basion (Ba) in the software InVivoDental 5.0. The distances from 9 pairs of symmetric anatomical landmarks to the three sagittal planes were measured, and their absolute values were compared. One-way ANOVA was used to analyze the differences among the 3 MSPs, with pairwise comparisons performed by the LSD method. MSPs calculated by the three methods were all usable for clinical analysis, as judged from the frontal view. However, there were significant differences among the distances from the 9 pairs of symmetric anatomical landmarks to the MSPs (F=10.932, P=0.001). The LSD test showed no significant difference between the ICP algorithm and the landmark-dependent method (P=0.11), while there was a significant difference between the PA algorithm and the landmark-dependent method (P=0.01). Mid-sagittal planes of 3D skulls can be generated based on the ICP or PA algorithm. There was no significant difference between the ICP algorithm and the landmark-dependent method. For subjects with no evident asymmetry, the ICP algorithm is feasible for clinical analysis.
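
    The mirror-and-register idea behind the ICP-based construction can be illustrated with a minimal point-to-point ICP: the point cloud is mirrored across an arbitrary plane, registered back onto the original with rigid transforms estimated by SVD, and a symmetry plane is then fitted to the midpoints of corresponding points. The data, convergence settings, and the plane-fitting step are assumptions for illustration; the clinical pipeline used Geomagic Studio and NX Imageware.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(P, Q):
        """Least-squares rotation R and translation t mapping P onto Q (Kabsch/SVD)."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cQ - R @ cP

    def icp(source, target, n_iter=30):
        """Minimal point-to-point ICP aligning `source` onto `target`."""
        tree = cKDTree(target)
        src = source.copy()
        for _ in range(n_iter):
            _, idx = tree.query(src)              # nearest-neighbor correspondences
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
        return src

    # Illustration with a synthetic, roughly skull-sized symmetric point cloud.
    rng = np.random.default_rng(3)
    half = rng.normal(size=(250, 3)) * [30, 60, 50]
    half[:, 0] = np.abs(half[:, 0]) + 5.0         # keep one side only
    skull = np.vstack([half, half * [-1, 1, 1]])  # symmetric by construction
    skull += [2.0, 0, 0]                          # true symmetry plane sits at x = 2

    mirrored = skull * [-1, 1, 1]                 # mirror across an arbitrary plane (x = 0)
    registered = icp(mirrored, skull)             # register the mirror back onto the original
    midpoints = 0.5 * (skull + registered)        # midpoints lie near the symmetry plane
    centered = midpoints - midpoints.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    normal = Vt[-1]                               # direction of least variance = plane normal
    print("MSP normal ~", np.round(normal, 3), "point on plane ~", np.round(midpoints.mean(axis=0), 2))
    ```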

  10. A general heuristic for genome rearrangement problems.

    PubMed

    Dias, Ulisses; Galvão, Gustavo Rodrigues; Lintzmayer, Carla Négri; Dias, Zanoni

    2014-06-01

    In this paper, we present a general heuristic for several problems in the genome rearrangement field. Our heuristic does not solve any problem directly; rather, it is used to improve the solutions provided by any non-optimal algorithm that solves them. Therefore, we have implemented several algorithms described in the literature and several algorithms developed by ourselves. As a whole, we implemented 23 algorithms for 9 well-known problems in the genome rearrangement field. A total of 13 algorithms were implemented for problems that use the notions of prefix and suffix operations. In addition, we worked on 5 algorithms for the classic problem of sorting by transposition, and we conclude the experiments by presenting results for 3 approximation algorithms for the sorting by reversals and transpositions problem and 2 approximation algorithms for the sorting by reversals problem. Another algorithm with a better approximation ratio exists for the last genome rearrangement problem, but it is purely theoretical with no practical implementation. The algorithms we implemented, combined with our heuristic, lead to the best practical results in each case. In particular, we were able to improve results on the sorting by transpositions problem, which is a very special case because many efforts have been made to generate algorithms with good results in practice, and some of these algorithms provide results that equal the optimum solutions in many cases. Our source codes and benchmarks are freely available upon request from the authors so that it will be easier to compare new approaches against our results.
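
    For readers unfamiliar with this problem family, the sketch below shows a naive solver for unsigned sorting by reversals together with a breakpoint count; it is not one of the 23 implemented algorithms, only an example of the kind of non-optimal solution that a heuristic of this sort could take as input and improve.

    ```python
    def breakpoints(perm):
        """Number of adjacent pairs that are not consecutive values (with 0 and n+1 sentinels)."""
        ext = [0] + list(perm) + [len(perm) + 1]
        return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

    def simple_reversal_sort(perm):
        """Naive solver: one reversal per step places element k at position k (at most n-1 reversals)."""
        perm = list(perm)
        reversals = []
        for i in range(len(perm)):
            j = perm.index(i + 1)                      # current position of the element that belongs at i
            if j != i:
                perm[i:j + 1] = perm[i:j + 1][::-1]    # reverse the segment [i, j]
                reversals.append((i, j))
        return perm, reversals

    perm = [3, 1, 6, 5, 2, 4]
    print("breakpoints before:", breakpoints(perm))
    sorted_perm, ops = simple_reversal_sort(perm)
    print(sorted_perm, "reversals used:", len(ops), ops)
    ```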

  11. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  12. 78 FR 26818 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-08

    ... modifiers available to algorithms used by Floor brokers to route interest to the Exchange's matching engine...-Quotes entered into the matching engine by an algorithm on behalf of a Floor broker. STP modifiers would... algorithms removes impediments to and perfects the mechanism of a free and open market because there is a...

  13. A Comparison of Hybrid Approaches for Turbofan Engine Gas Path Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Lu, Feng; Wang, Yafan; Huang, Jinquan; Wang, Qihang

    2016-09-01

    A hybrid diagnostic method utilizing the Extended Kalman Filter (EKF) and an Adaptive Genetic Algorithm (AGA) is presented for performance degradation estimation and sensor anomaly detection of a turbofan engine. The EKF is used to estimate engine component performance degradation for gas path fault diagnosis. The AGA is introduced in the integrated architecture and applied for sensor bias detection. The contributions of this work are the comparisons of Kalman Filter (KF)-AGA algorithms and Neural Network (NN)-AGA algorithms within a unified framework for gas path fault diagnosis. The NN needs to be trained off-line with a large number of prior fault mode data. When a new fault mode occurs, the estimation accuracy of the NN decreases markedly. However, the application of the Linearized Kalman Filter (LKF) and the EKF is not restricted in such a case. The crossover factor and the mutation factor are adapted to the fitness function at each generation in the AGA, and it consumes less time to search for the optimal sensor bias value compared to the standard Genetic Algorithm (GA). In summary, we conclude that the hybrid EKF-AGA algorithm is the best choice for gas path fault diagnosis of turbofan engines among the algorithms discussed.

  14. ELF: An Extended-Lagrangian Free Energy Calculation Module for Multiple Molecular Dynamics Engines.

    PubMed

    Chen, Haochuan; Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2018-06-18

    Extended adaptive biasing force (eABF), a collective variable (CV)-based importance-sampling algorithm, has proven to be very robust and efficient compared with the original ABF algorithm. Its implementation in Colvars, a software addition to molecular dynamics (MD) engines, is, however, currently limited to NAMD and LAMMPS. To broaden the scope of eABF and its variants, like its generalized form (egABF), and make them available to other MD engines, e.g., GROMACS, AMBER, CP2K, and openMM, we present a PLUMED-based implementation, called extended-Lagrangian free energy calculation (ELF). This implementation can be used as a stand-alone gradient estimator for other CV-based sampling algorithms, such as temperature-accelerated MD (TAMD) and extended-Lagrangian metadynamics (MtD). ELF provides the end user with a convenient framework to help select the best-suited importance-sampling algorithm for a given application without any commitment to a particular MD engine.

  15. Evaluating a common semi-mechanistic mathematical model of gene-regulatory networks

    PubMed Central

    2015-01-01

    Modeling and simulation of gene-regulatory networks (GRNs) has become an important aspect of modern systems biology investigations into mechanisms underlying gene regulation. A key challenge in this area is the automated inference (reverse-engineering) of dynamic, mechanistic GRN models from gene expression time-course data. Common mathematical formalisms for representing such models capture two aspects simultaneously within a single parameter: (1) Whether or not a gene is regulated, and if so, the type of regulator (activator or repressor), and (2) the strength of influence of the regulator (if any) on the target or effector gene. To accommodate both roles, "generous" boundaries or limits for possible values of this parameter are commonly allowed in the reverse-engineering process. This approach has several important drawbacks. First, in the absence of good guidelines, there is no consensus on what limits are reasonable. Second, because the limits may vary greatly among different reverse-engineering experiments, the concrete values obtained for the models may differ considerably, and thus it is difficult to compare models. Third, if high values are chosen as limits, the search space of the model inference process becomes very large, adding unnecessary computational load to the already complex reverse-engineering process. In this study, we demonstrate that restricting the limits to the [−1, +1] interval is sufficient to represent the essential features of GRN systems and offers a reduction of the search space without loss of quality in the resulting models. To show this, we have carried out reverse-engineering studies on data generated from artificial and experimentally determined from real GRN systems. PMID:26356485

  16. Cranioplasty prosthesis manufacturing based on reverse engineering technology

    PubMed Central

    Chrzan, Robert; Urbanik, Andrzej; Karbowski, Krzysztof; Moskała, Marek; Polak, Jarosław; Pyrich, Marek

    2012-01-01

    Summary Background Most patients with large focal skull bone loss after craniectomy are referred for cranioplasty. Reverse engineering is a technology which creates a computer-aided design (CAD) model of a real structure. Rapid prototyping is a technology which produces physical objects from virtual CAD models. The aim of this study was to assess the clinical usefulness of these technologies in cranioplasty prosthesis manufacturing. Material/Methods CT was performed on 19 patients with focal skull bone loss after craniectomy, using a dedicated protocol. A material model of skull deficit was produced using computer numerical control (CNC) milling, and individually pre-operatively adjusted polypropylene-polyester prosthesis was prepared. In a control group of 20 patients a prosthesis was manually adjusted to each patient by a neurosurgeon during surgery, without using CT-based reverse engineering/rapid prototyping. In each case, the prosthesis was implanted into the patient. The mean operating times in both groups were compared. Results In the group of patients with reverse engineering/rapid prototyping-based cranioplasty, the mean operating time was shorter (120.3 min) compared to that in the control group (136.5 min). The neurosurgeons found the new technology particularly useful in more complicated bone deficits with different curvatures in various planes. Conclusions Reverse engineering and rapid prototyping may reduce the time needed for cranioplasty neurosurgery and improve the prosthesis fitting. Such technologies may utilize data obtained by commonly used spiral CT scanners. The manufacturing of individually adjusted prostheses should be commonly used in patients planned for cranioplasty with synthetic material. PMID:22207125

  17. In vitro assessment of the contact mechanics of reverse-engineered distal humeral hemiarthroplasty prostheses.

    PubMed

    Willing, Ryan; Lapner, Michael; King, Graham J W; Johnson, James A

    2014-11-01

    Distal humeral hemiarthroplasty alters cartilage contact mechanics, which may predispose to osteoarthritis. Current prostheses do not replicate the native anatomy, and therefore contribute to these changes. We hypothesized that prostheses reverse-engineered from the native bone shape would provide similar contact patterns as the native articulation. Reverse-engineered hemiarthroplasty prostheses were manufactured for five cadaveric elbows based on CT images of the distal humerus. Passive flexion trials with constant muscle forces were performed with the native articulation intact while bone motions were recorded using a motion tracking system. Motion trials were then repeated after the distal humerus was replaced with a corresponding reverse-engineered prosthesis. Contact areas and patterns were reconstructed using computer models created from CT scan images combined with the motion tracker data. The total contact areas, as well as the contact area within smaller sub-regions of the ulna and radius, were analyzed for changes resulting from hemiarthroplasty using repeated-measures ANOVAs. Contact area at the ulna and radius decreased on average 42% (SD 19%, P=.008) and 41% (SD 42%, P=.096), respectively. Contact area decreases were not uniform throughout the different sub-regions, suggesting that contact patterns were also altered. Reverse-engineered prostheses did not reproduce the same contact pattern as the native joints, possibly because the thickness of the distal humerus cartilage layer was neglected when generating the prosthesis shapes or as a consequence of the increased stiffness of the metallic implants. Alternative design strategies and materials for hemiarthroplasty should be considered in future work. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. New Algorithm and Software (BNOmics) for Inferring and Visualizing Bayesian Networks from Heterogeneous Big Biological and Genetic Data

    PubMed Central

    Gogoshin, Grigoriy; Boerwinkle, Eric

    2017-01-01

    Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypothesis and testing and validating the existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software scalability and usability on the less than exotic computer hardware are a priority, as well as the applicability of the algorithm and software to the heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc. PMID:27681505

  19. Privacy-Preserving Patient Similarity Learning in a Federated Environment: Development and Analysis.

    PubMed

    Lee, Junghye; Sun, Jimeng; Wang, Fei; Wang, Shuang; Jun, Chi-Hyuck; Jiang, Xiaoqian

    2018-04-13

    There is an urgent need for the development of global analytic frameworks that can perform analyses in a privacy-preserving federated environment across multiple institutions without privacy leakage. A few studies on the topic of federated medical analysis have been conducted recently with the focus on several algorithms. However, none of them have solved similar patient matching, which is useful for applications such as cohort construction for cross-institution observational studies, disease surveillance, and clinical trials recruitment. The aim of this study was to present a privacy-preserving platform in a federated setting for patient similarity learning across institutions. Without sharing patient-level information, our model can find similar patients from one hospital to another. We proposed a federated patient hashing framework and developed a novel algorithm to learn context-specific hash codes to represent patients across institutions. The similarities between patients can be efficiently computed using the resulting hash codes of corresponding patients. To avoid security attack from reverse engineering on the model, we applied homomorphic encryption to patient similarity search in a federated setting. We used sequential medical events extracted from the Multiparameter Intelligent Monitoring in Intensive Care-III database to evaluate the proposed algorithm in predicting the incidence of five diseases independently. Our algorithm achieved averaged area under the curves of 0.9154 and 0.8012 with balanced and imbalanced data, respectively, in κ-nearest neighbor with κ=3. We also confirmed privacy preservation in similarity search by using homomorphic encryption. The proposed algorithm can help search similar patients across institutions effectively to support federated data analysis in a privacy-preserving manner. ©Junghye Lee, Jimeng Sun, Fei Wang, Shuang Wang, Chi-Hyuck Jun, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.04.2018.
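
    The core retrieval step, finding similar patients by comparing compact hash codes, can be sketched without the federated learning or homomorphic-encryption machinery: binary codes are compared by Hamming distance and the κ nearest neighbors vote on the outcome. The random-projection hashing below is a stand-in for the learned, context-specific hash function described in the abstract.

    ```python
    import numpy as np

    def random_projection_hash(X, n_bits=32, seed=0):
        """Stand-in hash function: sign of random projections (the paper learns these codes)."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_bits))
        return (X @ W > 0).astype(np.uint8)

    def hamming_knn_predict(query_code, codes, labels, k=3):
        """Predict a binary outcome by majority vote of the k nearest codes in Hamming distance."""
        dists = (codes != query_code).sum(axis=1)
        nearest = np.argsort(dists)[:k]
        return int(labels[nearest].mean() >= 0.5)

    # Toy example: "hospital A" holds reference patients, "hospital B" issues a query.
    rng = np.random.default_rng(4)
    X_ref = rng.normal(size=(200, 20))                         # feature vectors from sequential events
    y_ref = (X_ref[:, 0] + 0.5 * X_ref[:, 1] > 0).astype(int)  # assumed disease label
    codes_ref = random_projection_hash(X_ref)

    x_query = rng.normal(size=(1, 20))
    code_query = random_projection_hash(x_query)[0]
    print("predicted incidence:", hamming_knn_predict(code_query, codes_ref, y_ref, k=3))
    ```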

  20. New Algorithm and Software (BNOmics) for Inferring and Visualizing Bayesian Networks from Heterogeneous Big Biological and Genetic Data.

    PubMed

    Gogoshin, Grigoriy; Boerwinkle, Eric; Rodin, Andrei S

    2017-04-01

    Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypothesis and testing and validating the existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software scalability and usability on the less than exotic computer hardware are a priority, as well as the applicability of the algorithm and software to the heterogeneous datasets containing many data types-single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc.

  1. Voxel size dependency, reproducibility and sensitivity of an in vivo bone loading estimation algorithm

    PubMed Central

    Christen, Patrik; Schulte, Friederike A.; Zwahlen, Alexander; van Rietbergen, Bert; Boutroy, Stephanie; Melton, L. Joseph; Amin, Shreyasee; Khosla, Sundeep; Goldhahn, Jörg; Müller, Ralph

    2016-01-01

    A bone loading estimation algorithm was previously developed that provides in vivo loading conditions required for in vivo bone remodelling simulations. The algorithm derives a bone's loading history from its microstructure as assessed by high-resolution (HR) computed tomography (CT). This reverse engineering approach showed accurate and realistic results based on micro-CT and HR-peripheral quantitative CT images. However, its voxel size dependency, reproducibility and sensitivity still need to be investigated, which is the purpose of this study. Voxel size dependency was tested on cadaveric distal radii with micro-CT images scanned at 25 µm and downscaled to 50, 61, 75, 82, 100, 125 and 150 µm. Reproducibility was calculated with repeated in vitro as well as in vivo HR-pQCT measurements at 82 µm. Sensitivity was defined using HR-pQCT images from women with fracture versus non-fracture, and low versus high bone volume fraction, expecting similar and different loading histories, respectively. Our results indicate that the algorithm is voxel size independent within an average (maximum) error of 8.2% (32.9%) at 61 µm, but that the dependency increases considerably at voxel sizes bigger than 82 µm. In vitro and in vivo reproducibility are up to 4.5% and 10.2%, respectively, which is comparable to other in vitro studies and slightly higher than in other in vivo studies. Subjects with different bone volume fraction were clearly distinguished but not subjects with and without fracture. This is in agreement with bone adapting to customary loading but not to fall loads. We conclude that the in vivo bone loading estimation algorithm provides reproducible, sensitive and fairly voxel size independent results at up to 82 µm, but that smaller voxel sizes would be advantageous. PMID:26790999

  2. Privacy-Preserving Patient Similarity Learning in a Federated Environment: Development and Analysis

    PubMed Central

    Sun, Jimeng; Wang, Fei; Wang, Shuang; Jun, Chi-Hyuck; Jiang, Xiaoqian

    2018-01-01

    Background There is an urgent need for the development of global analytic frameworks that can perform analyses in a privacy-preserving federated environment across multiple institutions without privacy leakage. A few studies on the topic of federated medical analysis have been conducted recently with the focus on several algorithms. However, none of them have solved similar patient matching, which is useful for applications such as cohort construction for cross-institution observational studies, disease surveillance, and clinical trials recruitment. Objective The aim of this study was to present a privacy-preserving platform in a federated setting for patient similarity learning across institutions. Without sharing patient-level information, our model can find similar patients from one hospital to another. Methods We proposed a federated patient hashing framework and developed a novel algorithm to learn context-specific hash codes to represent patients across institutions. The similarities between patients can be efficiently computed using the resulting hash codes of corresponding patients. To avoid security attack from reverse engineering on the model, we applied homomorphic encryption to patient similarity search in a federated setting. Results We used sequential medical events extracted from the Multiparameter Intelligent Monitoring in Intensive Care-III database to evaluate the proposed algorithm in predicting the incidence of five diseases independently. Our algorithm achieved averaged area under the curves of 0.9154 and 0.8012 with balanced and imbalanced data, respectively, in κ-nearest neighbor with κ=3. We also confirmed privacy preservation in similarity search by using homomorphic encryption. Conclusions The proposed algorithm can help search similar patients across institutions effectively to support federated data analysis in a privacy-preserving manner. PMID:29653917

  3. Transcriptional network inference from functional similarity and expression data: a global supervised approach.

    PubMed

    Ambroise, Jérôme; Robert, Annie; Macq, Benoit; Gala, Jean-Luc

    2012-01-06

    An important challenge in systems biology is the inference of biological networks from postgenomic data. Among these biological networks, a gene transcriptional regulatory network focuses on interactions existing between transcription factors (TFs) and their corresponding target genes. A large number of reverse engineering algorithms have been proposed to infer such networks from gene expression profiles, but most current methods have relatively low predictive performance. In this paper, we introduce the novel TNIFSED method (Transcriptional Network Inference from Functional Similarity and Expression Data), which infers a transcriptional network from the integration of correlations and partial correlations of gene expression profiles and gene functional similarities through a supervised classifier. In the current work, TNIFSED was applied to predict the transcriptional network in Escherichia coli and in Saccharomyces cerevisiae, using datasets of 445 and 170 Affymetrix arrays, respectively. Using the area under the curve of the receiver operating characteristic and the F-measure as indicators, we showed the predictive performance of TNIFSED to be better than unsupervised state-of-the-art methods. TNIFSED performed slightly worse than the supervised SIRENE algorithm at identifying the target genes of TFs with a wide range of already identified target genes, but better for TFs with only a few identified target genes. Our results indicate that TNIFSED is complementary to the SIRENE algorithm, and particularly suitable to discover target genes of "orphan" TFs.

  4. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
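
    The optimization step can be illustrated with a small linear program: linearized sensitivities of a performance measure (for example, specific fuel consumption) and of constrained quantities (for example, fan stall margin) with respect to control trims are passed to an LP solver, which returns the trim vector. All names, sensitivities, and limits below are invented for illustration and are not the PSC model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Decision variables: trims to three assumed controls, e.g. [d_nozzle_area, d_fan_speed, d_inlet_ramp].
    # Objective: minimize the predicted change in specific fuel consumption (linearized sensitivities).
    sfc_sensitivity = np.array([0.8, -0.5, 0.3])               # d(SFC) per unit trim (assumed)

    # Inequality constraints A_ub @ x <= b_ub, e.g. "fan stall margin may drop by at most 2 units"
    # and "turbine temperature may rise by at most 15 units" (all sensitivities assumed).
    A_ub = np.array([
        [0.6, -1.2, 0.1],    # -d(stall margin)
        [2.0,  1.5, 0.4],    # d(turbine temperature)
    ])
    b_ub = np.array([2.0, 15.0])

    # Trim authority limits on each control.
    bounds = [(-1.0, 1.0), (-2.0, 2.0), (-0.5, 0.5)]

    res = linprog(c=sfc_sensitivity, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print("optimal trims:", np.round(res.x, 3), "predicted SFC change:", round(res.fun, 3))
    ```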

  5. Reversible and irreversible heat engine and refrigerator cycles

    NASA Astrophysics Data System (ADS)

    Leff, Harvey S.

    2018-05-01

    Although no reversible thermodynamic cycles exist in nature, nearly all cycles covered in textbooks are reversible. This is a review, clarification, and extension of results and concepts for quasistatic, reversible and irreversible processes and cycles, intended primarily for teachers and students. Distinctions between the latter process types are explained, with emphasis on clockwise (CW) and counterclockwise (CCW) cycles. Specific examples of each are examined, including Carnot, Kelvin and Stirling cycles. For the Stirling cycle, potentially useful task-specific efficiency measures are proposed and illustrated. Whether a cycle behaves as a traditional refrigerator or heat engine can depend on whether it is reversible or irreversible. Reversible and irreversible-quasistatic CW cycles both satisfy Carnot's inequality for thermal efficiency, η ≤ η_Carnot. Irreversible CCW cycles with two reservoirs satisfy the coefficient of performance inequality K ≤ K_Carnot. However, an arbitrary reversible cycle satisfies K ≥ K_Carnot when compared with a reversible Carnot cycle operating between its maximum and minimum temperatures, a potentially counterintuitive result.
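
    For reference, the two Carnot bounds quoted in this abstract can be written explicitly for reservoirs at temperatures T_h > T_c:

    ```latex
    % Standard Carnot bounds for cycles operating between T_h and T_c.
    \begin{align}
      \eta \le \eta_{\mathrm{Carnot}} &= 1 - \frac{T_c}{T_h}
          && \text{(CW cycles: heat engines)} \\
      K \le K_{\mathrm{Carnot}} &= \frac{T_c}{T_h - T_c}
          && \text{(irreversible CCW cycles: refrigerators)}
    \end{align}
    ```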

  6. The ATPG Attack for Reverse Engineering of Combinational Hybrid Custom-Programmable Circuits

    DTIC Science & Technology

    2017-03-23

    The ATPG Attack for Reverse Engineering of Combinational Hybrid Custom-Programmable Circuits. Raza Shafiq, Hamid Mahmoodi, Houman Homayoun, Hassan... programmable circuits. While the functionality of programmable cells is only known to trusted parties, effective techniques for activation and propagation...of the cells are introduced. The ATPG attack carefully studies the dependencies of programmable cells to develop their (partial) truth tables. Results

  7. System engineering approach to GPM retrieval algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, C. R.; Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both No and Do at each range bin. More recently, Liao (2004) proposed a solution to the Do ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the No and Do calculated at each bin, the rain rate can then be calculated based on a suitable rain-rate model. This paper develops a system engineering interface to the retrieval algorithms while remaining cognizant of system engineering issues so that it can be used to bridge the divide between algorithm physics and overall mission requirements. Additionally, in line with the systems approach, a methodology is developed such that the measurement requirements pass through the retrieval model and other subsystems and manifest themselves as measurement and other system constraints. A systems model has been developed for the retrieval algorithm that can be evaluated through system-analysis tools such as MATLAB/Simulink.

  8. Event-driven management algorithm of an Engineering documents circulation system

    NASA Astrophysics Data System (ADS)

    Kuzenkov, V.; Zebzeev, A.; Gromakov, E.

    2015-04-01

    A development methodology for an engineering document circulation system in a design company is reviewed. Discrete event-driven automaton models for describing project management algorithms are proposed, and the use of Petri nets for the dynamic design of projects is proposed.

  9. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
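
    A minimal serial sketch of one such predictor-corrector pair (a two-step Adams-Bashforth predictor with a trapezoidal Adams-Moulton corrector) is shown below on a scalar linear test system; the turbofan model, step sizes, and the partitioning of the computation across processors are not represented.

    ```python
    import numpy as np

    def ab2_am2_integrate(f, y0, t0, t_end, h):
        """Two-step Adams-Bashforth predictor with Adams-Moulton (trapezoidal) corrector."""
        ts = np.arange(t0, t_end + h, h)
        ys = np.zeros((len(ts), np.size(y0)))
        ys[0] = y0
        # Bootstrap the multistep method with one explicit Euler step.
        ys[1] = ys[0] + h * f(ts[0], ys[0])
        for n in range(1, len(ts) - 1):
            fn, fnm1 = f(ts[n], ys[n]), f(ts[n - 1], ys[n - 1])
            y_pred = ys[n] + h * (1.5 * fn - 0.5 * fnm1)               # AB2 predictor
            ys[n + 1] = ys[n] + 0.5 * h * (fn + f(ts[n + 1], y_pred))  # AM2 (trapezoid) corrector
        return ts, ys

    # Scalar stand-in for a linearized engine state: dy/dt = -2y + 1, steady state 0.5.
    f = lambda t, y: -2.0 * y + 1.0
    ts, ys = ab2_am2_integrate(f, y0=np.array([0.0]), t0=0.0, t_end=5.0, h=0.05)
    print("final state:", float(ys[-1, 0]), "(steady state 0.5)")
    ```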

  10. Engineered Intrinsic Bioremediation of Ammonium Perchlorate in Groundwater

    DTIC Science & Technology

    2010-12-01

    ...German Collection of Microorganisms and Cell Cultures); GA: Genetic Algorithms; GA-ANN: Genetic Algorithm Artificial Neural Network; GMO: genetically...for in situ treatment of perchlorate in groundwater. This is accomplished without the addition of genetically engineered microorganisms (GMOs) to the...perchlorate, even in the presence of oxygen and without the addition of genetically engineered microorganisms (GMOs) to the environment. This approach

  11. Analyzing gene perturbation screens with nested effects models in R and bioconductor.

    PubMed

    Fröhlich, Holger; Beissbarth, Tim; Tresch, Achim; Kostka, Dennis; Jacob, Juby; Spang, Rainer; Markowetz, F

    2008-11-01

    Nested effects models (NEMs) are a class of probabilistic models introduced to analyze the effects of gene perturbation screens visible in high-dimensional phenotypes like microarrays or cell morphology. NEMs reverse engineer upstream/downstream relations of cellular signaling cascades. NEMs take as input a set of candidate pathway genes and phenotypic profiles of perturbing these genes. NEMs return a pathway structure explaining the observed perturbation effects. Here, we describe the package nem, an open-source software to efficiently infer NEMs from data. Our software implements several search algorithms for model fitting and is applicable to a wide range of different data types and representations. The methods we present summarize the current state-of-the-art in NEMs. Our software is written in the R language and freely available via the Bioconductor project at http://www.bioconductor.org.

  12. Predicting human olfactory perception from chemical features of odor molecules.

    PubMed

    Keller, Andreas; Gerkin, Richard C; Guan, Yuanfang; Dhurandhar, Amit; Turu, Gabor; Szalai, Bence; Mainland, Joel D; Ihara, Yusuke; Yu, Chung Wen; Wolfinger, Russ; Vens, Celine; Schietgat, Leander; De Grave, Kurt; Norel, Raquel; Stolovitzky, Gustavo; Cecchi, Guillermo A; Vosshall, Leslie B; Meyer, Pablo

    2017-02-24

    It is still not possible to predict whether a given molecule will have a perceived odor or what olfactory percept it will produce. We therefore organized the crowd-sourced DREAM Olfaction Prediction Challenge. Using a large olfactory psychophysical data set, teams developed machine-learning algorithms to predict sensory attributes of molecules based on their chemoinformatic features. The resulting models accurately predicted odor intensity and pleasantness and also successfully predicted 8 among 19 rated semantic descriptors ("garlic," "fish," "sweet," "fruit," "burnt," "spices," "flower," and "sour"). Regularized linear models performed nearly as well as random forest-based ones, with a predictive accuracy that closely approaches a key theoretical limit. These models help to predict the perceptual qualities of virtually any molecule with high accuracy and also reverse-engineer the smell of a molecule. Copyright © 2017, American Association for the Advancement of Science.

  13. Turbofan engine demonstration of sensor failure detection

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood

    1991-01-01

    In the paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full, nonafterburning, power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.

  14. Reverse engineering physical models employing a sensor integration between 3D stereo detection and contact digitization

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Lin, Grier C. I.

    1997-12-01

    A vision-driven automatic digitization process for free-form surface reconstruction has been developed, with a coordinate measurement machine (CMM) equipped with a touch-triggered probe and a CCD camera, for reverse engineering physical models. The process integrates 3D stereo detection, data filtering, Delaunay triangulation, and adaptive surface digitization into a single process of surface reconstruction. By using this innovative approach, surface reconstruction can be implemented automatically and accurately. Least-squares B-spline surface models with controlled digitization accuracy can be generated for further application in product design and manufacturing processes. One industrial application indicates that this approach is feasible, and that the processing time required in the reverse engineering process can be reduced by more than 85%.

  15. Mobile Timekeeping Application Built on Reverse-Engineered JPL Infrastructure

    NASA Technical Reports Server (NTRS)

    Witoff, Robert J.

    2013-01-01

    Every year, non-exempt employees cumulatively waste over one man-year tracking their time and using the timekeeping Web page to save those times. This app eliminates this waste. The innovation is a native iPhone app. Libraries were built around a reverse-engineered JPL API. It represents a punch-in/punch-out paradigm for timekeeping. It is accessible natively via iPhones, and features ease of access. Any non-exempt employee can natively punch in and out, as well as save and view their JPL timecard. This app is built on custom libraries created by reverse-engineering the standard timekeeping application. Communication is through custom libraries that re-route traffic through BrowserRAS (remote access service). This has value at any center where employees track their time.

  16. Lifted worm algorithm for the Ising model

    NASA Astrophysics Data System (ADS)

    Elçi, Eren Metin; Grimm, Jens; Ding, Lijie; Nasrawi, Abrahim; Garoni, Timothy M.; Deng, Youjin

    2018-04-01

    We design an irreversible worm algorithm for the zero-field ferromagnetic Ising model by using the lifting technique. We study the dynamic critical behavior of an energylike observable on both the complete graph and toroidal grids, and compare our findings with reversible algorithms such as the Prokof'ev-Svistunov worm algorithm. Our results show that the lifted worm algorithm improves the dynamic exponent of the energylike observable on the complete graph and leads to a significant constant improvement on toroidal grids.

  17. An algorithm on simultaneous optimization of performance and mass parameters of open-cycle liquid-propellant engine of launch vehicles

    NASA Astrophysics Data System (ADS)

    Eskandari, M. A.; Mazraeshahi, H. K.; Ramesh, D.; Montazer, E.; Salami, E.; Romli, F. I.

    2017-12-01

    In this paper, a new method for the determination of optimum parameters of an open-cycle liquid-propellant engine of launch vehicles is introduced. The parameters affecting the objective function, which is the ratio of specific impulse to gross mass of the launch vehicle, are chosen to achieve maximum specific impulse as well as minimum mass for the structure of the engine, tanks, etc. The proposed algorithm uses a constant integral of thrust with respect to time for a launch vehicle with specific diameter and length to calculate the optimum working condition. The results of this novel algorithm are compared to those obtained using the Genetic Algorithm method, and they are also validated against the results of an existing launch vehicle.

  18. Minimal algorithm for running an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Stoica, V.; Borborean, A.; Ciocan, A.; Manciu, C.

    2018-01-01

    The control of internal combustion engines is a well-known topic within the automotive industry and is widely applied. However, in research laboratories and universities, a commercial control system is not the best solution because of its predetermined operating algorithms and calibrations (accessible only by the manufacturer), which do not allow substantial intervention from outside, while dedicated laboratory solutions on the market are very expensive. Consequently, in this paper we present the minimal algorithm required to start up and run an internal combustion engine. The presented solution can be adapted to run on capable microcontrollers available on the market at the present time and at an affordable price. The presented algorithm was implemented in LabVIEW and runs on a CompactRIO hardware platform.

  19. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    NASA Astrophysics Data System (ADS)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires, as one of its stages, the identification of the most appropriate number of mixture components, so that the resulting mixture model fits the data through a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) is a combination of the reversible jump (RJ) concept and the Markov Chain Monte Carlo (MCMC) concept, used by some researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the concepts of birth/death and split-merge with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the observed case. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the unknown number of mixture components in Bayesian mixture modeling of microarray data in Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, where the number of mixture components is not known in advance.

  20. Comments on "Failures in detecting volcanic ash from a satellite-based technique"

    USGS Publications Warehouse

    Prata, F.; Bluth, G.; Rose, B.; Schneider, D.; Tupper, A.

    2001-01-01

    The recent paper by Simpson et al. [Remote Sens. Environ. 72 (2000) 191.] on failures to detect volcanic ash using the 'reverse' absorption technique provides a timely reminder of the danger that volcanic ash presents to aviation and the urgent need for some form of effective remote detection. The paper unfortunately suffers from a fundamental flaw in its methodology and numerous errors of fact and interpretation. For the moment, the 'reverse' absorption technique provides the best means for discriminating volcanic ash clouds from meteorological clouds. The purpose of our comment is not to defend any particular algorithm; rather, we point out some problems with Simpson et al.'s analysis and re-state the conditions under which the 'reverse' absorption algorithm is likely to succeed. © 2001 Elsevier Science Inc. All rights reserved.

  1. Preliminary flight evaluation of an engine performance optimization algorithm

    NASA Technical Reports Server (NTRS)

    Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.

    1991-01-01

    A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW 1128 engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust specific fuel consumption reduction of 1 pct., up to 100 R decreases in FTIT, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next generation tactical and transport aircraft.

  2. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology

    PubMed Central

    Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.

    2017-01-01

    Background We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simpler case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). Conclusions These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling. PMID:28813442

  3. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology.

    PubMed

    Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R

    2017-01-01

    We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simpler case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling.

  4. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    PubMed

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

    We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and the fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to extract the underlying dynamics present in the time series expression data accurately. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has been first applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a maximum deterioration of just over 15% in the worst case.
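
    As a rough illustration of the general idea only (not the authors' hybrid framework), the sketch below fits a small sigmoidal recurrent network to synthetic expression time series with a basic global-best particle swarm; the network form, parameter values and data are invented for demonstration.

```python
# Illustrative sketch: fit a small recurrent gene-network model
# x_{t+1} = sigmoid(W x_t + b) to time-series data with a basic particle swarm,
# standing in for the paper's hybrid swarm-intelligence training (details differ).
import numpy as np

rng = np.random.default_rng(3)
N_GENES, T = 3, 20

def simulate(params, x0, steps):
    W = params[:N_GENES * N_GENES].reshape(N_GENES, N_GENES)
    b = params[N_GENES * N_GENES:]
    traj = [x0]
    for _ in range(steps - 1):
        traj.append(1.0 / (1.0 + np.exp(-(W @ traj[-1] + b))))
    return np.array(traj)

# Ground-truth network and synthetic expression time series.
true_params = rng.normal(0, 1, N_GENES * N_GENES + N_GENES)
data = simulate(true_params, rng.random(N_GENES), T)

def cost(params):
    return np.mean((simulate(params, data[0], T) - data) ** 2)

# Basic global-best PSO over the model parameters.
n_particles, dim, iters = 30, len(true_params), 300
pos = rng.normal(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()
print("fit error of best particle:", cost(gbest))
```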

  5. PREMER: a Tool to Infer Biological Networks.

    PubMed

    Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R

    2017-10-04

    Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
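
    PREMER itself is implemented in FORTRAN 90 with Python and MATLAB/Octave interfaces; as a rough illustration of the underlying information-theoretic idea only, the following minimal sketch infers an undirected network by thresholding pairwise mutual information estimated from histograms. The estimator, threshold and data are placeholders, not the PREMER algorithms.

```python
# Minimal sketch of mutual-information-based network inference (illustrative only;
# not the PREMER implementation). Assumes `data` is a (samples x genes) array.
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate MI between two variables from a 2D histogram (plug-in estimator)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def infer_network(data, threshold=0.2, bins=8):
    """Return an adjacency matrix: edge (i, j) if MI(i, j) exceeds the threshold."""
    n_genes = data.shape[1]
    adj = np.zeros((n_genes, n_genes), dtype=bool)
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            if mutual_information(data[:, i], data[:, j], bins) > threshold:
                adj[i, j] = adj[j, i] = True
    return adj

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g0 = rng.normal(size=200)
    # Gene 1 depends on gene 0; gene 2 is independent noise.
    data = np.column_stack([g0, g0 + 0.3 * rng.normal(size=200), rng.normal(size=200)])
    print(infer_network(data))
```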

  6. Manifold absolute pressure estimation using neural network with hybrid training algorithm

    PubMed Central

    Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP by using only the measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant of the hybrid algorithm, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. By using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the other algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing MAP estimates close to the actual values. PMID:29190779
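
    A minimal sketch of the estimation idea (predicting MAP from throttle position and engine speed with a small feed-forward network) follows. It substitutes scikit-learn's standard training for the paper's hybrid LM/BR/PSO scheme, and the data and coefficients are synthetic placeholders.

```python
# Illustrative sketch: estimate manifold absolute pressure (MAP) from throttle
# position and engine speed with a small feed-forward network. The paper's hybrid
# LM/BR/PSO training is replaced here by scikit-learn's standard solver.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
throttle = rng.uniform(0, 100, 2000)          # throttle position, %
speed = rng.uniform(800, 6000, 2000)          # engine speed, rpm
# Synthetic stand-in for measured MAP (kPa); real data would come from the ECU.
map_kpa = 30 + 0.6 * throttle + 0.004 * speed + rng.normal(0, 1.5, 2000)

X = np.column_stack([throttle, speed])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=2000,
                                   random_state=0))
model.fit(X, map_kpa)
print("Predicted MAP at 40% throttle, 2500 rpm:",
      model.predict([[40.0, 2500.0]])[0])
```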

  7. Automation of reverse engineering process in aircraft modeling and related optimization problems

    NASA Technical Reports Server (NTRS)

    Li, W.; Swetits, J.

    1994-01-01

    During the year of 1994, the engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was to find an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress was made during this phase of research and computation time was reduced from 30 min. to 2 min. for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, where one has only one control parameter for the fitting process - the error tolerance. At the same time the surface modeling software developed by Imageware was tested. Research indicated that a substantially improved fit of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to incorporate Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for quadratic programming problems.

  8. Energy Efficient Engine program advanced turbofan nacelle definition study

    NASA Technical Reports Server (NTRS)

    Howe, David C.; Wynosky, T. A.

    1985-01-01

    Advanced, low drag, nacelle configurations were defined for some of the more promising propulsion systems identified in the earlier Benefit/Cost Study, to assess the benefits associated with these advanced technology nacelles and formulate programs for developing these nacelles and low volume thrust reversers/spoilers to a state of technology readiness in the early 1990's. The study results established the design feasibility of advanced technology, slim line nacelles applicable to advanced technology, high bypass ratio turbofan engines. Design feasibility was also established for two low volume thrust reverse/spoiler concepts that meet or exceed the required effectiveness for these engines. These nacelle and thrust reverse/spoiler designs were shown to be applicable in engines with takeoff thrust sizes ranging from 24,000 to 60,000 pounds. The reduced weight, drag, and cost of the advanced technology nacelle installations relative to current technology nacelles offer a mission fuel burn savings ranging from 3.0 to 4.5 percent and direct operating cost plus interest improvements from 1.6 to 2.2 percent.

  9. Neural mechanisms underlying sensitivity to reverse-phi motion in the fly

    PubMed Central

    Meier, Matthias; Serbe, Etienne; Eichner, Hubert; Borst, Alexander

    2017-01-01

    Optical illusions provide powerful tools for mapping the algorithms and circuits that underlie visual processing, revealing structure through atypical function. Of particular note in the study of motion detection has been the reverse-phi illusion. When contrast reversals accompany discrete movement, detected direction tends to invert. This occurs across a wide range of organisms, spanning humans and invertebrates. Here, we map an algorithmic account of the phenomenon onto neural circuitry in the fruit fly Drosophila melanogaster. Through targeted silencing experiments in tethered walking flies as well as electrophysiology and calcium imaging, we demonstrate that ON- or OFF-selective local motion detector cells T4 and T5 are sensitive to certain interactions between ON and OFF. A biologically plausible detector model accounts for subtle features of this particular form of illusory motion reversal, like the re-inversion of turning responses occurring at extreme stimulus velocities. In light of comparable circuit architecture in the mammalian retina, we suggest that similar mechanisms may apply even to human psychophysics. PMID:29261684

  10. Neural mechanisms underlying sensitivity to reverse-phi motion in the fly.

    PubMed

    Leonhardt, Aljoscha; Meier, Matthias; Serbe, Etienne; Eichner, Hubert; Borst, Alexander

    2017-01-01

    Optical illusions provide powerful tools for mapping the algorithms and circuits that underlie visual processing, revealing structure through atypical function. Of particular note in the study of motion detection has been the reverse-phi illusion. When contrast reversals accompany discrete movement, detected direction tends to invert. This occurs across a wide range of organisms, spanning humans and invertebrates. Here, we map an algorithmic account of the phenomenon onto neural circuitry in the fruit fly Drosophila melanogaster. Through targeted silencing experiments in tethered walking flies as well as electrophysiology and calcium imaging, we demonstrate that ON- or OFF-selective local motion detector cells T4 and T5 are sensitive to certain interactions between ON and OFF. A biologically plausible detector model accounts for subtle features of this particular form of illusory motion reversal, like the re-inversion of turning responses occurring at extreme stimulus velocities. In light of comparable circuit architecture in the mammalian retina, we suggest that similar mechanisms may apply even to human psychophysics.

  11. The whole number axis integer linear transformation reversible information hiding algorithm on wavelet domain

    NASA Astrophysics Data System (ADS)

    Jiang, Zhuo; Xie, Chengjun

    2013-12-01

    This paper improves the reversible integer linear transform algorithm defined on the finite interval [0, 255] so that it can realize a reversible integer linear transform on the whole number axis while shielding the data LSB (least significant bit). First, the method applies a lifting-scheme integer wavelet transform to the original image and selects the transformed high-frequency areas as the information hiding area; the high-frequency coefficient blocks are then transformed in an integer linear way and the secret information is embedded in the LSB of each coefficient, with information hiding completed by the corresponding embedding steps. To extract the data bits and recover the host image, a similar reverse procedure is conducted, and the original host image can be recovered losslessly. Simulation results using the CDF (m, n) and DD (m, n) families of wavelet transforms show that this method offers good secrecy and concealment. The method can be applied to information security domains such as medicine, law and the military.
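
    To illustrate the general principle only (lifting-based integer wavelet transform plus LSB embedding), the sketch below uses the simple integer Haar/S-transform rather than the paper's CDF (m, n) or DD (m, n) transforms or its whole-axis integer linear transform; the bookkeeping of saved LSBs is a naive stand-in for the paper's reversibility mechanism.

```python
# Illustrative sketch of lifting-based integer wavelet + LSB embedding, using the
# simple Haar/S-transform; perfectly reversible for integer signals.
import numpy as np

def haar_lift_forward(x):
    """Integer Haar (S-transform) via lifting: returns (approximation, detail)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even                      # predict step
    a = even + d // 2                   # update step (integer floor division)
    return a, d

def haar_lift_inverse(a, d):
    even = a - d // 2
    odd = d + even
    x = np.empty(2 * len(a), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

def embed_bits(detail, bits):
    """Hide bits in the LSBs of detail coefficients; return stego coefficients
    plus the original LSBs needed here for lossless recovery of the host."""
    original_lsb = detail[:len(bits)] & 1
    stego = detail.copy()
    stego[:len(bits)] = (stego[:len(bits)] & ~1) | np.asarray(bits)
    return stego, original_lsb

if __name__ == "__main__":
    host = np.array([52, 55, 61, 66, 70, 61, 64, 73])
    a, d = haar_lift_forward(host)
    stego_d, saved_lsb = embed_bits(d, [1, 0, 1])
    extracted = stego_d[:3] & 1                      # recover hidden bits
    restored_d = stego_d.copy()
    restored_d[:3] = (restored_d[:3] & ~1) | saved_lsb
    assert np.array_equal(haar_lift_inverse(a, restored_d), host)
    print("hidden bits recovered:", extracted)
```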

  12. CRISPR: a Versatile Tool for Both Forward and Reverse Genetics Research

    PubMed Central

    Gurumurthy, Channabasavaiah B.; Grati, M'hamed; Ohtsuka, Masato; Schilit, Samantha L.P.; Quadros, Rolen M.; Liu, Xue Zhong

    2016-01-01

    Human genetics research employs the two opposing approaches of forward and reverse genetics. While forward genetics identifies and links a mutation to an observed disease etiology, reverse genetics induces mutations in model organisms to study their role in disease. In most cases, causality for mutations identified by forward genetics is confirmed by reverse genetics through the development of genetically engineered animal models and an assessment of whether the model can recapitulate the disease. While many technological advances have helped improve these approaches, some gaps still remain. CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated) system, which has emerged as a revolutionary genetic engineering tool, holds great promise for closing such gaps. By combining the benefits of forward and reverse genetics, it has dramatically expedited human genetics research. We provide a perspective on the power of CRISPR-based forward and reverse genetics tools in human genetics and discuss its applications using some disease examples. PMID:27384229

  13. Sorting signed permutations by short operations.

    PubMed

    Galvão, Gustavo Rodrigues; Lee, Orlando; Dias, Zanoni

    2015-01-01

    During evolution, global mutations may alter the order and the orientation of the genes in a genome. Such mutations are referred to as rearrangement events, or simply operations. In unichromosomal genomes, the most common operations are reversals, which are responsible for reversing the order and orientation of a sequence of genes, and transpositions, which are responsible for switching the location of two contiguous portions of a genome. The problem of computing the minimum sequence of operations that transforms one genome into another - which is equivalent to the problem of sorting a permutation into the identity permutation - is a well-studied problem that finds application in comparative genomics. There are a number of works concerning this problem in the literature, but they generally do not take into account the length of the operations (i.e. the number of genes affected by the operations). Since it has been observed that short operations are prevalent in the evolution of some species, algorithms that efficiently solve this problem in the special case of short operations are of interest. In this paper, we investigate the problem of sorting a signed permutation by short operations. More precisely, we study four flavors of this problem: (i) the problem of sorting a signed permutation by reversals of length at most 2; (ii) the problem of sorting a signed permutation by reversals of length at most 3; (iii) the problem of sorting a signed permutation by reversals and transpositions of length at most 2; and (iv) the problem of sorting a signed permutation by reversals and transpositions of length at most 3. We present polynomial-time solutions for problems (i) and (iii), a 5-approximation for problem (ii), and a 3-approximation for problem (iv). Moreover, we show that the expected approximation ratio of the 5-approximation algorithm is not greater than 3 for random signed permutations with more than 12 elements. Finally, we present experimental results that show that the approximation ratios of the approximation algorithms cannot be smaller than 3. In particular, this means that the approximation ratio of the 3-approximation algorithm is tight.
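
    As a simple illustration of problem (i) above, the sketch below sorts a signed permutation using only reversals of length 1 and 2 with a bubble-sort strategy; it is not the paper's polynomial-time algorithm and makes no claim of a minimal operation count.

```python
# Illustrative (non-optimal) sorter: sorts a signed permutation to the identity
# using only reversals of length 1 and 2, with a simple bubble-sort strategy.
def reversal(perm, i, j):
    """Reverse the segment perm[i..j] and flip the signs of its elements."""
    perm[i:j + 1] = [-x for x in reversed(perm[i:j + 1])]

def sort_by_short_reversals(perm):
    perm = list(perm)
    ops = []
    n = len(perm)
    # Bubble the absolute values into place with adjacent (length-2) reversals.
    for _ in range(n):
        for i in range(n - 1):
            if abs(perm[i]) > abs(perm[i + 1]):
                reversal(perm, i, i + 1)
                ops.append((i, i + 1))
    # Fix any remaining negative signs with length-1 reversals.
    for i in range(n):
        if perm[i] < 0:
            reversal(perm, i, i)
            ops.append((i, i))
    return perm, ops

if __name__ == "__main__":
    sorted_perm, ops = sort_by_short_reversals([+3, -1, -2])
    print(sorted_perm)                 # [1, 2, 3]
    print(len(ops), "operations:", ops)
```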

  14. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.

  15. Algorithms for database-dependent search of MS/MS data.

    PubMed

    Matthiesen, Rune

    2013-01-01

    The frequently used bottom-up strategy for identification of proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take as input MS/MS spectra and a protein sequence database are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent search of MS/MS spectra, and most of the programs have excellent user documentation. The aim here is therefore to outline the algorithm strategy behind different search engines rather than providing software user manuals. The process of database-dependent search can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have been put into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementation, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow a fair comparison. In other words, an algorithmic idea can still be worth considering even if the software implementation has been demonstrated to be suboptimal. The aim in this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps mentioned above, whereas the final step of protein inference is much less developed for most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses a stand-alone program, SIR, for protein inference that can import a Mascot search result.
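
    A toy sketch of the peptide-scoring step, the core of any database-dependent search, is given below: it generates theoretical singly charged b- and y-ion masses for each candidate peptide and counts matches to observed peaks within a tolerance. The residue masses are standard monoisotopic values; the scoring is far simpler than that of Mascot or VEMS.

```python
# Toy sketch of peptide scoring in a database-dependent search: count how many
# theoretical b/y fragment masses of a candidate peptide match observed MS/MS
# peaks within a tolerance. Real engines use far richer scoring schemes.
MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
        "T": 101.04768, "L": 113.08406, "N": 114.04293, "D": 115.02694,
        "Q": 128.05858, "K": 128.09496, "E": 129.04259, "M": 131.04049,
        "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
PROTON, WATER = 1.00728, 18.01056

def fragment_masses(peptide):
    """Singly charged b- and y-ion m/z values for a peptide."""
    masses = [MONO[aa] for aa in peptide]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
    return b + y

def score(peptide, observed_peaks, tol=0.5):
    """Number of theoretical fragments matched by an observed peak within tol (Da)."""
    return sum(any(abs(mz - p) <= tol for p in observed_peaks)
               for mz in fragment_masses(peptide))

if __name__ == "__main__":
    database = ["PEPTLDE", "SAMPLER", "TESTSEQ"]          # toy sequence database
    spectrum = fragment_masses("SAMPLER")[:8]             # pretend-observed peaks
    best = max(database, key=lambda pep: score(pep, spectrum))
    print(best, score(best, spectrum))
```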

  16. Gas flow calculation method of a ramjet engine

    NASA Astrophysics Data System (ADS)

    Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir

    2017-11-01

    In the present study, a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. For the realization of the calculation algorithm, a data storage system is proposed that does not depend on the mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
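
    Godunov's idea is easiest to see on the simplest hyperbolic equation; the sketch below applies the first-order Godunov (upwind) finite-volume scheme to linear advection with periodic boundaries. It shows only the core idea, not the FlashFlow solver or the full gas-dynamics equations on unstructured meshes.

```python
# Minimal illustration of Godunov's scheme for linear advection u_t + a u_x = 0:
# the interface flux is the exact Riemann solution, i.e. the upwind state.
import numpy as np

def godunov_advection(u0, a, dx, dt, steps):
    """First-order Godunov (upwind) finite-volume scheme with periodic boundaries."""
    u = u0.copy()
    for _ in range(steps):
        # Godunov flux at interface i-1/2: exact Riemann solution = upwind state.
        flux = a * np.roll(u, 1) if a > 0 else a * u
        u = u - dt / dx * (np.roll(flux, -1) - flux)
    return u

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    u0 = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # square pulse initial condition
    dx, a = x[1] - x[0], 1.0
    dt = 0.5 * dx / abs(a)                            # CFL number 0.5
    u = godunov_advection(u0, a, dx, dt, steps=200)
    print("mass conserved:", np.isclose(u.sum(), u0.sum()))
```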

  17. The results of a low-speed wind tunnel test to investigate the effects of the Refan JT8D engine target thrust reverser on the stability and control characteristics of the Boeing 727-200 airplane

    NASA Technical Reports Server (NTRS)

    Kupcis, E. A.

    1974-01-01

    The effects of the Refan JT8D side engine target thrust reverser on the stability and control characteristics of the Boeing 727-200 airplane were investigated using the Boeing-Vertol 20 x 20 ft Low-Speed Wind Tunnel. A powered model of the 727-200 was tested in ground effect in the landing configuration. The Refan target reverser configuration was evaluated relative to the basic production 727 airplane with its clamshell-deflector door thrust reverser design. The Refan configuration had slightly improved directional control characteristics relative to the basic airplane. Clocking the Refan thrust reversers 20 degrees outboard to direct the reverser flow away from the vertical tail, had little effect on directional control. However, clocking them 20 degrees inboard resulted in a complete loss of rudder effectiveness for speeds greater than 90 knots. Variations in Refan reverser lip/fence geometry had a minor effect on directional control.

  18. Active Engine Mount Technology for Automobiles

    NASA Technical Reports Server (NTRS)

    Rahman, Z.; Spanos, J.

    1996-01-01

    We present a narrow-band tracking control using a variant of the Least Mean Square (LMS) algorithm [1,2,3] for suppressing automobile engine/drive-train vibration disturbances. The algorithm presented here has a simple structure and may be implemented in a low-cost microcontroller.

  19. Reverse engineering of integrated circuits

    DOEpatents

    Chisholm, Gregory H.; Eckmann, Steven T.; Lain, Christopher M.; Veroff, Robert L.

    2003-01-01

    Software and a method therein to analyze circuits. The software comprises several tools, each of which perform particular functions in the Reverse Engineering process. The analyst, through a standard interface, directs each tool to the portion of the task to which it is most well suited, rendering previously intractable problems solvable. The tools are generally used iteratively to produce a successively more abstract picture of a circuit, about which incomplete a priori knowledge exists.

  20. Predicting human immunodeficiency virus inhibitors using multi-dimensional Bayesian network classifiers.

    PubMed

    Borchani, Hanen; Bielza, Concha; Toro, Carlos; Larrañaga, Pedro

    2013-03-01

    Our aim is to use multi-dimensional Bayesian network classifiers in order to predict the human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors given an input set of respective resistance mutations that an HIV patient carries. Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models especially designed to solve multi-dimensional classification problems, where each input instance in the data set has to be assigned simultaneously to multiple output class variables that are not necessarily binary. In this paper, we introduce a new method, named MB-MBC, for learning MBCs from data by determining the Markov blanket around each class variable using the HITON algorithm. Our method is applied to both reverse transcriptase and protease data sets obtained from the Stanford HIV-1 database. Regarding the prediction of antiretroviral combination therapies, the experimental study shows promising results in terms of classification accuracy compared with state-of-the-art MBC learning algorithms. For reverse transcriptase inhibitors, we get 71% and 11% in mean and global accuracy, respectively; while for protease inhibitors, we get more than 84% and 31% in mean and global accuracy, respectively. In addition, the analysis of MBC graphical structures lets us gain insight into both known and novel interactions between reverse transcriptase and protease inhibitors and their respective resistance mutations. MB-MBC algorithm is a valuable tool to analyze the HIV-1 reverse transcriptase and protease inhibitors prediction problem and to discover interactions within and between these two classes of inhibitors. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Electric Power Engineering Cost Predicting Model Based on the PCA-GA-BP

    NASA Astrophysics Data System (ADS)

    Wen, Lei; Yu, Jiake; Zhao, Xin

    2017-10-01

    In this paper, a hybrid prediction algorithm, the PCA-GA-BP model, is proposed. The PCA algorithm is used to reduce the correlation between indicators of the original data and to decrease the difficulty of the BP neural network in high-dimensional calculation. The BP neural network is established to estimate the cost of power transmission projects. The results show that the PCA-GA-BP algorithm can improve the prediction of electric power engineering cost.
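
    A minimal sketch of the PCA plus BP-network part of such a pipeline is shown below, using scikit-learn; the GA stage that the paper uses to optimize the BP network is omitted, and the cost indicators and data are invented for illustration.

```python
# Minimal sketch of the PCA + BP-network part of a PCA-GA-BP cost predictor.
# The GA stage is omitted; features and data are invented for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Invented cost indicators, e.g. line length, voltage class, terrain factor,
# tower count, conductor price index, labour index.
X = rng.uniform(0, 1, size=(300, 6))
cost = 10 + 5 * X[:, 0] + 3 * X[:, 1] + 2 * X[:, 3] + rng.normal(0, 0.3, 300)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),                 # decorrelate indicators, reduce dimension
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, cost)
print("predicted cost of the first project:", model.predict(X[:1])[0])
```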

  2. The 4A Metric Algorithm: A Unique E-Learning Engineering Solution Designed via Neuroscience to Counter Cheating and Reduce Its Recidivism by Measuring Student Growth through Systemic Sequential Online Learning

    ERIC Educational Resources Information Center

    Osler, James Edward

    2016-01-01

    This paper provides a novel instructional methodology that is a unique E-Learning engineered "4A Metric Algorithm" designed to conceptually address the four main challenges faced by 21st century students, who are tempted to cheat in a myriad of higher education settings (face to face, hybrid, and online). The algorithmic online…

  3. The knowledge instinct, cognitive algorithms, modeling of language and cultural evolution

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.

    2008-04-01

    The talk discusses mechanisms of the mind and their engineering applications. The past attempts at designing "intelligent systems" encountered mathematical difficulties related to algorithmic complexity. The culprit turned out to be logic, which in one way or another was used not only in logic rule systems, but also in statistical, neural, and fuzzy systems. Algorithmic complexity is related to Godel's theory, a most fundamental mathematical result. These difficulties were overcome by replacing logic with a dynamic process "from vague to crisp," dynamic logic. It leads to algorithms overcoming combinatorial complexity, and resulting in orders of magnitude improvement in classical problems of detection, tracking, fusion, and prediction in noise. I present engineering applications to pattern recognition, detection, tracking, fusion, financial predictions, and Internet search engines. Mathematical and engineering efficiency of dynamic logic can also be understood as cognitive algorithm, which describes fundamental property of the mind, the knowledge instinct responsible for all our higher cognitive functions: concepts, perception, cognition, instincts, imaginations, intuitions, emotions, including emotions of the beautiful. I present our latest results in modeling evolution of languages and cultures, their interactions in these processes, and role of music in cultural evolution. Experimental data is presented that support the theory. Future directions are outlined.

  4. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted take offs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms were developed and evaluated. Based on the performance and maturity of the developed algorithms two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.

  5. Validation of simultaneous reverse optimization reconstruction algorithm in a practical circular subaperture stitching interferometer

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Li, Dong; Liu, Yu; Liu, Jingxiao; Li, Jingsong; Yu, Benli

    2017-11-01

    We demonstrate the validity of the simultaneous reverse optimization reconstruction (SROR) algorithm in circular subaperture stitching interferometry (CSSI); the algorithm was previously proposed for non-null aspheric annular subaperture stitching interferometry (ASSI). The merits of the modified SROR algorithm in CSSI, such as automatic retrace error correction, no need for overlap and even tolerance of missed coverage, are analyzed in detail in simulations and experiments. Meanwhile, a practical CSSI system is proposed for this demonstration. An optical wedge is employed to deflect the incident beam for subaperture scanning by its rotation and shift instead of a six-axis motion-control system. The reference path can also provide variable Zernike defocus for each subaperture test, which decreases the fringe density. Experiments validating the SROR algorithm in this CSSI are implemented with cross-validation by testing a paraboloidal mirror, a flat mirror and an astigmatism mirror. It is an indispensable supplement to SROR application in general subaperture stitching interferometry.

  6. Quantum Stirling heat engine and refrigerator with single and coupled spin systems

    NASA Astrophysics Data System (ADS)

    Huang, Xiao-Li; Niu, Xin-Ya; Xiu, Xiao-Ming; Yi, Xue-Xi

    2014-02-01

    We study the reversible quantum Stirling cycle with a single spin or two coupled spins as the working substance. With the single spin as the working substance, we find that under certain conditions the reversed cycle of a heat engine is NOT a refrigerator, this feature holds true for a Stirling heat engine with an ion trapped in a shallow potential as its working substance. The efficiency of quantum Stirling heat engine can be higher than the efficiency of the Carnot engine, but the performance coefficient of the quantum Stirling refrigerator is always lower than its classical counterpart. With two coupled spins as the working substance, we find that a heat engine can turn to a refrigerator due to the increasing of the coupling constant, this can be explained by the properties of the isothermal line in the magnetic field-entropy plane.

  7. Windowed time-reversal music technique for super-resolution ultrasound imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Labyed, Yassin

    Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements.

  8. Computer Design Technology of the Small Thrust Rocket Engines Using CAE / CAD Systems

    NASA Astrophysics Data System (ADS)

    Ryzhkov, V.; Lapshin, E.

    2018-01-01

    The paper presents an algorithm for designing a liquid small-thrust rocket engine; the design process consists of five aggregated stages with feedback. Three stages of the algorithm provide engineering support for the design, and two stages cover the actual engine design. A distinctive feature of the proposed approach is a deep study of the main technical solutions at the stage of engineering analysis and interaction with the created knowledge (data) base, which accelerates the process and provides enhanced design quality. Using the multifunctional graphics package Siemens NX allows the final product, the rocket engine, and a set of design documentation to be obtained in a fairly short time; the engine design does not require long experimental development.

  9. Coarse-graining and self-dissimilarity of complex networks

    NASA Astrophysics Data System (ADS)

    Itzkovitz, Shalev; Levitt, Reuven; Kashtan, Nadav; Milo, Ron; Itzkovitz, Michael; Alon, Uri

    2005-01-01

    Can complex engineered and biological networks be coarse-grained into smaller and more understandable versions in which each node represents an entire pattern in the original network? To address this, we define coarse-graining units as connectivity patterns which can serve as the nodes of a coarse-grained network and present algorithms to detect them. We use this approach to systematically reverse-engineer electronic circuits, forming understandable high-level maps from incomprehensible transistor wiring: first, a coarse-grained version in which each node is a gate made of several transistors is established. Then the coarse-grained network is itself coarse-grained, resulting in a high-level blueprint in which each node is a circuit module made of many gates. We apply our approach also to a mammalian protein signal-transduction network, to find a simplified coarse-grained network with three main signaling channels that resemble multi-layered perceptrons made of cross-interacting MAP-kinase cascades. We find that both biological and electronic networks are “self-dissimilar,” with different network motifs at each level. The present approach may be used to simplify a variety of directed and nondirected, natural and designed networks.

  10. Optimal Solution for an Engineering Applications Using Modified Artificial Immune System

    NASA Astrophysics Data System (ADS)

    Padmanabhan, S.; Chandrasekaran, M.; Ganesan, S.; patan, Mahamed Naveed Khan; Navakanth, Polina

    2017-03-01

    Engineering optimization plays an essential role in several engineering application areas like process design, product design, re-engineering and new product development. In engineering, a very good solution is achieved by comparing a number of different candidate solutions using previous problem knowledge. Optimization algorithms provide systematic and efficient ways of constructing and comparing new design solutions in order to arrive at an optimal design, so as to maximize solution efficiency and achieve the best design outcome. In this paper, a new evolutionary based Modified Artificial Immune System (MAIS) algorithm is used to optimize an engineering application of gear drive design. The results are compared with the existing design.

  11. Performance seeking control (PSC) for the F-15 highly integrated digital electronic control (HIDEC) aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.

    1995-01-01

    The performance seeking control algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance. But, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines. Thus when the PSC maximum thrust mode is applied, for example, there will be less temperature margin available to be traded for increasing thrust.

  12. Disk Crack Detection for Seeded Fault Engine Test

    NASA Technical Reports Server (NTRS)

    Luo, Huageng; Rodriguez, Hector; Hallman, Darren; Corbly, Dennis; Lewicki, David G. (Technical Monitor)

    2004-01-01

    Work was performed to develop and demonstrate vibration diagnostic techniques for the on-line detection of engine rotor disk cracks and other anomalies through a real engine test. An existing single-degree-of-freedom non-resonance-based vibration algorithm was extended to a multi-degree-of-freedom model. In addition, a resonance-based algorithm was also proposed for the case of one or more resonances. The algorithms were integrated into a diagnostic system using state-of-the-art commercial analysis equipment. The system required only non-rotating vibration signals, such as accelerometers and proximity probes, and the rotor shaft 1/rev signal to conduct the health monitoring. Before the engine test, the integrated system was tested in the laboratory by using a small rotor with controlled mass unbalances. The laboratory tests verified the system integration and both the non-resonance and the resonance-based algorithm implementations. In the engine test, the system concluded that after two weeks of cycling, the seeded fan disk flaw did not propagate to a large enough size to be detected by changes in the synchronous vibration. The unbalance induced by mass shifting during the start up and coast down was still the dominant response in the synchronous vibration.

  13. A comparison of two adaptive algorithms for the control of active engine mounts

    NASA Astrophysics Data System (ADS)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
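
    A single-channel sketch of the FXLMS benchmark described above follows: the reference signal is filtered through an estimate of the secondary (actuator-to-sensor) path before the LMS weight update. The paths, signals and step size are synthetic placeholders; note that Er-MCSI avoids exactly this path-modelling requirement.

```python
# Single-channel FXLMS sketch: the reference is filtered through an estimate of
# the secondary path before the LMS update. Signals and paths are synthetic.
import numpy as np

def fxlms(reference, disturbance, sec_path, sec_path_est, n_taps=16, mu=0.01):
    w = np.zeros(n_taps)                    # adaptive controller weights
    x_buf = np.zeros(n_taps)                # recent reference samples
    fx_buf = np.zeros(n_taps)               # recent filtered-reference samples
    y_buf = np.zeros(len(sec_path))         # controller output history
    errors = np.zeros(len(reference))
    for n in range(len(reference)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = reference[n]
        y = w @ x_buf                                  # anti-vibration signal
        y_buf = np.roll(y_buf, 1)
        y_buf[0] = y
        e = disturbance[n] + sec_path @ y_buf          # residual at the error sensor
        fx = sec_path_est @ x_buf[:len(sec_path_est)]  # filtered-reference sample
        fx_buf = np.roll(fx_buf, 1)
        fx_buf[0] = fx
        w -= mu * e * fx_buf                           # LMS update on filtered x
        errors[n] = e
    return errors

if __name__ == "__main__":
    fs, f0, n = 1000, 25.0, 4000              # 25 Hz engine-order disturbance
    t = np.arange(n) / fs
    ref = np.sin(2 * np.pi * f0 * t)           # tachometer-derived reference
    dist = 0.8 * np.sin(2 * np.pi * f0 * t + 0.6)
    path = np.array([0.0, 0.9, 0.3])           # "true" actuator-to-sensor path
    e = fxlms(ref, dist, path, sec_path_est=path.copy())
    print("residual power, first vs last second:",
          np.mean(e[:fs] ** 2), np.mean(e[-fs:] ** 2))
```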

  14. Parallel approach on sorting of genes in search of optimal solution.

    PubMed

    Kumar, Pranav; Sahoo, G

    2018-05-01

    An important tool for comparative genome analysis is the rearrangement event that can transform one given genome into another. For finding a minimum sequence of fissions and fusions, we have proposed an algorithm and have shown a transformation example converting the source genome into the target genome. The proposed algorithm uses a circular sequence, i.e. a "cycle graph", in place of a mapping. The main concept of the algorithm is based on the optimal result of the permutation. These sorting processes are performed in constant running time by representing the permutation in the form of cycles. In biological instances it has been observed that transpositions occur at half the frequency of reversals. In this paper we do not deal with reversals, instead commencing with the rearrangement operations of fission, fusion and transposition. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Pilot symbol-assisted beamforming algorithms in the WCDMA reverse link

    NASA Astrophysics Data System (ADS)

    Kong, Dongkeon; Lee, Jong H.; Chun, Joohwan; Woo, Yeon Sik; Soh, Ju Won

    2001-08-01

    We present a pilot symbol-assisted beamforming algorithm and a simulation tool of smart antennas for Wideband Code Division Multiple Access (WCDMA) in the reverse link. In the 3GPP WCDMA system, smart antenna technology has more room to play with than in second-generation wireless mobile systems such as IS-95, because the pilot symbol in the Dedicated Physical Control Channel (DPCCH) can be utilized. First we show a smart antenna structure and adaptation algorithms, and then we explain a low-level smart antenna implementation using Simulink and MATLAB. In the design of our smart antenna system we pay special attention to the ease of interfacing with the baseband modem; our ultimate goal is to implement baseband smart antenna chip sets that can easily be added to existing baseband WCDMA modem units.

  16. Automated Scoring of Chinese Engineering Students' English Essays

    ERIC Educational Resources Information Center

    Liu, Ming; Wang, Yuqi; Xu, Weiwei; Liu, Li

    2017-01-01

    The number of Chinese engineering students has increased greatly since 1999. Rating the quality of these students' English essays has thus become time-consuming and challenging. This paper presents a novel automatic essay scoring algorithm called PSOSVR, based on a machine learning algorithm, Support Vector Machine for Regression (SVR), and a…

  17. The Structure-Mapping Engine: Algorithm and Examples.

    ERIC Educational Resources Information Center

    Falkenhainer, Brian; And Others

    This description of the Structure-Mapping Engine (SME), a flexible, cognitive simulation program for studying analogical processing which is based on Gentner's Structure-Mapping theory of analogy, points out that the SME provides a "tool kit" for constructing matching algorithms consistent with this theory. This report provides: (1) a…

  18. NASA Acting Deputy Chief Technologist Briefed on Operation of Sonic Boom Prediction Algorithms

    NASA Image and Video Library

    2017-08-29

    NASA Acting Deputy Chief Technologist Vicki Crips being briefed by Tim Cox, Controls Engineer at NASA’s Armstrong Flight Research Center at Edwards, California, on the operation of the sonic boom prediction algorithms being used in engineering simulation for the NASA Supersonic Quest program.

  19. Cone-Beam Computed Tomography for Image-Guided Radiation Therapy of Prostate Cancer

    DTIC Science & Technology

    2008-01-01

    for exact volumetric image reconstruction. As a consequence, images reconstructed by approximate algorithms, mostly based on the Feldkamp algorithm...patient dose from CBCT. Reverse helical CBCT has been developed for exact reconstruction of volumetric images, region-of-interest (ROI) reconstruction...algorithm with a priori information in few-view CBCT for IGRT. We expect the proposed algorithm can reduce the number of projections needed for volumetric

  20. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    NASA Astrophysics Data System (ADS)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.

  1. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm

    PubMed Central

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left-to-right, research shows that even fluent reading requires right-to-left movements of the finger, known as “reversal”. Reversals are crucial as they not only enhance stimulation of nerves for correctly reading the letters, but they also allow one to re-read the letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to automatically adapt to reversal gestures of one’s finger. Through Braille-reading-analogous tasks, the algorithm is tested with thirty sighted subjects who volunteered for the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects’ performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects’ Braille reading accuracy in the presence of FMAA was more favorable than when FMAA was turned off. Utilization of FMAA in future generation Braille reading devices thus holds strong promise. PMID:26849058

  2. Nested effects models for learning signaling networks from perturbation data.

    PubMed

    Fröhlich, Holger; Tresch, Achim; Beissbarth, Tim

    2009-04-01

    Targeted gene perturbations have become a major tool to gain insight into complex cellular processes. In combination with the measurement of downstream effects via DNA microarrays, this approach can be used to gain insight into signaling pathways. Nested Effects Models were first introduced by Markowetz et al. as a probabilistic method to reverse engineer signaling cascades based on the nested structure of downstream perturbation effects. The basic framework was substantially extended later on by Fröhlich et al., Markowetz et al., and Tresch and Markowetz. In this paper, we present a review of the complete methodology with a detailed comparison of so far proposed algorithms on a qualitative and quantitative level. As an application, we present results on estimating the signaling network between 13 genes in the ER-alpha pathway of human MCF-7 breast cancer cells. Comparison with the literature shows a substantial overlap.

  3. Wisdom of crowds for robust gene network inference

    PubMed Central

    Marbach, Daniel; Costello, James C.; Küffner, Robert; Vega, Nicci; Prill, Robert J.; Camacho, Diogo M.; Allison, Kyle R.; Kellis, Manolis; Collins, James J.; Stolovitzky, Gustavo

    2012-01-01

    Reconstructing gene regulatory networks from high-throughput data is a long-standing problem. Through the DREAM project (Dialogue on Reverse Engineering Assessment and Methods), we performed a comprehensive blind assessment of over thirty network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae, and in silico microarray data. We characterize performance, data requirements, and inherent biases of different inference approaches offering guidelines for both algorithm application and development. We observe that no single inference method performs optimally across all datasets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse datasets. Thereby, we construct high-confidence networks for E. coli and S. aureus, each comprising ~1700 transcriptional interactions at an estimated precision of 50%. We experimentally test 53 novel interactions in E. coli, of which 23 were supported (43%). Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
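
    The integration step can be illustrated with a simple rank average across the edge lists returned by individual inference methods; the sketch below assumes each method reports a confidence score per candidate edge (gene names and scores are hypothetical), whereas the DREAM consortium's actual community-integration pipeline is more elaborate.

    ```python
    import numpy as np

    def integrate_predictions(score_dicts):
        """Combine edge-confidence lists from several inference methods by
        averaging per-method ranks (lower average rank = higher consensus)."""
        edges = sorted(set().union(*[d.keys() for d in score_dicts]))
        rank_sum = {e: 0.0 for e in edges}
        for scores in score_dicts:
            # rank edges within this method; unscored edges get the worst rank
            ordered = sorted(scores, key=scores.get, reverse=True)
            ranks = {e: r for r, e in enumerate(ordered, start=1)}
            worst = len(edges)
            for e in edges:
                rank_sum[e] += ranks.get(e, worst)
        n = len(score_dicts)
        return sorted(edges, key=lambda e: rank_sum[e] / n)

    # hypothetical edge scores from three different inference methods
    m1 = {("geneA", "geneB"): 0.9, ("geneA", "geneC"): 0.2}
    m2 = {("geneA", "geneB"): 0.7, ("geneB", "geneC"): 0.6}
    m3 = {("geneA", "geneB"): 0.8, ("geneA", "geneC"): 0.5}
    print(integrate_predictions([m1, m2, m3]))   # consensus-ranked edges
    ```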

  4. Metabolomic Profiling as a Possible Reverse Engineering Tool for Estimating Processing Conditions of Dry-Cured Hams.

    PubMed

    Sugimoto, Masahiro; Obiya, Shinichi; Kaneko, Miku; Enomoto, Ayame; Honma, Mayu; Wakayama, Masataka; Soga, Tomoyoshi; Tomita, Masaru

    2017-01-18

    Dry-cured hams are popular among consumers. To increase the attractiveness of the product, objective analytical methods and algorithms to evaluate the relationship between observable properties and consumer acceptability are required. In this study, metabolomics, which is used for quantitative profiling of hundreds of small molecules, was applied to 12 kinds of dry-cured hams from Japan and Europe. In total, 203 charged metabolites, including amino acids, organic acids, nucleotides, and peptides, were successfully identified and quantified. Metabolite profiles were compared for the samples with different countries of origin and processing methods (e.g., smoking or use of a starter culture). Principal component analysis of the metabolite profiles with sensory properties revealed significant correlations for redness, homogeneity, and fat whiteness. This approach could be used to design new ham products by objective evaluation of various features.

  5. Using a Formal Approach for Reverse Engineering and Design Recovery to Support Software Reuse

    NASA Technical Reports Server (NTRS)

    Gannod, Gerald C.

    2002-01-01

    This document describes 3rd year accomplishments and summarizes overall project accomplishments. Included as attachments are all published papers from year three. Note that the budget for this project was discontinued after year two, but that a residual budget from year two allowed minimal continuance into year three. Accomplishments include initial investigations into log-file based reverse engineering, service-based software reuse, and a source to XML generator.

  6. Variable Cycle Intake for Reverse Core Engine

    NASA Technical Reports Server (NTRS)

    Chandler, Jesse M (Inventor); Staubach, Joseph B (Inventor); Suciu, Gabriel L (Inventor)

    2016-01-01

    A gas generator for a reverse core engine propulsion system has a variable cycle intake for the gas generator, which variable cycle intake includes a duct system. The duct system is configured for being selectively disposed in a first position and a second position, wherein free stream air is fed to the gas generator when in the first position, and fan stream air is fed to the gas generator when in the second position.

  7. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.

  8. Design of a microprocessor-based Control, Interface and Monitoring (CIM) unit for turbine engine controls research

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Soeder, J. F.

    1983-01-01

    High speed minicomputers were used in the past to implement advanced digital control algorithms for turbine engines. These minicomputers are typically large and expensive. It is desirable for a number of reasons to use microprocessor-based systems for future controls research. They are relatively compact, inexpensive, and representative of the hardware that would be used for actual engine-mounted controls. The Control, Interface, and Monitoring (CIM) Unit contains a microprocessor-based controls computer, the necessary interface hardware, and a system to monitor the controller while it is running an engine. It is presently being used to evaluate an advanced turbofan engine control algorithm.

  9. Iterative procedures for space shuttle main engine performance models

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1989-01-01

    Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested, and favorable results based on this algorithm are presented.
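
    For context, a bare-bones multivariate Newton-Raphson iteration with a finite-difference Jacobian is sketched below on a toy two-equation balance. It is purely illustrative: it is not the TIP implementation and omits the trust-region modification discussed above.

    ```python
    import numpy as np

    def newton_raphson(residual, x0, tol=1e-10, max_iter=50, h=1e-7):
        """Solve residual(x) = 0 by Newton-Raphson with a forward-difference Jacobian."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = residual(x)
            if np.linalg.norm(r) < tol:
                break
            # build the Jacobian column by column with finite differences
            J = np.empty((len(r), len(x)))
            for j in range(len(x)):
                xp = x.copy(); xp[j] += h
                J[:, j] = (residual(xp) - r) / h
            x = x - np.linalg.solve(J, r)        # full Newton step
        return x

    # toy stand-in for coupled mass/pressure balances: two nonlinear equations
    balance = lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] + v[1]**2 - 5.0])
    print(newton_raphson(balance, [1.0, 1.0]))   # converges near (1, 2)
    ```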

  10. Brain in situ hybridization maps as a source for reverse-engineering transcriptional regulatory networks: Alzheimer's disease insights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acquaah-Mensah, George K.; Taylor, Ronald C.

    Microarray data have been a valuable resource for identifying transcriptional regulatory relationships among genes. As an example, brain region-specific transcriptional regulatory events have the potential of providing etiological insights into Alzheimer Disease (AD). However, there is often a paucity of suitable brain-region specific expression data obtained via microarrays or other high throughput means. The Allen Brain Atlas in situ hybridization (ISH) data sets (Jones et al., 2009) represent a potentially valuable alternative source of high-throughput brain region-specific gene expression data for such purposes. In this study, Allen Brain Atlas mouse ISH data in the hippocampal fields were extracted, focusing on 508 genes relevant to neurodegeneration. Transcriptional regulatory networks were learned using three high-performing network inference algorithms. Only 17% of regulatory edges from a network reverse-engineered based on brain region-specific ISH data were also found in a network constructed upon gene expression correlations in mouse whole brain microarrays, thus showing the specificity of gene expression within brain sub-regions. Furthermore, the ISH data-based networks were used to identify instructive transcriptional regulatory relationships. Ncor2, Sp3 and Usf2 form a unique three-party regulatory motif, potentially affecting memory formation pathways. Nfe2l1, Egr1 and Usf2 emerge among regulators of genes involved in AD (e.g. Dhcr24, Aplp2, Tia1, Pdrx1, Vdac1, and Syn2). Further, Nfe2l1, Egr1 and Usf2 are sensitive to dietary factors and could be among links between dietary influences and genes in the AD etiology. Thus, this approach of harnessing brain region-specific ISH data represents a rare opportunity for gleaning unique etiological insights for diseases such as AD.

  11. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there has been demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
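
    A rough Python sketch of the two-step idea follows: recursive bisection places coarse interior knots wherever a local cubic fit exceeds a prescribed tolerance, after which an ordinary least-squares B-spline is fitted on those knots with SciPy. The nonlinear refinement of knot locations and continuity levels described in the abstract is omitted, and the tolerance and test data are invented.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    def coarse_knots(x, y, tol, deg=3):
        """Step 1: recursively bisect the data until a local polynomial fit of
        degree `deg` on each segment stays within `tol`; segment split points
        become the coarse interior knots."""
        segments = [(0, len(x))]
        knots = []
        while segments:
            i, j = segments.pop()
            if j - i <= 2 * (deg + 1):          # too few points to split further
                continue
            coef = np.polyfit(x[i:j], y[i:j], deg)
            err = np.max(np.abs(np.polyval(coef, x[i:j]) - y[i:j]))
            if err > tol:
                mid = (i + j) // 2
                knots.append(x[mid])
                segments += [(i, mid), (mid, j)]
        return np.sort(np.array(knots))

    # hypothetical sampled curve with a sharp feature at x = 0.5
    x = np.linspace(0.0, 1.0, 400)
    y = np.sin(8 * np.pi * x) + 0.2 * (x > 0.5)
    t = coarse_knots(x, y, tol=0.05)
    spline = LSQUnivariateSpline(x, y, t, k=3)   # step 2: ordinary least squares
    print(len(t), float(np.max(np.abs(spline(x) - y))))
    ```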

  12. Inferring Broad Regulatory Biology from Time Course Data: Have We Reached an Upper Bound under Constraints Typical of In Vivo Studies?

    PubMed Central

    Craddock, Travis J. A.; Fletcher, Mary Ann; Klimas, Nancy G.

    2015-01-01

    There is a growing appreciation for the network biology that regulates the coordinated expression of molecular and cellular markers; however, questions persist regarding the identifiability of these networks. Here we explore some of the issues relevant to recovering directed regulatory networks from time course data collected under experimental constraints typical of in vivo studies. NetSim simulations of sparsely connected biological networks were used to evaluate two simple feature selection techniques used in the construction of linear Ordinary Differential Equation (ODE) models, namely truncation of terms versus latent vector projection. Performance was compared with the ODE-based Time Series Network Identification (TSNI) integral method and the information-theoretic Time-Delay ARACNE (TD-ARACNE). Projection-based techniques and TSNI integral outperformed truncation-based selection and TD-ARACNE on aggregate networks with edge densities of 10-30%, i.e. transcription factor, protein-protein clique and immune signaling networks. All were more robust to noise than truncation-based feature selection. Performance was comparable on the in silico 10-node DREAM 3 network, a 5-node yeast synthetic network designed for In vivo Reverse-engineering and Modeling Assessment (IRMA), and a 9-node human HeLa cell cycle network of similar size and edge density. Performance was more sensitive to the number of time courses than to sampling frequency and extrapolated better to larger networks by grouping experiments. In all cases performance declined rapidly in larger networks with lower edge density. The limited recovery and high false positive rates obtained overall bring into question our ability to generate informative time course data rather than the design of any particular reverse engineering algorithm. PMID:25984725

  13. Cooperative Multi-Agent Mobile Sensor Platforms for Jet Engine Inspection: Concept and Implementation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Wong, Edmond; Krasowski, Michael J.; Greer, Lawrence C.

    2003-01-01

    Cooperative behavior algorithms utilizing swarm intelligence are being developed for mobile sensor platforms to inspect jet engines on-wing. Experiments are planned in which several relatively simple autonomous platforms will work together in a coordinated fashion to carry out complex maintenance-type tasks within the constrained working environment modeled on the interior of a turbofan engine. The algorithms will emphasize distribution of the tasks among multiple units; they will be scalable and flexible so that units may be added in the future; and will be designed to operate on an individual unit level to produce the desired global effect. This proof of concept demonstration will validate the algorithms and provide justification for further miniaturization and specialization of the hardware toward the true application of on-wing in situ turbine engine maintenance.

  14. Health management system for rocket engines

    NASA Technical Reports Server (NTRS)

    Nemeth, Edward

    1990-01-01

    The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.

  15. Reconstructing photorealistic 3D models from image sequence using domain decomposition method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2009-11-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning methods. Structured light and photogrammetry are two main methods to acquire 3D information, and both are expensive. Even if these expensive instruments are used, photorealistic 3D models are seldom available. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate covered with coded marks is used to place the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into a combination of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be resolved locally, and the information can be exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through such a domain decomposition finite element method. Textures are assigned to each element mesh, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the result is exciting.

  16. Techniques utilized in the simulated altitude testing of a 2D-CD vectoring and reversing nozzle

    NASA Technical Reports Server (NTRS)

    Block, H. Bruce; Bryant, Lively; Dicus, John H.; Moore, Allan S.; Burns, Maureen E.; Solomon, Robert F.; Sheer, Irving

    1988-01-01

    Simulated altitude testing of a two-dimensional, convergent-divergent, thrust vectoring and reversing exhaust nozzle was accomplished. An important objective of this test was to develop test hardware and techniques to properly operate a vectoring and reversing nozzle within the confines of an altitude test facility. This report presents detailed information on the major test support systems utilized, the operational performance of the systems and the problems encountered, and test equipment improvements recommended for future tests. The most challenging support systems included the multi-axis thrust measurement system, vectored and reverse exhaust gas collection systems, and infrared temperature measurement systems used to evaluate and monitor the nozzle. The feasibility of testing a vectoring and reversing nozzle of this type in an altitude chamber was successfully demonstrated. Supporting systems performed as required. During reverser operation, engine exhaust gases were successfully captured and turned downstream. However, a small amount of exhaust gas spilled out the collector ducts' inlet openings when the reverser was opened more than 60 percent. The spillage did not affect engine or nozzle performance. The three infrared systems which viewed the nozzle through the exhaust collection system worked remarkably well considering the harsh environment.

  17. 77 FR 40026 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-06

    ... and contractor logistics, Quality Assurance Team support services, engineering and technical support..., engineering and technical support, and other related elements of program support. The estimated cost is $49..., maintenance, or training is Confidential. Reverse engineering could reveal Confidential information...

  18. System Engineering of Autonomous Space Vehicles

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Johnson, Stephen B.; Trevino, Luis

    2014-01-01

    Human exploration of the solar system requires fully autonomous systems when travelling more than 5 light minutes from Earth. This autonomy is necessary to manage a large, complex spacecraft with limited crew members and skills available. The communication latency requires the vehicle to deal with events with only limited crew interaction in most cases. The engineering of these systems requires an extensive knowledge of the spacecraft systems, information theory, and autonomous algorithm characteristics. The characteristics of the spacecraft systems must be matched with the autonomous algorithm characteristics to reliably monitor and control the system. This presents a large system engineering problem. Recent work on product-focused, elegant system engineering will be applied to this application, looking at the full autonomy stack, the matching of autonomous systems to spacecraft systems, and the integration of different types of algorithms. Each of these areas will be outlined and a general approach defined for system engineering to provide the optimal solution to the given application context.

  19. Biotechnological applications of mobile group II introns and their reverse transcriptases: gene targeting, RNA-seq, and non-coding RNA analysis.

    PubMed

    Enyeart, Peter J; Mohr, Georg; Ellington, Andrew D; Lambowitz, Alan M

    2014-01-13

    Mobile group II introns are bacterial retrotransposons that combine the activities of an autocatalytic intron RNA (a ribozyme) and an intron-encoded reverse transcriptase to insert site-specifically into DNA. They recognize DNA target sites largely by base pairing of sequences within the intron RNA and achieve high DNA target specificity by using the ribozyme active site to couple correct base pairing to RNA-catalyzed intron integration. Algorithms have been developed to program the DNA target site specificity of several mobile group II introns, allowing them to be made into 'targetrons.' Targetrons function for gene targeting in a wide variety of bacteria and typically integrate at efficiencies high enough to be screened easily by colony PCR, without the need for selectable markers. Targetrons have found wide application in microbiological research, enabling gene targeting and genetic engineering of bacteria that had been intractable to other methods. Recently, a thermostable targetron has been developed for use in bacterial thermophiles, and new methods have been developed for using targetrons to position recombinase recognition sites, enabling large-scale genome-editing operations, such as deletions, inversions, insertions, and 'cut-and-pastes' (that is, translocation of large DNA segments), in a wide range of bacteria at high efficiency. Using targetrons in eukaryotes presents challenges due to the difficulties of nuclear localization and sub-optimal magnesium concentrations, although supplementation with magnesium can increase integration efficiency, and directed evolution is being employed to overcome these barriers. Finally, spurred by new methods for expressing group II intron reverse transcriptases that yield large amounts of highly active protein, thermostable group II intron reverse transcriptases from bacterial thermophiles are being used as research tools for a variety of applications, including qRT-PCR and next-generation RNA sequencing (RNA-seq). The high processivity and fidelity of group II intron reverse transcriptases along with their novel template-switching activity, which can directly link RNA-seq adaptor sequences to cDNAs during reverse transcription, open new approaches for RNA-seq and the identification and profiling of non-coding RNAs, with potentially wide applications in research and biotechnology.

  20. GeoSearcher: Location-Based Ranking of Search Engine Results.

    ERIC Educational Resources Information Center

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
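
    The re-ranking idea (order hits by geodesic distance from the query's location) can be sketched as below. The URL-to-coordinate assignment described in the article is not reproduced; the result list and coordinates are hypothetical.

    ```python
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres between two (lat, lon) points."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    def rerank_by_location(results, query_latlon):
        """Re-order search results by distance from the query location.
        Each result is a (url, (lat, lon)) pair whose coordinates have already
        been assigned to the site (hypothetical input format)."""
        return sorted(results, key=lambda r: haversine_km(*query_latlon, *r[1]))

    hits = [("http://example.org/a", (48.85, 2.35)),    # hypothetical geocoded results
            ("http://example.org/b", (40.71, -74.01)),
            ("http://example.org/c", (51.51, -0.13))]
    print(rerank_by_location(hits, (52.52, 13.40)))      # nearest sites first
    ```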

  1. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of recovering used products and materials. Reverse logistics is therefore gaining importance and shows great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP), minimizing the total cost, which involves the reverse logistics shipping cost and the fixed costs of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. To solve this problem, we then propose a Genetic Algorithm (GA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to the manufacturer. Finally, numerical experiments with various scales of the m-rLNP model demonstrate the effectiveness and efficiency of our approach by comparison with recent research.
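
    To make the priority-based encoding concrete, the sketch below decodes one priority vector into shipments for a single logistics stage: the highest-priority node with remaining supply or demand is served first and matched to its cheapest partner. The capacities, costs, and decoding details are illustrative assumptions, not the paper's exact m-rLNP formulation or the WMX operator.

    ```python
    import numpy as np

    def decode_priority(priority, supply, demand, cost):
        """Decode a priority vector (one gene per source and per sink) into a
        shipment plan for one logistics stage (priority-based decoding sketch)."""
        supply, demand = supply.copy(), demand.copy()
        n_src, n_dst = len(supply), len(demand)
        plan = np.zeros((n_src, n_dst))
        prio = priority.copy().astype(float)
        while supply.sum() > 0 and demand.sum() > 0:
            node = int(np.argmax(prio))
            if node < n_src:                      # a source node was selected
                i = node
                j = min((j for j in range(n_dst) if demand[j] > 0),
                        key=lambda j: cost[i, j])  # cheapest sink still in need
            else:                                 # a sink node was selected
                j = node - n_src
                i = min((i for i in range(n_src) if supply[i] > 0),
                        key=lambda i: cost[i, j])  # cheapest source with stock
            qty = min(supply[i], demand[j])
            plan[i, j] += qty
            supply[i] -= qty
            demand[j] -= qty
            if supply[i] == 0:
                prio[i] = -np.inf                 # exhausted nodes drop out
            if demand[j] == 0:
                prio[n_src + j] = -np.inf
        return plan

    supply = np.array([30.0, 25.0])          # returned products at 2 collection sites
    demand = np.array([20.0, 20.0, 15.0])    # capacities of 3 disassembly centers
    cost = np.array([[4.0, 6.0, 9.0],
                     [5.0, 3.0, 7.0]])
    genes = np.array([0.7, 0.2, 0.9, 0.1, 0.5])   # one priority gene per node (hypothetical)
    print(decode_priority(genes, supply, demand, cost))
    ```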

  2. Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Bruton, William M.

    1987-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details about the microprocessor implementation of the algorithm as well as a description of the algorithm itself.

  3. Matlab GUI for a Fluid Mixer

    NASA Technical Reports Server (NTRS)

    Barbieri, Enrique

    2005-01-01

    The Test and Engineering Directorate at NASA John C. Stennis Space Center developed an interest in studying the modeling, evaluation, and control of a liquid hydrogen (LH2) and gas hydrogen (GH2) mixer subsystem of a ground test facility. This facility carries out comprehensive ground-based testing and certification of liquid rocket engines, including the Space Shuttle Main Engine. A software simulation environment developed in MATLAB/SIMULINK (M/S) will allow NASA engineers to test rocket engine systems at relatively no cost. In the progress report submitted in February 2004, we described the development of two foundation programs: a reverse look-up application using various interpolation algorithms, a variety of search and return methods, and self-checking methods to reduce the error in returned search results and increase the functionality of the program. The results showed that these efforts were successful. To transfer this technology to engineers who are not familiar with the M/S environment, a four-module GUI was implemented allowing the user to evaluate the mixer model under open-loop and closed-loop conditions. The progress report was based on an undergraduate Honors Thesis by Ms. Jamie Granger Austin in the Department of Electrical Engineering and Computer Science at Tulane University, during January-May 2003, and her continued efforts during August-December 2003. In collaboration with Dr. Hanz Richter and Dr. Fernando Figueroa we published these results in a NASA Tech Brief due to appear this year. Although the original proposal in 2003 did not address other components of the test facility, we decided in the last few months to extend our research and consider a related pressurization tank component as well. This report summarizes the results obtained towards a Graphical User Interface (GUI) for the evaluation and control of the hydrogen mixer subsystem model and for the pressurization tank, each taken individually. Further research would combine the two components, mixer and tank, for a more realistic simulation tool.

  4. Designing of routing algorithms in autonomous distributed data transmission system for mobile computing devices with ‘WiFi-Direct’ technology

    NASA Astrophysics Data System (ADS)

    Nikitin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.; Botygin, I. A.

    2017-02-01

    The results of research on existing routing protocols in wireless networks and their main features are discussed in the paper. Based on these protocols, routing algorithms for wireless networks, including route-search algorithms and phone-directory exchange algorithms, are designed with the ‘WiFi-Direct’ technology. Algorithms that do not rely on the IP protocol were designed, which increases efficiency by working only with the MAC addresses of the devices. The developed algorithms are expected to be used in mobile software engineering with the Android platform as the base. Simpler algorithms and formats than those of the well-known routing protocols, together with rejection of the IP protocol, enable the developed protocols to be used on more primitive mobile devices. Implementing the protocols in industry enables the creation of data transmission networks among workstations and mobile robots without any access points.

  5. CONDENSED MATTER: ELECTRONIC STRUCTURE, ELECTRICAL, MAGNETIC, AND OPTICAL PROPERTIES: Research on reverse recovery characteristics of SiGeC p-i-n diodes

    NASA Astrophysics Data System (ADS)

    Gao, Yong; Liu, Jing; Yang, Yuan

    2008-12-01

    This paper analyses the reverse recovery characteristics and mechanism of SiGeC p-i-n diodes. Based on the integrated systems engineering (ISE) data, the critical physical models of SiGeC diodes are proposed. Owing to hetero-junction band gap engineering, the softness factor increases more than six-fold, the reverse recovery time is over 30% shorter, and there is a 20% decrease in peak reverse recovery current for SiGeC diodes with 20% germanium and 0.5% carbon, compared to Si diodes. These advantages of SiGeC p-i-n diodes are more obvious at high temperature. Compared to lifetime control, the SiGeC technique is more suitable for improving diode properties, and the tradeoff between reverse recovery time and forward voltage drop can be easily achieved in SiGeC diodes. Furthermore, the high thermal stability of SiGeC diodes reduces the cost of further process steps and offers more freedom in device design.

  6. Data Synchronization Discrepancies in a Formation Flight Control System

    NASA Technical Reports Server (NTRS)

    Ryan, Jack; Hanson, Curtis E.; Norlin, Ken A.; Allen, Michael J.; Schkolnik, Gerard (Technical Monitor)

    2001-01-01

    Aircraft hardware-in-the-loop simulation is an invaluable tool to flight test engineers; it reveals design and implementation flaws while operating in a controlled environment. Engineers, however, must always be skeptical of the results and analyze them within their proper context. Engineers must carefully ascertain whether an anomaly that occurs in the simulation will also occur in flight. This report presents a chronology illustrating how misleading simulation timing problems led to the implementation of an overly complex position data synchronization guidance algorithm in place of a simpler one. The report illustrates problems caused by the complex algorithm and how the simpler algorithm was chosen in the end. Brief descriptions of the project objectives, approach, and simulation are presented. The misleading simulation results and the conclusions then drawn are presented. The complex and simple guidance algorithms are presented with flight data illustrating their relative success.

  7. A novel technique for presurgical nasoalveolar molding using computer-aided reverse engineering and rapid prototyping.

    PubMed

    Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang

    2011-01-01

    To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.

  8. Reversing the Trend of Engineering Enrollment Declines with Innovative Outreach, Recruiting, and Retention Programs

    ERIC Educational Resources Information Center

    Davis, C. E.; Yeary, M. B.; Sluss, J. J., Jr.

    2012-01-01

    This paper discusses an all-encompassing approach to increase the number of students in engineering through innovative outreach, recruiting, and retention programs. Prior to adopting these programs, the School of Electrical and Computer Engineering (ECE) at the University of Oklahoma (OU), Norman, experienced a reduction in engineering enrollment…

  9. Combining phage display with de novo protein sequencing for reverse engineering of monoclonal antibodies.

    PubMed

    Rickert, Keith W; Grinberg, Luba; Woods, Robert M; Wilson, Susan; Bowen, Michael A; Baca, Manuel

    2016-01-01

    The enormous diversity created by gene recombination and somatic hypermutation makes de novo protein sequencing of monoclonal antibodies a uniquely challenging problem. Modern mass spectrometry-based sequencing will rarely, if ever, provide a single unambiguous sequence for the variable domains. A more likely outcome is computation of an ensemble of highly similar sequences that can satisfy the experimental data. This outcome can result in the need for empirical testing of many candidate sequences, sometimes iteratively, to identify one that can replicate the activity of the parental antibody. Here we describe an improved approach to antibody protein sequencing by using phage display technology to generate a combinatorial library of sequences that satisfy the mass spectrometry data, and selecting for functional candidates that bind antigen. This approach was used to reverse engineer 2 commercially-obtained monoclonal antibodies against murine CD137. Proteomic data enabled us to assign the majority of the variable domain sequences, with the exception of 3-5% of the sequence located within or adjacent to complementarity-determining regions. To efficiently resolve the sequence in these regions, small phage-displayed libraries were generated and subjected to antigen binding selection. Following enrichment of antigen-binding clones, 2 clones were selected for each antibody and recombinantly expressed as antigen-binding fragments (Fabs). In both cases, the reverse-engineered Fabs exhibited identical antigen binding affinity, within error, as Fabs produced from the commercial IgGs. This combination of proteomic and protein engineering techniques provides a useful approach to simplifying the technically challenging process of reverse engineering monoclonal antibodies from protein material.

  10. Combining phage display with de novo protein sequencing for reverse engineering of monoclonal antibodies

    PubMed Central

    Rickert, Keith W.; Grinberg, Luba; Woods, Robert M.; Wilson, Susan; Bowen, Michael A.; Baca, Manuel

    2016-01-01

    ABSTRACT The enormous diversity created by gene recombination and somatic hypermutation makes de novo protein sequencing of monoclonal antibodies a uniquely challenging problem. Modern mass spectrometry-based sequencing will rarely, if ever, provide a single unambiguous sequence for the variable domains. A more likely outcome is computation of an ensemble of highly similar sequences that can satisfy the experimental data. This outcome can result in the need for empirical testing of many candidate sequences, sometimes iteratively, to identify one that can replicate the activity of the parental antibody. Here we describe an improved approach to antibody protein sequencing by using phage display technology to generate a combinatorial library of sequences that satisfy the mass spectrometry data, and selecting for functional candidates that bind antigen. This approach was used to reverse engineer 2 commercially-obtained monoclonal antibodies against murine CD137. Proteomic data enabled us to assign the majority of the variable domain sequences, with the exception of 3–5% of the sequence located within or adjacent to complementarity-determining regions. To efficiently resolve the sequence in these regions, small phage-displayed libraries were generated and subjected to antigen binding selection. Following enrichment of antigen-binding clones, 2 clones were selected for each antibody and recombinantly expressed as antigen-binding fragments (Fabs). In both cases, the reverse-engineered Fabs exhibited identical antigen binding affinity, within error, as Fabs produced from the commercial IgGs. This combination of proteomic and protein engineering techniques provides a useful approach to simplifying the technically challenging process of reverse engineering monoclonal antibodies from protein material. PMID:26852694

  11. Static internal performance of a single-engine nonaxisymmetric-nozzle vaned-thrust-reverser design with thrust modulation capabilities

    NASA Technical Reports Server (NTRS)

    Leavitt, L. D.; Burley, J. R., II

    1985-01-01

    An investigation has been conducted at wind-off conditions in the static-test facility of the Langley 16-Foot Transonic Tunnel. The tests were conducted on a single-engine reverser configuration with partial and full reverse-thrust modulation capabilities. The reverser design had four ports with equal areas. These ports were angled outboard 30 deg from the vertical to impart a splay angle to the reverse exhaust flow. This splaying of reverser flow was intended to prevent impingement of exhaust flow on empennage surfaces and to help avoid inlet reingestion of exhaust gas when the reverser is integrated into an actual airplane configuration. External vane boxes were located directly over each of the four ports to provide variation of reverser efflux angle from 140 deg to 26 deg (measured forward from the horizontal reference axis). The reverser model was tested with both a butterfly-type inner door and an internal slider door to provide area control for each individual port. In addition, main nozzle throat area and vector angle were varied to examine various methods of modulating thrust levels. Other model variables included vane box configuration (four or six vanes per box), orientation of external vane boxes with respect to internal port walls (splay angle shims), and vane box sideplates. Nozzle pressure ratio was varied from 2.0 to approximately 7.0.

  12. Design of a TDOA location engine and development of a location system based on chirp spread spectrum.

    PubMed

    Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang

    2016-01-01

    Location-based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signaling format with time difference of arrival (TDOA) ranging technology is an effective LBS technique in regards to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
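
    A scalar random-walk Kalman filter over a single range-measurement stream illustrates the smoothing step mentioned above; the process and measurement noise values are arbitrary assumptions, and the subsequent Taylor-series TDOA position solve is not shown.

    ```python
    import numpy as np

    def kalman_smooth_ranges(measurements, q=1e-3, r=0.25):
        """Smooth a noisy TDOA range series with a scalar random-walk Kalman
        filter (state = range, process noise q, measurement noise r)."""
        x, p = measurements[0], 1.0          # initial state estimate and variance
        smoothed = []
        for z in measurements:
            p = p + q                        # predict: random-walk process model
            k = p / (p + r)                  # Kalman gain
            x = x + k * (z - x)              # update with the new range
            p = (1.0 - k) * p
            smoothed.append(x)
        return np.array(smoothed)

    # hypothetical ranges (metres) around a true distance of 10 m
    rng = np.random.default_rng(0)
    noisy = 10.0 + 0.5 * rng.standard_normal(200)
    print(kalman_smooth_ranges(noisy)[-1])   # settles near 10 m
    ```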

  13. Fast adaptive flat-histogram ensemble to enhance the sampling in large systems

    NASA Astrophysics Data System (ADS)

    Xu, Shun; Zhou, Xin; Jiang, Yi; Wang, YanTing

    2015-09-01

    An efficient novel algorithm was developed to estimate the Density of States (DOS) for large systems by calculating the ensemble means of an extensive physical variable, such as the potential energy U, in generalized canonical ensembles to interpolate the interior reverse temperature curve β(U) = dS(U)/dU, where S(U) is the logarithm of the DOS. This curve is computed with different accuracies in different energy regions to capture the dependence of the reverse temperature on U without setting a prior grid in the U space. By combining with a U-compression transformation, we decrease the computational complexity from O(N^(3/2)) in the normal Wang-Landau type method to O(N^(1/2)) in the current algorithm, where N is the number of degrees of freedom of the system. The efficiency of the algorithm is demonstrated by applying it to Lennard-Jones fluids with various N, along with its ability to find different macroscopic states, including metastable states.

  14. Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-05-01

    We propose to apply a novel incoherent dictionary learning (IDL) algorithm for regularizing the least-squares inversion in seismic imaging. The IDL is proposed to overcome the drawback of traditional dictionary learning algorithms, which lose part of the texture information. Firstly, the noisy image is divided into overlapping image patches, and some random patches are extracted for dictionary learning. Then, we apply the IDL technique to minimize the coherency between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from those sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to the regularization of seismic images from least-squares reverse time migration shows successful performance.
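
    As a rough illustration of what "incoherent" means here, the sketch below lowers the mutual coherence of a random dictionary by alternately clipping large off-diagonal Gram entries and projecting back to a rank-n dictionary. It is a generic heuristic under assumed parameters, not the specific IDL algorithm of the paper nor its use inside least-squares reverse time migration.

    ```python
    import numpy as np

    def reduce_coherence(D, mu0=0.2, n_iter=10):
        """Alternating-projection heuristic for lowering the mutual coherence of a
        dictionary D (n x K, unit-norm atoms): clip large off-diagonal Gram entries,
        then project the clipped Gram matrix back to rank n."""
        n, K = D.shape
        D = D / np.linalg.norm(D, axis=0, keepdims=True)
        for _ in range(n_iter):
            G = D.T @ D
            off = ~np.eye(K, dtype=bool)
            G[off] = np.clip(G[off], -mu0, mu0)      # cap pairwise atom coherence
            w, V = np.linalg.eigh(G)                  # eigenvalues in ascending order
            top = np.argsort(w)[-n:]                  # keep the n largest modes
            D = np.diag(np.sqrt(np.clip(w[top], 0, None))) @ V[:, top].T
            D = D / np.linalg.norm(D, axis=0, keepdims=True)
        return D

    # hypothetical random dictionary: 16-dimensional patches, 64 atoms
    rng = np.random.default_rng(1)
    D0 = rng.standard_normal((16, 64))
    D1 = reduce_coherence(D0)
    coh = lambda D: np.max(np.abs((D.T @ D)[~np.eye(D.shape[1], dtype=bool)]))
    print(coh(D0 / np.linalg.norm(D0, axis=0)), coh(D1))   # coherence typically drops
    ```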

  15. CAD system of design and engineering provision of die forming of compressor blades for aircraft engines

    NASA Astrophysics Data System (ADS)

    Khaimovich, I. N.

    2017-10-01

    The article provides the calculation algorithms for blank design and die-forming tooling to produce the compressor blades for aircraft engines. The design system proposed in the article allows drafts of trimming and reducing dies to be generated automatically, leading to a significant reduction in work preparation time. A detailed analysis of the features of the blade structural elements was carried out; the adopted limitations and technological solutions allowed generalized algorithms to be formed for the parting face of the die over the entire contour of the engraving for different configurations of die forgings. The author worked out the algorithms and programs to calculate the locations of the three-dimensional points describing the configuration of the die cavity. As a result, the author obtained a generic mathematical model of the final die block in the form of a three-dimensional array of base points. This model is the basis for the creation of engineering documentation for the technological equipment and the means of its control.

  16. De novo reconstruction of gene regulatory networks from time series data, an approach based on formal methods.

    PubMed

    Ceccarelli, Michele; Cerulo, Luigi; Santone, Antonella

    2014-10-01

    Reverse engineering of gene regulatory relationships from genomics data is a crucial task to dissect the complex underlying regulatory mechanisms occurring in a cell. From a computational point of view, the reconstruction of gene regulatory networks is an underdetermined problem, as the number of possible solutions is typically large in contrast to the number of available independent data points. Many possible solutions can fit the available data, explaining the data equally well, but only one of them can be the biologically true solution. Several strategies have been proposed in the literature to reduce the search space and/or extend the amount of independent information. In this paper we propose a novel algorithm based on formal methods, mathematically rigorous techniques widely adopted in engineering to specify and verify complex software and hardware systems. Starting from a formal specification of gene regulatory hypotheses, we are able to mathematically prove whether a time course experiment belongs or not to the formal specification, determining in fact whether a gene regulation exists or not. The method is able to detect both the direction and the sign (inhibition/activation) of regulations, whereas most literature methods are limited to undirected and/or unsigned relationships. We empirically evaluated the approach on experimental and synthetic datasets in terms of precision and recall. In most cases we observed high levels of accuracy, outperforming the current state of the art, although the computational cost increases exponentially with the size of the network. We made available the tool implementing the algorithm at the following url: http://www.bioinformatics.unisannio.it. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Efficient Reverse-Engineering of a Developmental Gene Regulatory Network

    PubMed Central

    Cicin-Sain, Damjan; Ashyraliyev, Maksat; Jaeger, Johannes

    2012-01-01

    Understanding the complex regulatory networks underlying development and evolution of multi-cellular organisms is a major problem in biology. Computational models can be used as tools to extract the regulatory structure and dynamics of such networks from gene expression data. This approach is called reverse engineering. It has been successfully applied to many gene networks in various biological systems. However, to reconstitute the structure and non-linear dynamics of a developmental gene network in its spatial context remains a considerable challenge. Here, we address this challenge using a case study: the gap gene network involved in segment determination during early development of Drosophila melanogaster. A major problem for reverse-engineering pattern-forming networks is the significant amount of time and effort required to acquire and quantify spatial gene expression data. We have developed a simplified data processing pipeline that considerably increases the throughput of the method, but results in data of reduced accuracy compared to those previously used for gap gene network inference. We demonstrate that we can infer the correct network structure using our reduced data set, and investigate minimal data requirements for successful reverse engineering. Our results show that timing and position of expression domain boundaries are the crucial features for determining regulatory network structure from data, while it is less important to precisely measure expression levels. Based on this, we define minimal data requirements for gap gene network inference. Our results demonstrate the feasibility of reverse-engineering with much reduced experimental effort. This enables more widespread use of the method in different developmental contexts and organisms. Such systematic application of data-driven models to real-world networks has enormous potential. Only the quantitative investigation of a large number of developmental gene regulatory networks will allow us to discover whether there are rules or regularities governing development and evolution of complex multi-cellular organisms. PMID:22807664

  18. DEFINING THE PLAYERS IN HIGHER-ORDER NETWORKS: PREDICTIVE MODELING FOR REVERSE ENGINEERING FUNCTIONAL INFLUENCE NETWORKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Costa, Michelle N.; Stevens, S.L.

    Determining useful and informative causative influence networks is a difficult problem that is growing rapidly in importance due to the sharp increase in the amount of high-throughput data available for many systems. These networks can be used to predict behavior given observation of a small number of components, predict behavior at a future time point, or identify components that are critical to the functioning of the system under particular conditions. In these endeavors, incorporating observations of systems from a wide variety of viewpoints can be particularly beneficial, but this has often been undertaken with the objective of inferring networks that are generally applicable. The focus of the current work is to integrate both general observations and measurements taken for a particular pathology, that of ischemic stroke, to provide improved ability to produce useful predictions of systems behavior. A number of hybrid approaches have recently been proposed for network generation in which the Gene Ontology is used to filter or enrich network links inferred from gene expression data through reverse engineering methods. These approaches have been shown to improve the biological plausibility of the inferred relationships, but still treat knowledge-based and machine-learning inferences as incommensurable inputs. In this paper, we explore how further improvements may be achieved through a full integration of network inference insights achieved through application of the Gene Ontology and reverse engineering methods, with specific reference to the construction of dynamic models of transcriptional regulatory networks. We show that integrating several approaches to network construction, one based on reverse engineering from conditional transcriptional data, one based on reverse engineering from in situ hybridization data, and one based on functional associations derived from the Gene Ontology, using probabilities can improve the results of clustering as evaluated by a predictive model of transcriptional expression levels.

  19. Engineering applications of metaheuristics: an introduction

    NASA Astrophysics Data System (ADS)

    Oliva, Diego; Hinojosa, Salvador; Demeshko, M. V.

    2017-01-01

    Metaheuristic algorithms are important tools that in recent years have been used extensively in several fields. In engineering, there are a large number of problems that can be solved from an optimization point of view. This paper is an introduction to how metaheuristics can be used to solve complex engineering problems. Their use produces accurate results in problems that are computationally expensive. Experimental results support the performance obtained by the selected algorithms in such specific problems as digital filter design, image processing and solar cell design.

  20. Survey of Quantification and Distance Functions Used for Internet-based Weak-link Sociological Phenomena

    DTIC Science & Technology

    2016-03-01

    The PI studied all the mathematical literature he could find related to the Google search engine, the Google matrix, and PageRank, as well as the Yahoo search engine and a classic SearchKing HITS algorithm. The co-PI immersed herself in the sociology literature for the relevant...

  1. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.

  2. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    ERIC Educational Resources Information Center

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute essential technologies that account for the immense growth and…
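
    The thesis itself is not reproduced in this record; as background, the baseline computation that such acceleration techniques target is the PageRank power iteration, sketched below on a made-up four-node link graph with the usual damping formulation.

        # Minimal sketch of the standard PageRank power iteration; the tiny
        # adjacency matrix is illustrative only.
        import numpy as np

        def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
            n = adj.shape[0]
            out_deg = adj.sum(axis=1)
            # row-stochastic transition matrix; dangling nodes jump uniformly
            P = np.where(out_deg[:, None] > 0,
                         adj / np.maximum(out_deg[:, None], 1), 1.0 / n)
            r = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                r_new = damping * P.T @ r + (1 - damping) / n
                if np.abs(r_new - r).sum() < tol:
                    break
                r = r_new
            return r

        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
        print(pagerank(adj))   # scores sum to 1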

  3. Finding Patterns of Emergence in Science and Technology

    DTIC Science & Technology

    2012-09-24

    Formal evaluation scheduled. Case studies, eight examples: Tissue Engineering, Cold Fusion, RF Metamaterials, DNA Microarrays, Genetic Algorithms, RNAi... emerging capabilities... Evidence Quality (i.e., the rubric) and deliver comprehensible evidential support for nomination; demonstrate proof-of-concept nomination for Chinese...

  4. Statistical inference approach to structural reconstruction of complex networks from binary time series

    NASA Astrophysics Data System (ADS)

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
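
    As a rough illustration of the two kinds of probability values the abstract refers to, the sketch below fits a two-component binomial mixture by EM to synthetic pairwise co-flip counts and recovers a low ("no link") and a high ("link") rate. It is not the authors' estimator; the data and rates are made up.

        # EM for a two-component binomial mixture on synthetic co-flip counts.
        import numpy as np
        rng = np.random.default_rng(0)

        T = 200                                   # length of the binary time series
        true_p = np.array([0.05, 0.40])           # no-link vs link co-flip rates
        labels = rng.integers(0, 2, size=300)     # hidden pair types
        counts = rng.binomial(T, true_p[labels])  # observed co-flip counts per pair

        p = np.array([0.1, 0.3])                  # initial rate guesses
        w = np.array([0.5, 0.5])                  # initial mixture weights
        for _ in range(200):
            # E-step: responsibilities (log-space for numerical stability)
            logpost = (counts[:, None] * np.log(p)
                       + (T - counts[:, None]) * np.log(1 - p) + np.log(w))
            resp = np.exp(logpost - logpost.max(axis=1, keepdims=True))
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: update weights and component rates
            w = resp.mean(axis=0)
            p = (resp * counts[:, None]).sum(axis=0) / (resp.sum(axis=0) * T)

        print("estimated rates:", np.round(p, 3))  # close to 0.05 and 0.40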

  5. Statistical inference approach to structural reconstruction of complex networks from binary time series.

    PubMed

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.

  6. A synthetic biology approach to engineer a functional reversal of the β-oxidation cycle.

    PubMed

    Clomburg, James M; Vick, Jacob E; Blankschien, Matthew D; Rodríguez-Moyá, María; Gonzalez, Ramon

    2012-11-16

    While we have recently constructed a functional reversal of the β-oxidation cycle as a platform for the production of fuels and chemicals by engineering global regulators and eliminating native fermentative pathways, the system-level approach used makes it difficult to determine which of the many deregulated enzymes are responsible for product synthesis. This, in turn, limits efforts to fine-tune the synthesis of specific products and prevents the transfer of the engineered pathway to other organisms. In the work reported here, we overcome the aforementioned limitations by using a synthetic biology approach to construct and functionally characterize a reversal of the β-oxidation cycle. This was achieved through the in vitro kinetic characterization of each functional unit of the core and termination pathways, followed by their in vivo assembly and functional characterization. With this approach, the four functional units of the core pathway, thiolase, 3-hydroxyacyl-CoA dehydrogenase, enoyl-CoA hydratase/3-hydroxyacyl-CoA dehydratase, and acyl-CoA dehydrogenase/trans-enoyl-CoA reductase, were purified and kinetically characterized in vitro. When these four functional units were assembled in vivo in combination with thioesterases as the termination pathway, the synthesis of a variety of 4-C carboxylic acids from a one-turn functional reversal of the β-oxidation cycle was realized. The individual expression and modular construction of these well-defined core components exerted the majority of control over product formation, with only highly selective termination pathways resulting in shifts in product formation. Further control over product synthesis was demonstrated by overexpressing a long-chain thiolase that enables the operation of multiple turns of the reversal of the β-oxidation cycle and hence the synthesis of longer-chain carboxylic acids. The well-defined and self-contained nature of each functional unit makes the engineered reversal of the β-oxidation cycle "chassis neutral" and hence transferrable to the host of choice for efficient fuel or chemical production.

  7. Quiet Clean Short-Haul Experimental Engine (QCSEE) Over-The-Wing (OTW) propulsion system test report. Volume 2: Aerodynamics and performance. [engine performance tests to define propulsion system performance on turbofan engines

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The design and testing of the over the wing engine, a high bypass, geared turbofan engine, are discussed. The propulsion system performance is examined for uninstalled performance and installed performance. The fan aerodynamic performance and the D nozzle and reverser thrust performance are evaluated.

  8. The Science of Solubility: Using Reverse Engineering to Brew a Perfect Cup of Coffee

    ERIC Educational Resources Information Center

    West, Andrew B.; Sickel, Aaron J.; Cribbs, Jennifer D.

    2015-01-01

    The Next Generation Science Standards call for the integration of science and engineering. Often, the introduction of engineering activities occurs after instruction in the science content. That is, engineering is used as a way for students to elaborate on science ideas that have already been explored. However, using only this sequence of…

  9. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
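
    A toy version of the selection loop can make the structure concrete. The sketch below is not the Glenn/Rocketdyne implementation; the random fault-signature matrix and the worst-case-separation merit function stand in for the engine-model simulations and the model-based merit algorithm described above.

        # Tiny genetic algorithm selecting a sensor subset whose fault
        # signatures are maximally separable (illustrative merit function).
        import numpy as np
        rng = np.random.default_rng(1)

        n_sensors, n_faults, suite_size = 20, 6, 5
        signatures = rng.normal(size=(n_faults, n_sensors))   # expected deltas per fault

        def merit(suite):
            s = signatures[:, suite]
            d = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
            return d[np.triu_indices(n_faults, 1)].min()       # worst-case separation

        def random_suite():
            return rng.choice(n_sensors, size=suite_size, replace=False)

        pop = [random_suite() for _ in range(30)]
        for _ in range(60):
            pop.sort(key=merit, reverse=True)
            parents, children = pop[:10], []
            while len(children) < 20:
                a, b = parents[rng.integers(10)], parents[rng.integers(10)]
                pool = np.union1d(a, b)                        # crossover: pool parents' sensors
                child = rng.choice(pool, size=suite_size, replace=False)
                if rng.random() < 0.3:                         # mutation: swap in a random sensor
                    child[rng.integers(suite_size)] = rng.integers(n_sensors)
                if len(np.unique(child)) == suite_size:        # keep duplicate-free suites only
                    children.append(child)
            pop = parents + children

        best = max(pop, key=merit)
        print("best suite:", sorted(best.tolist()), "merit: %.3f" % merit(best))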

  10. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products into an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products into an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems, and also the provision of a solution methodology to compute an input schedule that yields minimum total time violation from due delivery time-windows. The problem is proved to be NP-complete. The heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute an input schedule for the pipeline problem. This algorithm runs in no more than O(T·E) time. This dissertation also extends the study to examine some operating attributes and the problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no more than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the problems tested were more than 30% greater than the optimal values, and approximately 40% of the tested problems were solved optimally by the algorithms.

  11. BOLD response to semantic and syntactic processing during hypoglycemia is load-dependent.

    PubMed

    Schafer, Robin J; Page, Kathleen A; Arora, Jagriti; Sherwin, Robert; Constable, R Todd

    2012-01-01

    This study investigates how syntactic and semantic load factors impact sentence comprehension and BOLD signal under moderate hypoglycemia. A dual session, whole brain fMRI study was conducted on 16 healthy participants using the glucose clamp technique. In one session, they experienced insulin-induced hypoglycemia (plasma glucose at ∼50 mg/dL); in the other, plasma glucose was maintained at euglycemic levels (∼100 mg/dL). During scans, subjects were presented with sentences of contrasting syntactic (embedding vs. conjunction) and semantic (reversibility vs. irreversibility) load. Semantic factors dominated the overall load effects on both performance (p<0.001) and BOLD response (p<0.01, corrected). Differential BOLD signal was observed in frontal, temporal, temporo-parietal and medio-temporal regions. Hypoglycemia and syntactic factors significantly impacted performance (p=0.002) and BOLD response (p<0.01, corrected) in the reversible clause conditions, more extensively in reversible-embedded than in reversible-conjoined clauses. Hypoglycemia resulted in a robust decrease in performance on reversible clauses and exerted attenuating effects on BOLD unselectively across cortical circuits. The dominance of reversibility in all measures underscores the distinction between the syntactic and semantic contrasts. The syntactic contrast is based on a quantitative difference in the algorithms interpreting embedded and conjoined structures. We suggest that the semantic contrast is based on a qualitative difference between algorithmic mapping of arguments in reversible clauses and heuristic linking in irreversible clauses. Because heuristics drastically reduce resource demand, the operations they support would resist the load-dependent cognitive consequences of hypoglycemia. © 2011 Elsevier Inc. All rights reserved.

  12. Mixed mode control method and engine using same

    DOEpatents

    Kesse, Mary L [Peoria, IL; Duffy, Kevin P [Metamora, IL

    2007-04-10

    A method of mixed mode operation of an internal combustion engine includes the steps of controlling a homogeneous charge combustion event timing in a given engine cycle, and controlling a conventional charge injection event to be at least a predetermined time after the homogeneous charge combustion event. An internal combustion engine is provided, including an electronic controller having a computer readable medium with a combustion timing control algorithm recorded thereon, the control algorithm including means for controlling a homogeneous charge combustion event timing and means for controlling a conventional injection event timing to be at least a predetermined time from the homogeneous charge combustion event.

  13. Identification of informative features for predicting proinflammatory potentials of engine exhausts.

    PubMed

    Wang, Chia-Chi; Lin, Ying-Chi; Lin, Yuan-Chung; Jhang, Syu-Ruei; Tung, Chun-Wei

    2017-08-18

    The immunotoxicity of engine exhausts is of high concern to human health due to the increasing prevalence of immune-related diseases. However, the evaluation of the immunotoxicity of engine exhausts is currently based on expensive and time-consuming experiments. It is desirable to develop efficient methods for immunotoxicity assessment. To accelerate the development of safe alternative fuels, this study proposed a computational method for identifying informative features for predicting proinflammatory potentials of engine exhausts. A principal component regression (PCR) algorithm was applied to develop prediction models. The informative features were identified by a sequential backward feature elimination (SBFE) algorithm. A total of 19 informative chemical and biological features were successfully identified by the SBFE algorithm. The informative features were utilized to develop a computational method named FS-CBM for predicting proinflammatory potentials of engine exhausts. The FS-CBM model achieved a high performance with correlation coefficient values of 0.997 and 0.943 obtained from training and independent test sets, respectively. The FS-CBM model was developed for predicting proinflammatory potentials of engine exhausts with a large improvement in prediction performance compared with our previous CBM model. The proposed method could be further applied to construct models for bioactivities of mixtures.
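
    The FS-CBM code itself is not shown in this record; the sketch below is a hedged scikit-learn rendition of the two ingredients named above (principal component regression plus greedy sequential backward feature elimination), run on synthetic data in place of the chemical and biological exhaust features.

        # PCA-based regression with sequential backward feature elimination.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 15))
        y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=60)   # two informative features

        def cv_score(features):
            model = make_pipeline(PCA(n_components=min(5, len(features))),
                                  LinearRegression())
            return cross_val_score(model, X[:, features], y, cv=5, scoring="r2").mean()

        features = list(range(X.shape[1]))
        best = cv_score(features)
        while len(features) > 1:
            # try dropping each remaining feature; keep the best-scoring reduction
            trials = [(cv_score([f for f in features if f != d]), d) for d in features]
            score, drop = max(trials)
            if score < best:            # stop when elimination no longer helps
                break
            best, features = score, [f for f in features if f != drop]

        print("retained features:", features, "CV R2: %.3f" % best)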

  14. Mitigation of reversible self-association and viscosity in a human IgG1 monoclonal antibody by rational, structure-guided Fv engineering

    PubMed Central

    Geoghegan, James C.; Fleming, Ryan; Damschroder, Melissa; Bishop, Steven M.; Sathish, Hasige A.; Esfandiary, Reza

    2016-01-01

    ABSTRACT Undesired solution behaviors such as reversible self-association (RSA), high viscosity, and liquid-liquid phase separation can introduce substantial challenges during development of monoclonal antibody formulations. Although a global mechanistic understanding of RSA (i.e., native and reversible protein-protein interactions) is sufficient to develop robust formulation controls, its mitigation via protein engineering requires knowledge of the sites of protein-protein interactions. In the study reported here, we coupled our previous hydrogen-deuterium exchange mass spectrometry findings with structural modeling and in vitro screening to identify the residues responsible for RSA of a model IgG1 monoclonal antibody (mAb-C), and rationally engineered variants with improved solution properties (i.e., reduced RSA and viscosity). Our data show that mutation of either solvent-exposed aromatic residues within the heavy and light chain variable regions or buried residues within the heavy chain/light chain interface can significantly mitigate RSA and viscosity by reducing the IgG's surface hydrophobicity. The engineering strategy described here highlights the utility of integrating complementary experimental and in silico methods to identify mutations that can improve developability, in particular, high concentration solution properties, of candidate therapeutic antibodies. PMID:27050875

  15. Methods and Algorithms for Computer-aided Engineering of Die Tooling of Compressor Blades from Titanium Alloy

    NASA Astrophysics Data System (ADS)

    Khaimovich, A. I.; Khaimovich, I. N.

    2018-01-01

    The article provides calculation algorithms for blank design and die-forming fitting to produce compressor blades for aircraft engines. The design system proposed in the article allows drafts of trimming and reducing dies to be generated automatically, leading to a significant reduction in work preparation time. A detailed analysis of the features of the blade structural elements was carried out; the adopted limitations and technological solutions made it possible to form generalized algorithms for the parting face of the die over the entire contour of the engraving for different configurations of die forgings. The authors worked out the algorithms and programs to calculate the three-dimensional point locations describing the configuration of the die cavity.

  16. Unraveling gene regulatory networks from time-resolved gene expression data -- a measures comparison study

    PubMed Central

    2011-01-01

    Background Inferring regulatory interactions between genes from transcriptomics time-resolved data, yielding reverse engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications. Results Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We also compare the considered 21 measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank and symbol based measures have the highest performance in inferring regulatory interactions. In addition, the proposed scoring scheme by asymmetric weighting has shown to be valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in inference of regulatory networks, show low performance on the short time series analyzed in this study. Conclusions Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures that are rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol based measures (for low noise level) and rank based measures (for high noise level) are the most suitable choices. PMID:21771321
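
    For readers unfamiliar with the relevance-network setting discussed above, the sketch below shows the bare mechanics: a rank-based (Spearman) similarity matrix computed from a short synthetic time series, thresholded to propose undirected links. The scoring schemes compared in the paper are not reproduced, and the threshold is arbitrary.

        # Rank-based relevance network on a toy three-gene time series.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(2)
        t = np.linspace(0, 2 * np.pi, 12)                   # a short series (12 points)
        g1 = np.sin(t) + 0.1 * rng.normal(size=12)
        g2 = np.sin(t + 0.2) + 0.1 * rng.normal(size=12)    # co-regulated with g1
        g3 = rng.normal(size=12)                            # unrelated gene
        expr = np.vstack([g1, g2, g3])                      # genes x time points

        rho, _ = spearmanr(expr, axis=1)                    # gene-by-gene rank correlation
        links = np.argwhere(np.triu(np.abs(rho) > 0.8, k=1))
        print("proposed links:", [tuple(map(int, l)) for l in links])  # likely: [(0, 1)]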

  17. Detecting a subsurface cylinder by a Time Reversal MUSIC like method

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Dell'Aversano, Angela; Leone, Giovanni

    2014-05-01

    In this contribution the problem of imaging a buried homogeneous circular cylinder is dealt with for a two-dimensional scalar geometry. Though the addressed geometry is extremely simple compared to real-world scenarios, it is of interest for a classical GPR civil engineering application: the subsurface prospecting of urban areas in order to detect and locate buried utilities. A large body of methods for subsurface imaging has been presented in the literature [1], ranging from migration algorithms to non-linear inverse scattering approaches. More recently, spectral estimation methods, which benefit from a sub-array data arrangement, have also been proposed and compared in [2]. Here a Time Reversal MUSIC (TRM) like method is employed. TRM was initially conceived to detect point-like scatterers and was then generalized to the case of extended scatterers [3]. In the latter case, no a priori information about the scatterers is exploited. However, utilities can often be schematized as circular cylinders. Here, we develop a TRM variant which uses this information to properly tailor the steering vector while implementing TRM. Accordingly, instead of a spatial map [3], the imaging procedure returns the scatterer's parameters such as its center position, radius and dielectric permittivity. The study is developed by numerical simulations. First the free-space case is considered in order to more easily introduce the idea and the mathematical structure of the problem. Then the analysis is extended to the half-space case. In both situations an FDTD forward solver is used to generate the synthetic data. As usual in TRM, a multi-view/multi-static single-frequency configuration is considered and emphasis is put on the role played by the number of available sensors. Acknowledgement This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." [1] A. Randazzo and R. Solimene, 'Development of New Methods for the Solution of Inverse Electromagnetic Scattering Problems by Buried Structures: State of the Art and Open Issues,' in COST Action TU1208: Civil Engineering Applications of Ground Penetrating Radar, Proceedings of the first Action's General Meeting, 2013. ISBN: 978-88-548-6191-6. [2] S. Meschino, L. Pajewski, M. Pastorino, A. Randazzo, G. Schettini, 'Detection of subsurface metallic utilities by means of a SAP technique: Comparing MUSIC- and SVM-based approaches,' Journal of Applied Geophysics, vol. 97, pp. 60-68, 2013. [3] E. A. Marengo, F. K. Gruber, F. Simonetti, 'Time-reversal MUSIC imaging of extended targets,' IEEE Trans. Image Process., vol. 16, pp. 1967-84, 2007.
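
    The cylinder-tailored steering vectors of this work are not reproduced here; as a generic reference point, the sketch below implements the standard free-space TR-MUSIC pseudospectrum for point scatterers: build a multistatic matrix, take its SVD, and image with the reciprocal of the projection onto the noise subspace. The geometry, frequency and simplified 2-D Green's function are illustrative assumptions.

        # Free-space TR-MUSIC pseudospectrum for two point scatterers.
        import numpy as np

        k = 2 * np.pi / 0.1                              # wavenumber for a 0.1 m wavelength
        sensors = np.column_stack([np.linspace(-0.5, 0.5, 11), np.zeros(11)])
        targets = np.array([[0.1, 0.4], [-0.2, 0.6]])    # two buried point scatterers

        def green(src, pts):
            r = np.linalg.norm(pts - src, axis=-1)
            return np.exp(1j * k * r) / np.sqrt(r)       # simplified 2-D Green's function

        # multistatic response matrix: transmit from each sensor, receive at all sensors
        K = sum(np.outer(green(t, sensors), green(t, sensors)) for t in targets)

        U, s, _ = np.linalg.svd(K)
        noise = U[:, len(targets):]                      # noise subspace

        xs, ys = np.linspace(-0.6, 0.6, 61), np.linspace(0.1, 0.9, 41)
        image = np.zeros((len(ys), len(xs)))
        for i, y in enumerate(ys):
            for j, x in enumerate(xs):
                g = green(np.array([x, y]), sensors)
                g = g / np.linalg.norm(g)
                image[i, j] = 1.0 / (np.linalg.norm(noise.conj().T @ g) + 1e-12)

        peak = np.unravel_index(image.argmax(), image.shape)
        print("pseudospectrum peak at x=%.2f m, y=%.2f m" % (xs[peak[1]], ys[peak[0]]))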

  18. Data Mining and Optimization Tools for Developing Engine Parameters Tools

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1998-01-01

    This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an Engine Condition Monitoring System. Tricia Erhardt and I studied the problem domain for developing an Engine Condition Monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset, which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic-algorithm-based search programs, which were written in C++ and used to demonstrate the capability of the GA in searching for an optimal solution in noisy datasets. From the study and discussion with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search.

  19. Tunable thermal conductivity via domain structure engineering in ferroelectric thin films: A phase-field simulation

    DOE PAGES

    Wang, Jian-Jun; Wang, Yi; Ihlefeld, Jon F.; ...

    2016-04-06

    Effective thermal conductivity as a function of domain structure is studied by solving the heat conduction equation using a spectral iterative perturbation algorithm in materials with inhomogeneous thermal conductivity distribution. Using this proposed algorithm, the experimentally measured effective thermal conductivities of domain-engineered {001}p-BiFeO3 thin films are quantitatively reproduced. In conjunction with two other testing examples, this proposed algorithm is proven to be an efficient tool for interpreting the relationship between the effective thermal conductivity and micro-/domain-structures. By combining this algorithm with the phase-field model of ferroelectric thin films, the effective thermal conductivity for PbZr1-xTixO3 films under different composition, thickness, strain, and working conditions is predicted. It is shown that the chemical composition, misfit strain, film thickness, film orientation, and a Piezoresponse Force Microscopy tip can be used to engineer the domain structures and tune the effective thermal conductivity. Furthermore, we expect our findings will stimulate future theoretical, experimental and engineering efforts on developing devices based on the tunable effective thermal conductivity in ferroelectric nanostructures.

  20. Tunable thermal conductivity via domain structure engineering in ferroelectric thin films: A phase-field simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jian-Jun; Wang, Yi; Ihlefeld, Jon F.

    Effective thermal conductivity as a function of domain structure is studied by solving the heat conduction equation using a spectral iterative perturbation algorithm in materials with inhomogeneous thermal conductivity distribution. Using this proposed algorithm, the experimentally measured effective thermal conductivities of domain-engineered {001}p-BiFeO3 thin films are quantitatively reproduced. In conjunction with two other testing examples, this proposed algorithm is proven to be an efficient tool for interpreting the relationship between the effective thermal conductivity and micro-/domain-structures. By combining this algorithm with the phase-field model of ferroelectric thin films, the effective thermal conductivity for PbZr1-xTixO3 films under different composition, thickness, strain, and working conditions is predicted. It is shown that the chemical composition, misfit strain, film thickness, film orientation, and a Piezoresponse Force Microscopy tip can be used to engineer the domain structures and tune the effective thermal conductivity. Furthermore, we expect our findings will stimulate future theoretical, experimental and engineering efforts on developing devices based on the tunable effective thermal conductivity in ferroelectric nanostructures.

  1. Zombie algorithms: a timesaving remote sensing systems engineering tool

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.; Powell, Dylan C.; Marley, Stephen

    2008-08-01

    In modern horror fiction, zombies are generally undead corpses brought back from the dead by supernatural or scientific means, and are rarely under anyone's direct control. They typically have very limited intelligence, and hunger for the flesh of the living [1]. Typical spectroradiometric or hyperspectral instruments provide calibrated radiances for a number of remote sensing algorithms. The algorithms typically must meet specified latency and availability requirements while yielding products at the required quality. These systems, whether research, operational, or a hybrid, are typically cost constrained. The complexity of the algorithms can be high, and may evolve and mature over time as sensor characterization changes, product validation occurs, and areas of scientific-basis improvement are identified and completed. This suggests the need for a systems engineering process for algorithm maintenance that is agile, cost efficient, repeatable, and predictable. Experience on remote sensing science data systems suggests the benefits of "plug-n-play" concepts of operation. The concept, while intuitively simple, can be challenging to implement in practice. The use of zombie algorithms, empty shells that outwardly resemble the form, fit, and function of a "complete" algorithm without the implemented theoretical basis, gives ground systems advantages equivalent to those obtained by integrating sensor engineering models onto the spacecraft bus. Combining this with a mature, repeatable process for incorporating the theoretical basis, or scientific core, into the "head" of the zombie algorithm, along with associated scripting and registration, provides an easy "on ramp" for the rapid and low-risk integration of scientific applications into operational systems.
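
    A minimal sketch of the plug-n-play idea follows; all class and function names are hypothetical. The shell exposes the operational interface (inputs, outputs, product metadata) while its scientific core is an empty pass-through, so the matured theoretical basis can later be registered without touching the ground-system plumbing.

        # Conceptual "zombie algorithm" shell with a swappable scientific core.
        import numpy as np

        class ZombieRetrieval:
            """Form/fit/function shell for a level-2 retrieval algorithm."""
            product_name = "EXAMPLE_PRODUCT"

            def __init__(self, science_core=None):
                # default core: pass radiances through untouched (no science yet)
                self.science_core = science_core or (lambda radiances: radiances)

            def run(self, radiances):
                # the operational wrapper (checks, packaging) never changes
                assert radiances.ndim == 2, "expected a scan x channel array"
                return {"name": self.product_name,
                        "product": self.science_core(radiances)}

        # later, the matured theoretical basis is registered as the core
        def placeholder_physics_core(radiances):
            return 1.2 * radiances + 3.0          # stand-in for the real science

        shell = ZombieRetrieval()                            # zombie: shell only
        full = ZombieRetrieval(placeholder_physics_core)     # science plugged in
        data = np.ones((5, 4))
        print(shell.run(data)["product"][0, 0], full.run(data)["product"][0, 0])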

  2. Intelligent Life-Extending Controls for Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei; Chen, Philip; Jaw, Link

    2005-01-01

    Aircraft engine controllers are designed and operated to provide desired performance and stability margins. The purpose of life-extending control (LEC) is to study the relationship between control action and engine component life usage, and to design an intelligent control algorithm to provide proper trade-offs between performance and engine life usage. The benefit of this approach is that it is expected to maintain safety while minimizing the overall operating costs. With the advances in computer technology, engine operation models, and damage physics, it is necessary to reevaluate the control strategy for overall operating cost considerations. This paper uses the thermo-mechanical fatigue (TMF) of a critical component to demonstrate how an intelligent engine control algorithm can drastically reduce engine life usage with minimal sacrifice in performance. A Monte Carlo simulation is also performed to evaluate the likely engine damage accumulation under various operating conditions. The simulation results show that an optimized acceleration schedule can provide significant life savings in selected engine components.

  3. Sensor fault diagnosis of aero-engine based on divided flight status.

    PubMed

    Zhao, Zhen; Zhang, Jun; Sun, Yigang; Liu, Zhexu

    2017-11-01

    Fault diagnosis and safety analysis of aero-engines, whose safety directly affects the flight safety of an aircraft, have attracted more and more attention in modern society. In this paper, the problem of sensor fault diagnosis is investigated for an aero-engine during the whole flight process. Considering that the aero-engine works in different statuses throughout the flight, a flight status division-based sensor fault diagnosis method is presented to improve fault diagnosis precision. First, the aero-engine status is partitioned according to normal sensor data over the whole flight process using a clustering algorithm. Based on that, a diagnosis model is built for each status using the principal component analysis algorithm. Finally, the sensors are monitored with the built diagnosis models by identifying the aero-engine status. The simulation results illustrate the effectiveness of the proposed method.
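
    A hedged sketch of the two-stage idea (not the authors' code) is given below: cluster normal sensor data into statuses with k-means, fit a PCA model per status, and flag samples whose reconstruction error under the matching status model exceeds a control limit. The synthetic data and the crude limit are illustrative assumptions.

        # Flight-status clustering plus per-status PCA residual monitoring.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        # two synthetic "statuses" (e.g. climb vs cruise) in 6 correlated sensors
        base = rng.normal(size=(400, 2)) @ rng.normal(size=(2, 6))
        normal = np.vstack([base + 5.0, base - 5.0]) + 0.1 * rng.normal(size=(800, 6))

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(normal)
        models, limits = {}, {}
        for c in range(2):
            X = normal[km.labels_ == c]
            pca = PCA(n_components=2).fit(X)
            err = ((X - pca.inverse_transform(pca.transform(X))) ** 2).sum(axis=1)
            models[c], limits[c] = pca, err.max()          # crude control limit

        def is_faulty(sample):
            c = km.predict(sample.reshape(1, -1))[0]       # identify the status first
            pca = models[c]
            recon = pca.inverse_transform(pca.transform(sample.reshape(1, -1)))[0]
            return ((sample - recon) ** 2).sum() > limits[c]

        healthy = normal[0]
        faulty = healthy.copy()
        faulty[3] += 25.0                                   # a drifted sensor reading
        print(is_faulty(healthy), is_faulty(faulty))        # expected: False True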

  4. Sensor fault diagnosis of aero-engine based on divided flight status

    NASA Astrophysics Data System (ADS)

    Zhao, Zhen; Zhang, Jun; Sun, Yigang; Liu, Zhexu

    2017-11-01

    Fault diagnosis and safety analysis of aero-engines, whose safety directly affects the flight safety of an aircraft, have attracted more and more attention in modern society. In this paper, the problem of sensor fault diagnosis is investigated for an aero-engine during the whole flight process. Considering that the aero-engine works in different statuses throughout the flight, a flight status division-based sensor fault diagnosis method is presented to improve fault diagnosis precision. First, the aero-engine status is partitioned according to normal sensor data over the whole flight process using a clustering algorithm. Based on that, a diagnosis model is built for each status using the principal component analysis algorithm. Finally, the sensors are monitored with the built diagnosis models by identifying the aero-engine status. The simulation results illustrate the effectiveness of the proposed method.

  5. Project-Based Teaching-Learning Computer-Aided Engineering Tools

    ERIC Educational Resources Information Center

    Simoes, J. A.; Relvas, C.; Moreira, R.

    2004-01-01

    Computer-aided design, computer-aided manufacturing, computer-aided analysis, reverse engineering and rapid prototyping are tools that play a key role within product design. These are areas of technical knowledge that must be part of engineering and industrial design courses' curricula. This paper describes our teaching experience of…

  6. Parallel Algorithms for Groebner-Basis Reduction

    DTIC Science & Technology

    1987-09-25

    Technical report: Parallel Algorithms for Groebner-Basis Reduction. Productivity Engineering in the UNIX Environment.

  7. An algorithm for targeting finite burn maneuvers

    NASA Technical Reports Server (NTRS)

    Barbieri, R. W.; Wyatt, G. H.

    1972-01-01

    An algorithm was developed to solve the following problem: given the characteristics of the engine to be used to make a finite burn maneuver and given the desired orbit, when must the engine be ignited and what must be the orientation of the thrust vector so as to obtain the desired orbit? The desired orbit is characterized by classical elements and functions of these elements whereas the control parameters are characterized by the time to initiate the maneuver and three direction cosines which locate the thrust vector. The algorithm was built with a Monte Carlo capability whereby samples are taken from the distribution of errors associated with the estimate of the state and from the distribution of errors associated with the engine to be used to make the maneuver.

  8. More than Just Hot Air: How Hairdryers and Role Models Inspire Girls in Engineering

    ERIC Educational Resources Information Center

    Kekelis, Linda; Larkin, Molly; Gomes, Lyn

    2014-01-01

    This article describes a reverse-engineering project in which female students take apart a hair dryer, giving them an opportunity to see the many different kinds of engineering disciplines involved in making a hair dryer and how they work together. Mechanical engineer Lyn Gomes describes her experience leading a group of middle school girls through…

  9. Profile of an Effective Engineering Manager at the Naval Avionics Center

    DTIC Science & Technology

    1991-06-01

    Keywords: Leadership; Engineering Management Effectiveness; Engineers; Engineering Managers; Naval Avionics Center. The purpose of the Institute is to support the implementation of the NAC Leadership/Management Principles throughout NAC. The Leadership/Management Principles are as follows: Develop and Maintain a Corporate Outlook; Communicate the Organizational Vision through Positive Leadership...

  10. Multi-source Geospatial Data Analysis with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Erickson, T.

    2014-12-01

    The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality. https://earthengine.google.org

  11. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-04

    Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded style searches can only be made accessible if data analysis becomes not only unified but also and most importantly scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed using the Python scripting language using a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
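
    Ursgal's actual "combined PEP" implementation is not reproduced in this record; the sketch below only illustrates the underlying idea of combining posterior error probabilities (PEPs) from several engines for one peptide-spectrum match, using Bayes' theorem under a naive independence assumption. The numbers are made up.

        # Naive Bayesian combination of per-engine PEPs for a single PSM.
        def combined_pep(peps):
            """Combine PEPs from several search engines (independence assumed)."""
            p_correct, p_incorrect = 1.0, 1.0
            for pep in peps:
                p_correct *= (1.0 - pep)     # evidence that the match is correct
                p_incorrect *= pep           # evidence that the match is incorrect
            return p_incorrect / (p_correct + p_incorrect)

        # a PSM that three engines each consider fairly reliable
        print(round(combined_pep([0.05, 0.10, 0.20]), 4))   # about 0.0015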

  12. Analysis of the flow field generated near an aircraft engine operating in reverse thrust. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ledwith, W. A., Jr.

    1972-01-01

    A computer solution is developed to the exhaust gas reingestion problem for aircraft operating in the reverse thrust mode on a crosswind-free runway. The computer program determines the location of the inlet flow pattern, whether the exhaust efflux lies within the inlet flow pattern or not, and if so, the approximate time before the reversed flow reaches the engine inlet. The program is written so that the user is free to select discrete runway speeds or to study the entire aircraft deceleration process for both the far field and cross-ingestion problems. While developed with STOL applications in mind, the solution is equally applicable to conventional designs. The inlet and reversed jet flow fields involved in the problem are assumed to be noninteracting. The nacelle model used in determining the inlet flow field is generated using an iterative solution to the Neuman problem from potential flow theory while the reversed jet flow field is adapted using an empirical correlation from the literature. Sample results obtained using the program are included.

  13. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    NASA Astrophysics Data System (ADS)

    Ghezavati, V. R.; Beigi, M.

    2016-12-01

    During the last decade, the stringent pressures from environmental and social requirements have spurred an interest in designing a reverse logistics (RL) network. The success of a logistics system may depend on the decisions on facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet, together with the design of a multi-echelon, capacitated reverse logistics network, is considered; such problems arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) model for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions in a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, while for medium-to-large-sized problems, the proposed NSGA-II works better than the ɛ-constraint method.
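
    The NSGA-II machinery (solution encoding, crossover, crowding distance) is not shown in this record; as a pointer to its core idea, the sketch below extracts the Pareto front for two minimization objectives from a handful of made-up (cost, time-window violation) pairs.

        # Non-dominated (Pareto) front extraction for two minimization objectives.
        def pareto_front(points):
            """Return the non-dominated subset of (f1, f2) pairs to be minimized."""
            front = []
            for i, p in enumerate(points):
                dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                                for j, q in enumerate(points) if j != i)
                if not dominated:
                    front.append(p)
            return front

        # (total cost, total time-window violation) for candidate RL designs
        solutions = [(120, 8.0), (100, 12.5), (150, 3.0), (125, 9.0), (160, 3.5)]
        print(sorted(pareto_front(solutions)))
        # prints [(100, 12.5), (120, 8.0), (150, 3.0)], which are mutually non-dominated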

  14. Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Gorelick, Noel

    2013-04-01

    The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geo-spatial data. All computation is performed lazily; nothing is computed until it's required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.
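
    A short example of the deferred computation model described above is sketched below using the Earth Engine Python client. It assumes an authenticated environment and an available Landsat 8 top-of-atmosphere collection ID; the exact asset name and band names may differ in the current catalog.

        # Lazy, server-side computation with the Earth Engine Python API.
        import ee

        ee.Initialize()                               # assumes prior ee.Authenticate()

        # nothing is computed yet: these are descriptions of server-side objects
        composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
                       .filterDate('2020-06-01', '2020-09-01')
                       .median())
        ndvi = composite.normalizedDifference(['B5', 'B4'])

        # computation happens only when a concrete result is requested
        stats = ndvi.reduceRegion(
            reducer=ee.Reducer.mean(),
            geometry=ee.Geometry.Point([-122.29, 37.90]).buffer(5000),
            scale=30)
        print(stats.getInfo())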

  15. Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Gorelick, N.

    2012-12-01

    The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geo-spatial data. All computation is performed lazily; nothing is computed until it's required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.

  16. Autonomous Propulsion System Technology Being Developed to Optimize Engine Performance Throughout the Lifecycle

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2004-01-01

    The goal of the Autonomous Propulsion System Technology (APST) project is to reduce pilot workload under both normal and anomalous conditions. Ongoing work under APST develops and leverages technologies that provide autonomous engine monitoring, diagnosing, and controller adaptation functions, resulting in an integrated suite of algorithms that maintain the propulsion system's performance and safety throughout its life. Engine-to-engine performance variation occurs among new engines because of manufacturing tolerances and assembly practices. As an engine wears, the performance changes as operability limits are reached. In addition to these normal phenomena, other unanticipated events such as sensor failures, bird ingestion, or component faults may occur, affecting pilot workload as well as compromising safety. APST will adapt the controller as necessary to achieve optimal performance for a normal aging engine, and the safety net of APST algorithms will examine and interpret data from a variety of onboard sources to detect, isolate, and if possible, accommodate faults. Situations that cannot be accommodated within the faulted engine itself will be referred to a higher level vehicle management system. This system will have the authority to redistribute the faulted engine's functionality among other engines, or to replan the mission based on this new engine health information. Work is currently underway in the areas of adaptive control to compensate for engine degradation due to aging, data fusion for diagnostics and prognostics of specific sensor and component faults, and foreign object ingestion detection. In addition, a framework is being defined for integrating all the components of APST into a unified system. A multivariable, adaptive, multimode control algorithm has been developed that accommodates degradation-induced thrust disturbances during throttle transients. The baseline controller of the engine model currently being investigated has multiple control modes that are selected according to some performance or operational criteria. As the engine degrades, parameters shift from their nominal values. Thus, when a new control mode is swapped in, a variable that is being brought under control might have an excessive initial error. The new adaptive algorithm adjusts the controller gains on the basis of the level of degradation to minimize the disruptive influence of the large error on other variables and to recover the desired thrust response.

  17. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium.

    PubMed

    Kapfer, Sebastian C; Krauth, Werner

    2017-12-15

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
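
    For readers unfamiliar with the lattice-gas limits mentioned above, the reversible SEP and the irreversible TASEP on a ring can be simulated in a few lines; this is a rough illustrative sketch, not the authors' lifted TASEP or event-chain implementation.

```python
# Toy simulation of the symmetric (SEP) and totally asymmetric (TASEP) simple
# exclusion processes on a ring, sketching the lattice-gas limits discussed
# above.  Illustrative only; not the authors' lifted TASEP.
import random

def step(occupied, L, asymmetric):
    """Attempt one particle hop on a ring of L sites with hard-core exclusion."""
    i = random.choice(sorted(occupied))                 # pick a random particle
    d = 1 if asymmetric else random.choice((-1, 1))     # TASEP hops right only
    j = (i + d) % L
    if j not in occupied:                               # exclusion rule
        occupied.remove(i)
        occupied.add(j)

def run(L=20, N=10, sweeps=1000, asymmetric=True, seed=0):
    random.seed(seed)
    occupied = set(range(N))                            # start with a packed block
    for _ in range(sweeps * N):
        step(occupied, L, asymmetric)
    return sorted(occupied)

print("TASEP:", run(asymmetric=True))
print("SEP:  ", run(asymmetric=False))
```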

  18. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    NASA Astrophysics Data System (ADS)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.

  19. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    Probabilistic inversion techniques are superior to the classical optimization-based approach in all but one aspect: they require quite exhaustive computations, which prohibits their use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is an ongoing effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problem at hand, namely time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  20. Algorithm design, user interface, and optimization procedure for a fuzzy logic ramp metering algorithm : a training manual for freeway operations engineers

    DOT National Transportation Integrated Search

    2000-02-01

    This training manual describes the fuzzy logic ramp metering algorithm in detail, as implemented system-wide in the greater Seattle area. The method of defining the inputs to the controller and optimizing the performance of the algorithm is explained...

  1. An Algorithm for Interactive Modeling of Space-Transportation Engine Simulations: A Constraint Satisfaction Approach

    NASA Technical Reports Server (NTRS)

    Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara

    2001-01-01

    In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also be easily plugged into this algorithm for further efficiency enhancements. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adapted to many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
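
    The flavor of constraint processing with relational operators can be illustrated with a toy sketch in which domains and constraints are sets of tuples and pruning is done by joins and projections; this is illustrative only, not the IMBSES algorithm.

```python
# Toy sketch of constraint processing with relational operators: domains are
# unary relations, constraints are binary relations, and pruning is done by a
# join (intersection on a shared scheme) followed by projections.
# Illustrative only; not the IMBSES algorithm described above.
from itertools import product

# Unary "relations" giving the domain of each variable.
dom_x = {1, 2, 3}
dom_y = {1, 2, 3}

# Binary constraint relations over (x, y).
c_lt = {(x, y) for x, y in product(dom_x, dom_y) if x < y}        # x < y
c_sum = {(x, y) for x, y in product(dom_x, dom_y) if x + y == 4}  # x + y = 4

# Join the two constraint relations on their common scheme (x, y), then
# project onto each variable to prune its domain.
joined = c_lt & c_sum
dom_x &= {x for x, _ in joined}
dom_y &= {y for _, y in joined}

print(joined)        # {(1, 3)}
print(dom_x, dom_y)  # {1} {3}
```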

  2. Time reversal and phase coherent music techniques for super-resolution ultrasound imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Labyed, Yassin

    Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements. A modified TR-MUSIC imaging algorithm is used to account for ultrasound scattering from both density and compressibility contrasts. The phase response of ultrasound transducer elements is accounted for in a PC-MUSIC system.
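
    The noise-subspace idea at the core of any MUSIC-type imaging scheme can be sketched generically as follows; this is not the patented windowed, attenuation-corrected TR-MUSIC implementation.

```python
# Generic MUSIC pseudospectrum sketch illustrating the noise-subspace idea
# behind TR-MUSIC.  Not the patented windowed/attenuation-corrected algorithm.
import numpy as np

def music_pseudospectrum(K, steering_vectors, n_sources):
    """K: N x N time-reversal (or covariance) matrix from N transducer elements.
    steering_vectors: (N, n_pixels) array of modeled array responses g(r).
    Returns a pseudospectrum that peaks at the scatterer locations."""
    _, _, Vh = np.linalg.svd(K)
    noise = Vh[n_sources:]                       # rows spanning the noise subspace
    proj = noise @ steering_vectors              # projection of each g(r) onto it
    return 1.0 / (np.sum(np.abs(proj) ** 2, axis=0) + 1e-12)

# Tiny synthetic example: 8 elements, one point scatterer at pixel 12 of 32.
N, n_pix, true_pix = 8, 32, 12
g = np.exp(1j * np.outer(np.arange(N), np.linspace(0, np.pi, n_pix)))  # toy steering vectors
K = np.outer(g[:, true_pix], g[:, true_pix].conj())                    # rank-1 TR matrix
spec = music_pseudospectrum(K, g, n_sources=1)
print(int(np.argmax(spec)) == true_pix)   # True: pseudospectrum peaks at the scatterer
```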

  3. Combustion Light Gas Gun Technology Demonstration

    DTIC Science & Technology

    2007-01-23

    J. G. Handbook of Cryogenic Engineering. Philadelphia: Taylor and Francis, 1998. ISBN 1-56032-332-9 ... Myth #2 from “Twenty Hydrogen Myths” by ... the second using a helium-refrigerated reverse Brayton cycle manufactured by Linde. Neither system was designed specifically for naval applications ... Since floor space is at a premium, the helium-refrigerated reverse Brayton cycle is the system of primary current interest. The reverse Brayton

  4. Reverse Engineering of Genome-wide Gene Regulatory Networks from Gene Expression Data

    PubMed Central

    Liu, Zhi-Ping

    2015-01-01

    Transcriptional regulation plays vital roles in many fundamental biological processes. Reverse engineering of genome-wide regulatory networks from high-throughput transcriptomic data provides a promising way to characterize the global scenario of regulatory relationships between regulators and their targets. In this review, we summarize and categorize the main frameworks and methods currently available for inferring transcriptional regulatory networks from microarray gene expression profiling data. We overview each strategy and introduce representative methods. Their assumptions, advantages, shortcomings, and possible improvements and extensions are also clarified and commented upon. PMID:25937810

  5. Effects of an in-flight thrust reverser on the stability and control characteristics of a single-engine fighter airplane model

    NASA Technical Reports Server (NTRS)

    Mercer, C. E.; Maiden, D. L.

    1972-01-01

    The changes in thrust minus drag performance as well as longitudinal and directional stability and control characteristics of a single-engine jet aircraft attributable to an in-flight thrust reverser of the blocker-deflector door type were investigated in a 16-foot transonic wind tunnel. The longitudinal and directional stability data are presented. Test conditions simulated landing approach conditions as well as high speed maneuvering such as may be required for combat or steep descent from high altitude.

  6. Development of high temperature liquid lubricants for low-heat rejection: Heavy duty diesel engines

    NASA Technical Reports Server (NTRS)

    Wiczynski, P. D.; Marolewski, T. A.

    1993-01-01

    The objective of this DOE program was to develop a liquid lubricant that will allow advanced diesel engines to operate at top ring reversal temperatures approaching 500 C and sump temperatures approaching 250 C. The lubricants developed demonstrated a marginal increase in sump temperature capability, approximately 15 C, and an increase in top ring reversal temperature. A 15W-40 synthetic lubricant designated HTL-4 was the best lubricant developed in terms of stability, wear control, deposit control, dispersancy, and particulate emissions.

  7. A fuzzy-match search engine for physician directories.

    PubMed

    Rastegar-Mojarad, Majid; Kadolph, Christopher; Ye, Zhan; Wall, Daniel; Murali, Narayana; Lin, Simon

    2014-11-04

    A search engine to find physicians' information is a basic but crucial function of a health care provider's website. Inefficient search engines, which return no results or incorrect results, can lead to patient frustration and potential customer loss. A search engine that can handle misspellings and spelling variations of names is needed, as the United States (US) has culturally, racially, and ethnically diverse names. The Marshfield Clinic website provides a search engine for users to search for physicians' names. The current search engine provides an auto-completion function, but it requires an exact match. We observed that 26% of all searches yielded no results. The goal was to design a fuzzy-match algorithm to aid users in finding physicians more easily and quickly. Instead of an exact match search, we used a fuzzy algorithm to find similar matches for searched terms. In the algorithm, we solved three types of search engine failures: "Typographic", "Phonetic spelling variation", and "Nickname". To solve these mismatches, we used a customized Levenshtein distance calculation that incorporated Soundex coding and a lookup table of nicknames derived from US census data. Using the "Challenge Data Set of Marshfield Physician Names," we evaluated the top-ten accuracy of the fuzzy-match engine (90%) and compared it with exact match (0%), Soundex (24%), Levenshtein distance (59%), and the top-one accuracy of the fuzzy-match engine (71%). We designed, created a reference implementation of, and evaluated a fuzzy-match search engine for physician directories. The open-source code is available at the codeplex website and a reference implementation is available for demonstration at the datamarsh website.
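
    A compact sketch of nickname-aware edit-distance matching in the spirit of the approach above; the nickname table and ranking are illustrative, not the Marshfield implementation (which also incorporates Soundex coding).

```python
# Sketch of nickname-aware fuzzy name matching in the spirit of the approach
# above; the nickname table and ranking are illustrative, not the Marshfield code.

NICKNAMES = {"bill": "william", "bob": "robert", "liz": "elizabeth"}  # toy lookup table

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_rank(query, directory, top=10):
    """Rank directory names by edit distance, after canonicalizing nicknames."""
    q = NICKNAMES.get(query.lower(), query.lower())
    scored = [(levenshtein(q, NICKNAMES.get(name.lower(), name.lower())), name)
              for name in directory]
    return [name for _, name in sorted(scored)[:top]]

print(fuzzy_rank("Bill", ["William", "Wilma", "Robert"], top=2))  # ['William', 'Wilma']
```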

  8. Solving TSP problem with improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying

    2018-05-01

    The TSP is a typical NP-hard problem. The optimization of the vehicle routing problem (VRP) and of city pipeline layout can be cast as TSP instances; therefore, solving the TSP efficiently is very important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has some limitations. Improving the selection operator of the genetic algorithm and introducing an elite retention strategy ensure the quality of the selection operation. In the mutation operation, an adaptive selection of mutation operators improves the quality of the search and of the variation. After the chromosomes have evolved, a one-way evolutionary reverse operation is added, which gives the offspring more opportunities to inherit high-quality genes from their parents and improves the algorithm's ability to search for the optimal solution.
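
    A compact sketch of a GA for the TSP with elite retention and a segment-reversal (inversion) operator, in the spirit of the improvements described above; parameters and operators are illustrative, not the authors' exact algorithm.

```python
# Sketch of a genetic algorithm for the TSP with elite retention and a
# segment-reversal operator, in the spirit of the improvements described above.
# Parameters and operators are illustrative, not the authors' exact algorithm.
import random, math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def reverse_mutation(tour):
    """Reverse a random segment of the tour (inversion operator)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j + 1] = p1[i:j + 1]
    fill = [c for c in p2 if c not in child]
    for k in range(len(p1)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def ga_tsp(pts, pop_size=60, generations=300, elite=2, seed=1):
    random.seed(seed)
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, pts))
        new_pop = pop[:elite]                              # elite retention
        while len(new_pop) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)   # select from the better half
            child = order_crossover(a, b)
            if random.random() < 0.3:
                child = reverse_mutation(child)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=lambda t: tour_length(t, pts))

pts = [(random.random(), random.random()) for _ in range(20)]
best = ga_tsp(pts)
print(round(tour_length(best, pts), 3))
```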

  9. [Development of computer aided forming techniques in manufacturing scaffolds for bone tissue engineering].

    PubMed

    Wei, Xuelei; Dong, Fuhui

    2011-12-01

    To review recent advances in the research and application of computer aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over the last decade have focused on computer aided forming techniques for bone scaffold construction using various scaffold materials, based on computer aided design (CAD) and bone scaffold rapid prototyping (RP). CAD includes medical CAD, STL, and reverse design. Reverse design can fully simulate normal bone tissue and could be very useful for CAD. RP techniques include fused deposition modeling, three dimensional printing, selective laser sintering, three dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With the rapid development of molding and forming techniques, computer aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.

  10. An algebra-based method for inferring gene regulatory networks.

    PubMed

    Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard

    2014-03-26

    The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the dynamic patterns present in the network. Boolean polynomial dynamical systems provide a powerful modeling framework for the reverse engineering of gene regulatory networks that enables a rich mathematical structure on the model search space. A C++ implementation of the method, distributed under the LGPL license, is available, together with the source code, at http://www.paola-vera-licona.net/Software/EARevEng/REACT.html.
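
    To make the BPDS encoding concrete, here is a minimal toy Boolean network written as polynomials over GF(2) and iterated as a dynamical system; this is illustrative only, not the REACT implementation.

```python
# Toy Boolean polynomial dynamical system (BPDS) over GF(2): each gene's next
# state is a polynomial in the current states (addition is XOR, multiplication
# is AND).  Illustrative only; not the REACT implementation referenced above.

def step(state):
    """state = (x1, x2, x3), each 0 or 1."""
    x1, x2, x3 = state
    f1 = (x2 + x3) % 2          # x1' = x2 + x3
    f2 = (x1 * x3) % 2          # x2' = x1 * x3
    f3 = (x1 + x1 * x2) % 2     # x3' = x1 + x1*x2
    return (f1, f2, f3)

def trajectory(state, n=8):
    traj = [state]
    for _ in range(n):
        state = step(state)
        traj.append(state)
    return traj

# Iterating from an initial condition reveals the attractor the network falls into.
for s in trajectory((1, 0, 1)):
    print(s)
```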

  11. Correction of geometric distortion in Propeller echo planar imaging using a modified reversed gradient approach.

    PubMed

    Chang, Hing-Chiu; Chuang, Tzu-Chao; Lin, Yi-Ru; Wang, Fu-Nien; Huang, Teng-Yi; Chung, Hsiao-Wen

    2013-04-01

    This study investigates the application of a modified reversed gradient algorithm to the Propeller-EPI imaging method (periodically rotated overlapping parallel lines with enhanced reconstruction based on echo-planar imaging readout) for corrections of geometric distortions due to the EPI readout. Propeller-EPI acquisition was executed with 360-degree rotational coverage of the k-space, from which the image pairs with opposite phase-encoding gradient polarities were extracted for reversed gradient geometric and intensity corrections. The spatial displacements obtained on a pixel-by-pixel basis were fitted using a two-dimensional polynomial followed by low-pass filtering to assure correction reliability in low-signal regions. Single-shot EPI images were obtained on a phantom, whereas high spatial resolution T2-weighted and diffusion tensor Propeller-EPI data were acquired in vivo from healthy subjects at 3.0 Tesla, to demonstrate the effectiveness of the proposed algorithm. Phantom images show success of the smoothed displacement map concept in providing improvements of the geometric corrections at low-signal regions. Human brain images demonstrate prominently superior reconstruction quality of Propeller-EPI images with modified reversed gradient corrections as compared with those obtained without corrections, as evidenced from verification against the distortion-free fast spin-echo images at the same level. The modified reversed gradient method is an effective approach to obtain high-resolution Propeller-EPI images with substantially reduced artifacts.

  12. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented experienced-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  13. Real-time failure control (SAFD)

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.

    1990-01-01

    The Real Time Failure Control program involves development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
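
    A minimal sketch of the signal-based limit monitoring idea described above: a channel is flagged when it deviates from its predetermined mean by more than a chosen number of standard deviations. The channel names, baselines, and threshold here are hypothetical, not the SSME values.

```python
# Minimal sketch of signal-based limit monitoring in the spirit of SAFD: flag a
# channel when it strays too far from its expected mean.  The channel names,
# means, and threshold here are hypothetical, not the SSME values.

BASELINE = {                      # (mean, standard deviation) per channel
    "hpotp_speed": (30000.0, 150.0),
    "mcc_pressure": (2900.0, 25.0),
}

def check_sample(sample, n_sigma=4.0):
    """Return the channels whose current value exceeds mean +/- n_sigma * std."""
    alarms = []
    for channel, value in sample.items():
        mean, std = BASELINE[channel]
        if abs(value - mean) > n_sigma * std:
            alarms.append(channel)
    return alarms

print(check_sample({"hpotp_speed": 30100.0, "mcc_pressure": 3050.0}))  # ['mcc_pressure']
```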

  14. Software engineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Hiott, Jim; Golej, Jim; Plumb, Allan

    1993-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center (JSC) created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. The latest release of the environment was in Feb. 1992.

  15. Image Registration for Stability Testing of MEMS

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Blake, Peter N.; Morey, Peter A.; Landsman, Wayne B.; Chambers, Victor J.; Moseley, Samuel H.

    2011-01-01

    Image registration, or alignment of two or more images covering the same scenes or objects, is of great interest in many disciplines such as remote sensing, medical imaging, astronomy, and computer vision. In this paper, we introduce a new application of image registration algorithms. We demonstrate how, through a wavelet-based image registration algorithm, engineers can evaluate the stability of Micro-Electro-Mechanical Systems (MEMS). In particular, we applied image registration algorithms to assess the alignment stability of the MicroShutters Subsystem (MSS) of the Near Infrared Spectrograph (NIRSpec) instrument of the James Webb Space Telescope (JWST). This work introduces a new methodology for evaluating the stability of MEMS devices to engineers as well as a new application of image registration algorithms to computer scientists.

  16. Adaptable Hydrogel Networks with Reversible Linkages for Tissue Engineering

    PubMed Central

    Wang, Huiyuan

    2015-01-01

    Adaptable hydrogels have recently emerged as a promising platform for three-dimensional (3D) cell encapsulation and culture. In conventional, covalently crosslinked hydrogels, degradation is typically required to allow complex cellular functions to occur, leading to bulk material degradation. In contrast, adaptable hydrogels are formed by reversible crosslinks. Through breaking and re-forming of the reversible linkages, adaptable hydrogels can be locally modified to permit complex cellular functions while maintaining their long-term integrity. In addition, these adaptable materials can have biomimetic viscoelastic properties that make them well suited for several biotechnology and medical applications. In this review, adaptable hydrogel design considerations and linkage selections are overviewed, with a focus on various cell compatible crosslinking mechanisms that can be exploited to form adaptable hydrogels for tissue engineering. PMID:25989348

  17. Completing the Physical Representation of Quantum Algorithms Provides a Quantitative Explanation of Their Computational Speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2018-03-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver, to whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus of the solution of the problem, onto one where she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.

  18. Target-oriented imaging of hydraulic fractures by applying the staining algorithm for downhole microseismic migration

    NASA Astrophysics Data System (ADS)

    Lin, Ye; Zhang, Haijiang; Jia, Xiaofeng

    2018-03-01

    For microseismic monitoring of hydraulic fracturing, microseismic migration can be used to image the fracture network with scattered microseismic waves. Compared with conventional microseismic location-based fracture characterization methods, microseismic migration can better constrain the stimulated reservoir volume regardless of the completeness of detected and located microseismic sources. However, the imaging results from microseismic migration may suffer from the contamination of other structures and thus the target fracture zones may not be illuminated properly. To solve this issue, in this study we propose a target-oriented staining algorithm for microseismic reverse-time migration. In the staining algorithm, the target area is first stained by constructing an imaginary velocity field and then a synchronized source wavefield only concerning the target structure is produced. As a result, a synchronized image from imaging with the synchronized source wavefield mainly contains the target structures. Synthetic tests based on a downhole microseismic monitoring system show that the target-oriented microseismic reverse-time migration method improves the illumination of target areas.

  19. Information Clustering Based on Fuzzy Multisets.

    ERIC Educational Resources Information Center

    Miyamoto, Sadaaki

    2003-01-01

    Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…

  20. Encryption and decryption using FPGA

    NASA Astrophysics Data System (ADS)

    Nayak, Nikhilesh; Chandak, Akshay; Shah, Nisarg; Karthikeyan, B.

    2017-11-01

    In this paper, we apply multiple cryptographic methods to a set of data and compare their outputs. The AES algorithm and the RSA algorithm are used. Using the AES algorithm, an 8-bit input (plain text) is encrypted with a cipher key and the result is displayed on Tera Term (serially). For simulation, a 128-bit input is operated on with a 128-bit cipher key to generate the encrypted text. The reverse operations are then performed to obtain the decrypted text. In the RSA algorithm, file handling is used to input the plain text. This text is then operated on to obtain the encrypted and decrypted data, which are then stored in a file. Finally, the results of both algorithms are compared.
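
    As an illustration of the RSA half of the comparison, a small pure-Python sketch of key generation, encryption, and the reverse decryption operation; toy key sizes are used, and this is not the FPGA implementation.

```python
# Toy RSA sketch illustrating the encrypt/decrypt reversal described above.
# Tiny primes are used for readability; this is not the FPGA implementation
# and is not secure.

p, q = 61, 53
n = p * q                      # modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient (3120)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

plain = 65                     # a small "plaintext" block (< n)
cipher = encrypt(plain)
print(cipher, decrypt(cipher) == plain)   # 2790 True
```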

  1. In-line stirling energy system

    DOEpatents

    Backhaus, Scott N [Espanola, NM; Keolian, Robert [State College, PA

    2011-03-22

    A high efficiency generator is provided using a Stirling engine to amplify an acoustic wave by heating the gas in the engine in a forward mode. The engine is coupled to an alternator to convert heat input to the engine into electricity. A plurality of the engines and respective alternators can be coupled to operate in a timed sequence to produce multi-phase electricity without the need for conversion. The engine system may be operated in a reverse mode as a refrigerator/heat pump.

  2. Digital Image Analysis Algorithm For Determination of Particle Size Distributions In Diesel Engines

    NASA Astrophysics Data System (ADS)

    Armas, O.; Ballesteros, R.; Gomez, A.

    One of the most serious problems associated with Diesel engines is pollutant emissions, most notably nitrogen oxides and particulate matter. Although current emissions standards in Europe and America for light vehicles and heavy duty engines state the particulate limit in mass units, interest in knowing the size and number of particles emitted by engines has increased recently. This interest is promoted by recent studies on the harmful health effects of particles and is reinforced by recent changes in internal combustion engine technology. This study focuses on the implementation of a method to determine the particle size distribution within the current methodology for vehicle certification in Europe. It uses an automated Digital Image Analysis Algorithm (DIAA) to determine particle size trends from Scanning Electron Microscope (SEM) images of filters loaded in a dilution system used for measuring specific particulate emissions. The experimental work was performed on a steady state direct injection Diesel engine with 0.5 MW rated power, considered a typical engine in medium power industries. Particulate size distributions obtained using the DIAA and a Scanning Mobility Particle Sizer (SMPS), nowadays considered the most reliable technique, were compared. Although the number concentration detected by this method does not represent the real flowing particle concentration, the algorithm fairly reproduces the trends observed with the SMPS when the engine load is varied.

  3. Using virtual environment for autonomous vehicle algorithm validation

    NASA Astrophysics Data System (ADS)

    Levinskis, Aleksandrs

    2018-04-01

    This paper describes the possible use of a modern game engine for validating and proving algorithm design concepts. A simple visual odometry algorithm is presented to illustrate the concept and walk through all workflow stages. Some stages involve a Kalman filter that estimates optical flow velocity as well as the position of a moving camera mounted on the vehicle body. In particular, the Unreal Engine 4 game engine is used for generating optical flow patterns and the ground truth path. For optical flow determination, the Horn and Schunck method is applied. It is shown that this method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to the ground truth, depending on the optical flow pattern. The RMS displacement error is calculated between the estimated and actual positions.

  4. Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2017-01-01

    The paper dwells on an adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances and is characterized by high accuracy of engine parameter identification in stationary and dynamic modes. The proposed on-board engine model is a state-space linearized low-level simulation. The engine health is identified through the influence coefficient matrix. The influence coefficients are determined by the GTE high-level mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squared deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is effectively used for detecting gas path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve the accuracy of the identification solutions and to ensure their stability, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ sequence was developed and tested. Analysis of the results suggests that the application of the developed algorithms allows achieving higher identification accuracy and reliability than similar models used in practice.

  5. A proposed Kalman filter algorithm for estimation of unmeasured output variables for an F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Alag, Gurbux S.; Gilyard, Glenn B.

    1990-01-01

    To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimating unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
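
    The bias-augmented filtering idea described above can be sketched as follows: the state vector is augmented with a slowly varying bias so that off-nominal behavior shows up as an estimated output bias. The scalar model and noise values below are hypothetical, not the F100 model.

```python
# Sketch of a Kalman filter with the state augmented by a measurement bias,
# illustrating the idea of modeling off-nominal engine behavior as output
# biases.  The scalar model below is hypothetical, not the F100 engine model.
import numpy as np

# Augmented state: [x, b] where x is an engine state and b a slowly varying bias.
A = np.array([[0.95, 0.0],
              [0.0,  1.0]])          # bias modeled as a random walk
H = np.array([[1.0, 1.0]])           # measurement sees state plus bias
Q = np.diag([1e-3, 1e-5])            # process noise
R = np.array([[1e-2]])               # measurement noise

x = np.zeros((2, 1))
P = np.eye(2)

def kf_step(x, P, z):
    # Predict
    x = A @ x
    P = A @ P @ A.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
true_bias = 0.5
for _ in range(200):
    z = np.array([[true_bias + rng.normal(0, 0.1)]])   # measurements of a settled engine
    x, P = kf_step(x, P, z)

print(round(float(x[1, 0]), 2))   # estimated bias, should be near 0.5
```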

  6. Simulation of a turbofan engine for evaluation of multivariable optimal control concepts [computerized simulation]

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multivariable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed, and selected results for high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.
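
    As a small illustration of linear quadratic regulator design on a reduced-order linear model, the sketch below computes an LQR gain for a hypothetical two-state system (not the actual F100-PW100 model).

```python
# Illustrative LQR gain computation for a small, hypothetical two-state linear
# model, in the spirit of the reduced-order design discussed above (not the
# actual F100-PW100 model).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-2.0, 1.0],
              [0.0, -1.5]])      # hypothetical reduced-order engine dynamics
B = np.array([[0.0],
              [1.0]])            # single control input (e.g. fuel flow)
Q = np.diag([10.0, 1.0])         # state weighting
R = np.array([[1.0]])            # control weighting

P = solve_continuous_are(A, B, Q, R)     # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P           # optimal state-feedback gain, u = -K x

print(K)
print(np.linalg.eigvals(A - B @ K))      # closed-loop poles, all in the left half-plane
```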

  7. A memory-efficient staining algorithm in 3D seismic modelling and imaging

    NASA Astrophysics Data System (ADS)

    Jia, Xiaofeng; Yang, Lu

    2017-08-01

    The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.

  8. Data Compression for Maskless Lithography Systems: Architecture, Algorithms and Implementation

    DTIC Science & Technology

    2008-05-19

    Data Compression for Maskless Lithography Systems: Architecture, Algorithms and Implementation. Vito Dai, Electrical Engineering and Computer Sciences, 2008.

  9. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives.

  10. Critical evaluation of reverse engineering tool Imagix 4D!

    PubMed

    Yadav, Rashmi; Patel, Ravindra; Kothari, Abhay

    2016-01-01

    Legacy code is difficult to comprehend. Various commercial reengineering tools are available; each has a unique working style and is equipped with its inherent capabilities and shortcomings. The focus of the available tools is on visualizing static behavior, not dynamic behavior. This makes the work of people engaged in software product maintenance, code understanding, and reengineering/reverse engineering difficult. Consequently, the need for a comprehensive reengineering/reverse engineering tool arises. We found Imagix 4D to be useful as it generates the maximum number of pictorial representations in the form of flow charts, flow graphs, class diagrams, metrics and, to a partial extent, dynamic visualizations. We evaluated Imagix 4D with the help of a case study involving a few samples of source code. The behavior of the tool was analyzed on multiple small codes and on a large code, the gcc C parser. The large-code evaluation was performed to uncover dead code, unstructured code, and the effect of not including required files at the preprocessing level. The ability of Imagix 4D to prepare decision density and complexity metrics for a large code was found to be useful in determining how much reengineering is required. On the other hand, Imagix 4D showed limitations in dynamic visualization, flow chart separation (for large code), and parsing loops. The outcome of the evaluation will eventually help in upgrading Imagix 4D and points to the need for full-featured tools in the area of software reengineering/reverse engineering. It will also help the research community, especially those interested in the realm of software reengineering tool building.

  11. Computing Cooling Flows in Turbines

    NASA Technical Reports Server (NTRS)

    Gauntner, J.

    1986-01-01

    Algorithm developed for calculating both quantity of compressor bleed flow required to cool turbine and resulting decrease in efficiency due to cooling air injected into gas stream. Program intended for use with axial-flow, air-breathing, jet-propulsion engines with variety of airfoil-cooling configurations. Algorithm results compared extremely well with figures given by major engine manufacturers for given bulk-metal temperatures and cooling configurations. Program written in FORTRAN IV for batch execution.

  12. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    PubMed Central

    Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen

    2018-01-01

    The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635
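
    One plausible reading of the adaptive amplitude control factor described above is sketched below: F shrinks as the iteration count grows, scaled by a Metropolis-like exponential of the fitness change. This is an interpretation for illustration, not the authors' exact formula.

```python
# Hedged sketch: an amplitude control factor F that is adaptively decreased as
# iterations increase, using a Metropolis-like exponential of the fitness
# change.  One plausible reading of the BSAISA idea, not the authors' formula.
import math

def amplitude_factor(iteration, max_iter, delta_fitness):
    """F shrinks as the 'temperature' 1 - t/T falls and as worse moves appear."""
    temperature = max(1.0 - iteration / max_iter, 1e-6)
    return math.exp(-max(delta_fitness, 0.0) / temperature) * temperature

# F decays over the run, so exploration is wide early and narrow late.
for it in (0, 250, 499):
    print(it, round(amplitude_factor(it, 500, delta_fitness=0.1), 4))
```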

  13. Diagnosis of Chronic Kidney Disease Based on Support Vector Machine by Feature Selection Methods.

    PubMed

    Polat, Huseyin; Danaei Mehr, Homay; Cetin, Aydin

    2017-04-01

    As Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only cure to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depends on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, the Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. To diagnose Chronic Kidney Disease, two essential types of feature selection methods, namely wrapper and filter approaches, were chosen to reduce the dimension of the Chronic Kidney Disease dataset. In the wrapper approach, the classifier subset evaluator with the greedy stepwise search engine and the wrapper subset evaluator with the Best First search engine were used. In the filter approach, the correlation feature selection subset evaluator with the greedy stepwise search engine and the filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier using the filtered subset evaluator with the Best First search engine feature selection method has a higher accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease compared to the other selected methods.
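
    A minimal sketch of a filter-style feature selection plus SVM pipeline in the spirit of the study above, using scikit-learn on synthetic data; the actual study used the CKD dataset and WEKA's subset evaluators and search methods, which are not reproduced here.

```python
# Minimal sketch of a filter-based feature selection + SVM pipeline in the
# spirit of the study above, using scikit-learn on synthetic data.  The actual
# study used the CKD dataset and WEKA's subset evaluators, not this code.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=24, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),   # filter approach: keep the 10 best-scoring features
    SVC(kernel="rbf", C=1.0),
)
model.fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))   # classification accuracy on held-out data
```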

  14. Detecting Solenoid Valve Deterioration in In-Use Electronic Diesel Fuel Injection Control Systems

    PubMed Central

    Tsai, Hsun-Heng; Tseng, Chyuan-Yow

    2010-01-01

    The diesel engine is the main power source for most agricultural vehicles. The control of diesel engine emissions is an important global issue. Fuel injection control systems directly affect the fuel efficiency and emissions of diesel engines. Deterioration faults, such as rack deformation, solenoid valve failure, and rack-travel sensor malfunction, may occur in the fuel injection module of electronic diesel control (EDC) systems. Among these faults, solenoid valve failure is the most likely to occur in in-use diesel engines. According to previous studies, this failure results from wear of the plunger and sleeve, caused by a long period of usage, lubricant degradation, or engine overheating. Due to the difficulty in identifying solenoid valve deterioration, this study focuses on developing a sensor identification algorithm that can clearly classify the usability of the solenoid valve without disassembling the fuel pump of an EDC system for in-use agricultural vehicles. A diagnostic algorithm is proposed, including a feedback controller, a parameter identifier, a linear variable differential transformer (LVDT) sensor, and a neural network classifier. Experimental results show that the proposed algorithm can accurately identify the usability of solenoid valves. PMID:22163597

  15. Detecting solenoid valve deterioration in in-use electronic diesel fuel injection control systems.

    PubMed

    Tsai, Hsun-Heng; Tseng, Chyuan-Yow

    2010-01-01

    The diesel engine is the main power source for most agricultural vehicles. The control of diesel engine emissions is an important global issue. Fuel injection control systems directly affect the fuel efficiency and emissions of diesel engines. Deterioration faults, such as rack deformation, solenoid valve failure, and rack-travel sensor malfunction, may occur in the fuel injection module of electronic diesel control (EDC) systems. Among these faults, solenoid valve failure is the most likely to occur in in-use diesel engines. According to previous studies, this failure results from wear of the plunger and sleeve, caused by a long period of usage, lubricant degradation, or engine overheating. Due to the difficulty in identifying solenoid valve deterioration, this study focuses on developing a sensor identification algorithm that can clearly classify the usability of the solenoid valve without disassembling the fuel pump of an EDC system for in-use agricultural vehicles. A diagnostic algorithm is proposed, including a feedback controller, a parameter identifier, a linear variable differential transformer (LVDT) sensor, and a neural network classifier. Experimental results show that the proposed algorithm can accurately identify the usability of solenoid valves.

  16. Sample-based engine noise synthesis using an enhanced pitch-synchronous overlap-and-add method.

    PubMed

    Jagla, Jan; Maillard, Julien; Martin, Nadine

    2012-11-01

    An algorithm for the real-time synthesis of internal combustion engine noise is presented. Through the analysis of a recorded engine noise signal of continuously varying engine speed, a dataset of sound samples is extracted, allowing the real-time synthesis of the noise induced by arbitrary evolutions of engine speed. The sound samples are extracted from a recording spanning the entire engine speed range. Each sample is delimited so as to contain the sound emitted during one cycle of the engine plus the necessary overlap to ensure smooth transitions during the synthesis. The proposed approach, an extension of the PSOLA method introduced for speech processing, takes advantage of the specific periodicity of engine noise signals to locate the extraction instants of the sound samples. During the synthesis stage, the sound samples corresponding to the target engine speed evolution are concatenated with an overlap-and-add algorithm. It is shown that this method produces high quality audio restitution with a low computational load. It is therefore well suited for real-time applications.
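
    The overlap-and-add step at the heart of the synthesis stage can be sketched as follows; this is a generic OLA concatenation with a Hann cross-fade, not the authors' enhanced pitch-synchronous method.

```python
# Generic overlap-and-add (OLA) concatenation sketch with a Hann-window
# cross-fade, illustrating the synthesis stage described above.  The authors'
# enhanced pitch-synchronous method is more elaborate than this.
import numpy as np

def overlap_add(samples, overlap):
    """Concatenate 1-D sound samples, cross-fading 'overlap' points between them."""
    fade = 0.5 * (1 - np.cos(np.pi * np.arange(overlap) / overlap))  # 0 -> 1 ramp
    out = np.array(samples[0], dtype=float)
    for s in samples[1:]:
        s = np.asarray(s, dtype=float)
        head, tail = out[:-overlap], out[-overlap:]
        blended = tail * (1 - fade) + s[:overlap] * fade
        out = np.concatenate([head, blended, s[overlap:]])
    return out

# Two toy "engine cycle" samples joined with a 32-point cross-fade.
t = np.linspace(0, 1, 256, endpoint=False)
cycle_a = np.sin(2 * np.pi * 5 * t)
cycle_b = np.sin(2 * np.pi * 7 * t)
y = overlap_add([cycle_a, cycle_b], overlap=32)
print(len(y))   # 256 + 256 - 32 = 480
```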

  17. Reverse innovation in maternal health.

    PubMed

    Firoz, Tabassum; Makanga, Prestige Tatenda; Nathan, Hannah L; Payne, Beth; Magee, Laura A

    2017-09-01

    Reverse innovation, defined as the flow of ideas from low- to high-income settings, is gaining traction in healthcare. With an increasing focus on value, investing in low-cost but effective and innovative solutions can be of mutual benefit to both high- and low-income countries. Reverse innovation has a role in addressing maternal health challenges in high-income countries by harnessing these innovative solutions for vulnerable populations especially in rural and remote regions. In this paper, we present three examples of 'reverse innovation' for maternal health: a low-cost, easy-to-use blood pressure device (CRADLE), a diagnostic algorithm (mini PIERS) and accompanying mobile app (PIERS on the Move), and a novel method for mapping maternal outcomes (MOM).

  18. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    PubMed

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.

  19. Deciphering the Minimal Algorithm for Development and Information-genesis

    NASA Astrophysics Data System (ADS)

    Li, Zhiyuan; Tang, Chao; Li, Hao

    During development, cells with identical genomes acquire different fates in a highly organized manner. In order to decipher the principles underlying development, we used C. elegans as the model organism. Based on a large set of microscopy images, we first constructed a ``standard worm'' in silico: from the single zygotic cell to about the 500-cell stage, the lineage, position, cell-cell contacts and gene expression dynamics are quantified for each cell in order to investigate the principles underlying these extensive data. Next, we reverse-engineered the possible gene-gene/cell-cell interaction rules that are capable of running a dynamic model recapitulating the early fate decisions during C. elegans development. We further formalized the C. elegans embryogenesis in the language of information genesis. Analysis of the data and the model uncovered the global landscape of development in the cell fate space, suggested possible gene regulatory architectures and cell signaling processes, revealed diversity and robustness as the essential trade-offs in development, and demonstrated general strategies for building multicellular organisms.

  20. A flexible new method for 3D measurement based on multi-view image sequences

    NASA Astrophysics Data System (ADS)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is a basic part of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which is robust to weakly textured images. Then a new three-principle filter for the calculation of the essential matrix is designed, and the essential matrix is calculated using an improved a contrario RANSAC filtering method. A single-view point cloud is constructed accurately from two view images; after this, the overlapping features are used to eliminate the accumulated errors caused by added view images, which improves the camera position precision. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
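
    The Hellinger-kernel distance used above for descriptor matching can be sketched generically; this is not the authors' full SIFT matching pipeline.

```python
# Sketch of Hellinger-distance matching between normalized descriptor
# histograms, as used above in place of the Euclidean distance.  Generic code,
# not the authors' full SIFT matching pipeline.
import numpy as np

def hellinger_distance(h1, h2):
    """Hellinger distance between two L1-normalized histograms."""
    h1 = h1 / (h1.sum() + 1e-12)
    h2 = h2 / (h2.sum() + 1e-12)
    return np.sqrt(0.5 * np.sum((np.sqrt(h1) - np.sqrt(h2)) ** 2))

def match(desc_a, desc_b):
    """For each descriptor in desc_a, return the index of its nearest match in desc_b."""
    return [int(np.argmin([hellinger_distance(a, b) for b in desc_b])) for a in desc_a]

rng = np.random.default_rng(0)
desc_b = rng.random((5, 128))                            # five 128-bin descriptors
desc_a = desc_b[[2, 0]] + 0.01 * rng.random((2, 128))    # noisy copies of #2 and #0
print(match(desc_a, desc_b))                             # [2, 0]
```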

  1. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
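
    As an illustration of the kind of iterative solver discussed above, here is a short sequential conjugate gradient sketch for a sparse symmetric positive-definite system; the cited work maps these matrix-vector and vector operations onto parallel array architectures.

```python
# Conjugate gradient sketch for a sparse, symmetric positive-definite system
# arising from a 1-D finite-difference Laplacian.  Sequential illustration of
# the iterative solvers discussed above; the cited work parallelizes the
# matrix-vector and vector operations across processors.
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 50
def laplacian_1d(v):                 # tridiagonal [-1, 2, -1] operator, matrix-free
    out = 2 * v.copy()
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = conjugate_gradient(laplacian_1d, b)
print(round(float(np.linalg.norm(laplacian_1d(x) - b)), 8))   # residual ~0.0
```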

  2. A shape-based inter-layer contours correspondence method for ICT-based reverse engineering

    PubMed Central

    Duan, Liming; Yang, Shangpeng; Zhang, Gui; Feng, Fei; Gu, Minghui

    2017-01-01

    The correspondence of a stack of planar contours in ICT (industrial computed tomography)-based reverse engineering, a key step in surface reconstruction, is difficult when the contours or topology of the object are complex. Given the regularity of industrial parts and similarity of the inter-layer contours, a specialized shape-based inter-layer contours correspondence method for ICT-based reverse engineering was presented to solve the above problem based on the vectorized contours. In this paper, the vectorized contours extracted from the slices consist of three graphical primitives: circles, arcs and segments. First, the correspondence of the inter-layer primitives is conducted based on the characteristics of the primitives. Second, based on the corresponded primitives, the inter-layer contours correspond with each other using the proximity rules and exhaustive search. The proposed method can make full use of the shape information to handle industrial parts with complex structures. The feasibility and superiority of this method have been demonstrated via the related experiments. This method can play an instructive role in practice and provide a reference for the related research. PMID:28489867

  3. A shape-based inter-layer contours correspondence method for ICT-based reverse engineering.

    PubMed

    Duan, Liming; Yang, Shangpeng; Zhang, Gui; Feng, Fei; Gu, Minghui

    2017-01-01

    The correspondence of a stack of planar contours in ICT (industrial computed tomography)-based reverse engineering, a key step in surface reconstruction, is difficult when the contours or topology of the object are complex. Given the regularity of industrial parts and similarity of the inter-layer contours, a specialized shape-based inter-layer contours correspondence method for ICT-based reverse engineering was presented to solve the above problem based on the vectorized contours. In this paper, the vectorized contours extracted from the slices consist of three graphical primitives: circles, arcs and segments. First, the correspondence of the inter-layer primitives is conducted based on the characteristics of the primitives. Second, based on the corresponded primitives, the inter-layer contours correspond with each other using the proximity rules and exhaustive search. The proposed method can make full use of the shape information to handle industrial parts with complex structures. The feasibility and superiority of this method have been demonstrated via the related experiments. This method can play an instructive role in practice and provide a reference for the related research.

  4. A Solution Method of Job-shop Scheduling Problems by the Idle Time Shortening Type Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ida, Kenichi; Osawa, Akira

    In this paper, we propose a new idle-time shortening method for job-shop scheduling problems (JSPs) and embed it in a genetic algorithm (GA). The goal of the JSP is to find a schedule with the minimum makespan, and reducing machine idle time is an effective way to improve the makespan. The left shift is the best-known existing algorithm for shortening idle time, but it cannot always move an operation into an idle interval, so some idle time remains. We propose two algorithms that shorten such residual idle time and combine them with the reversal of a schedule. Applying the GA with these algorithms to benchmark problems demonstrates their effectiveness.
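
    A left-shift-style move of this kind can be sketched as follows; the schedule representation and the gap search are illustrative assumptions, not the authors' two proposed shortening algorithms.

    ```python
    def idle_intervals(machine_ops, horizon):
        """Idle intervals on one machine, given operations as (start, end) pairs."""
        gaps, t = [], 0
        for start, end in sorted(machine_ops):
            if start > t:
                gaps.append((t, start))
            t = max(t, end)
        if t < horizon:
            gaps.append((t, horizon))
        return gaps

    def find_left_shift(op_duration, earliest_start, machine_ops, horizon):
        """Earliest idle gap into which an operation of the given duration can be
        moved without starting before earliest_start (the completion time of its
        job predecessor); returns the new start time or None."""
        for gap_start, gap_end in idle_intervals(machine_ops, horizon):
            start = max(gap_start, earliest_start)
            if start + op_duration <= gap_end:
                return start
        return None
    ```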

  5. In-depth analysis of protein inference algorithms using multiple search engines and well-defined metrics.

    PubMed

    Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset

    2017-01-06

    In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available to the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated within a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparation, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engines, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only in the number of reported protein groups but also in the composition of those groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics today. Currently, a vast number of protein inference algorithms and implementations are available to the proteomics community. Protein assembly impacts the final results of a study, the quantitation values, and the final claims of the research manuscript. Even though protein inference is a crucial step in proteomics data analysis, a comprehensive evaluation of the many different inference methods had never been performed. The Journal of Proteomics has previously published multiple benchmark studies of bioinformatics algorithms in proteomics (PMID: 26585461; PMID: 22728601), making clear the importance of such studies for the proteomics community and the journal audience. This manuscript presents a new bioinformatics solution based on the KNIME/OpenMS platform that aims at providing a fair comparison of protein inference algorithms (https://github.com/KNIME-OMICS). Five algorithms - ProteinProphet, MSBayesPro, ProteinLP, Fido and PIA - were evaluated using the highly customizable workflow on four public datasets with varying complexities. Three popular database search engines - Mascot, X!Tandem and MS-GF+ - and combinations thereof were evaluated for every protein inference tool. In total, more than 186 protein lists were analyzed and carefully compared using three metrics for quality assessment of the protein inference results: 1) the number of reported proteins, 2) the number of peptides per protein, and 3) the number of uniquely reported proteins per inference method. We also examined how many proteins were reported for each combination of search engines, protein inference algorithm, and parameters on each dataset. The results show that 1) using PIA or Fido seems to be a good choice for the analyzed workflows, regarding not only the reported proteins and the high-quality identifications but also the required runtime; 2) merging the identifications of multiple search engines almost always gives more confident results and increases the number of peptides per protein group; 3) using databases that contain not only the canonical sequences but also known protein isoforms has a small impact on the number of reported proteins, and depending on the question behind the study, the detection of specific isoforms can compensate for the slightly shorter parsimonious reports; 4) the current workflow can easily be extended to support new algorithms and search engine combinations. Copyright © 2016. Published by Elsevier B.V.
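
    As background for the parsimony-style grouping that several of these tools perform, a minimal greedy set-cover sketch is given below; it is an illustration only, not the algorithm used by PIA, Fido, or the other evaluated tools, and the data structures are hypothetical.

    ```python
    def greedy_parsimony(peptides_to_proteins):
        """Minimal parsimonious protein list via greedy set cover.

        peptides_to_proteins maps each identified peptide to the set of proteins
        that could have produced it.  Proteins are selected greedily until every
        peptide is explained by at least one reported protein."""
        proteins_to_peptides = {}
        for pep, prots in peptides_to_proteins.items():
            for prot in prots:
                proteins_to_peptides.setdefault(prot, set()).add(pep)

        uncovered = set(peptides_to_proteins)
        reported = []
        while uncovered:
            # pick the protein explaining the largest number of uncovered peptides
            best = max(proteins_to_peptides,
                       key=lambda p: len(proteins_to_peptides[p] & uncovered))
            gained = proteins_to_peptides[best] & uncovered
            if not gained:
                break
            reported.append(best)
            uncovered -= gained
        return reported
    ```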

  6. A Multistrategy Optimization Improved Artificial Bee Colony Algorithm

    PubMed Central

    Liu, Wen

    2014-01-01

    To address the premature convergence and slow convergence rate of the artificial bee colony algorithm, an improved algorithm is proposed. A chaotic reverse learning strategy is used to initialize the swarm, improving the global search ability of the algorithm and preserving its diversity; the similarity between individuals is used to characterize population diversity, and this diversity measure serves as an indicator for dynamically and adaptively adjusting the nectar positions, effectively avoiding premature and local convergence. A dual-population search mechanism is introduced in the search stage, and the parallel search of the two populations considerably improves the convergence rate. Simulation experiments on 10 standard test functions, compared with other algorithms, show that the improved algorithm converges faster and escapes local optima more readily.
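
    One common reading of a chaotic reverse-learning initialization is opposition-based learning seeded by a logistic chaotic map; the sketch below follows that reading as an assumption and is not necessarily the paper's exact scheme.

    ```python
    import numpy as np

    def chaotic_opposition_init(n_food, dim, lower, upper, objective=None, seed=0.7):
        """Initialize ABC food sources with a logistic chaotic map, then add the
        'reverse' (opposition) candidates and keep the best half by objective value."""
        x = np.empty((n_food, dim))
        c = seed
        for i in range(n_food):              # logistic chaotic sequence in (0, 1)
            for j in range(dim):
                c = 4.0 * c * (1.0 - c)
                x[i, j] = c
        pop = lower + x * (upper - lower)
        opp = lower + upper - pop             # opposition (reverse) candidates
        both = np.vstack([pop, opp])
        if objective is None:
            return both
        values = np.array([objective(v) for v in both])
        return both[np.argsort(values)[:n_food]]   # keep the best half (minimization)
    ```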

  7. Development and applications of various optimization algorithms for diesel engine combustion and emissions optimization

    NASA Astrophysics Data System (ADS)

    Ogren, Ryan M.

    For this work, Hybrid PSO-GA and Artificial Bee Colony Optimization (ABC) algorithms are applied to the optimization of experimental diesel engine performance, to meet Environmental Protection Agency, off-road, diesel engine standards. This work is the first to apply ABC optimization to experimental engine testing. All trials were conducted at partial load on a four-cylinder, turbocharged, John Deere engine using neat biodiesel for PSO-GA and regular pump diesel for ABC. Key variables were altered throughout the experiments, including fuel pressure, intake gas temperature, exhaust gas recirculation flow, fuel injection quantity for two injections, pilot injection timing, and main injection timing. Both forms of optimization proved effective for optimizing engine operation. The PSO-GA hybrid was able to find a superior solution to that of ABC within fewer engine runs. Both solutions call for high exhaust gas recirculation to reduce oxides of nitrogen (NOx) emissions while also moving pilot and main fuel injections to near top dead center for improved tradeoffs between NOx and particulate matter.

  8. Cascade Optimization for Aircraft Engines With Regression and Neural Network Analysis - Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples-a supersonic mixed-flow turbofan engine and a subsonic waverotor-topped engine-to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.

  9. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    NASA Astrophysics Data System (ADS)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2017-02-01

    In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems because of their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and an extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, in reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation scheme to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations show that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and the other existing algorithms for generating OGRs for optical WDM systems. The PHMOBA also has a higher convergence and success rate than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, compared with 85% for the original MOBA. Finally, the implications for further research are discussed.
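
    Independent of the optimization algorithm, the Golomb property itself is easy to verify; the small helper below checks that all pairwise mark differences are distinct and reports the ruler length.

    ```python
    from itertools import combinations

    def is_golomb_ruler(marks):
        """True if all pairwise differences between marks are distinct."""
        diffs = [b - a for a, b in combinations(sorted(marks), 2)]
        return len(diffs) == len(set(diffs))

    def ruler_length(marks):
        return max(marks) - min(marks)

    # Example: the optimal 4-mark ruler [0, 1, 4, 6] has length 6.
    assert is_golomb_ruler([0, 1, 4, 6]) and ruler_length([0, 1, 4, 6]) == 6
    ```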

  10. Where Are All the Women Engineers? An Insider's View of Socialization and Power in Engineering Education

    ERIC Educational Resources Information Center

    Christman, Jeanne

    2017-01-01

    Despite more than thirty years of the underrepresentation of women in engineering being a persistent concern, research on the cause of the problem has not been successful in reversing the trend. A plethora of theories as to why females are not entering engineering exist, yet they only address issues on the surface and do not attend to a…

  11. Natural tooth intrusion and reversal in implant-assisted prosthesis: evidence of and a hypothesis for the occurrence.

    PubMed

    Sheets, C G; Earthmann, J C

    1993-12-01

    Based on clinical observation, a hypothesis of the mechanism of intrusion of natural teeth in an implant-assisted prosthesis is suggested. Engineering principles are presented that establish an energy absorption model as it relates to the implant-assisted prosthesis. In addition, in the course of patient treatment it has been discovered that the intrusion of natural teeth can be reversed. Patient histories that demonstrate intrusion reversal are reviewed. The possible mechanisms for the intrusion/reversal phenomenon are presented and preventative recommendations are given.

  12. An Exact Efficiency Formula for Holographic Heat Engines

    DOE PAGES

    Johnson, Clifford

    2016-03-31

    Further consideration is given to the efficiency of a class of black hole heat engines that perform mechanical work via the pdV terms present in the First Law of extended gravitational thermodynamics. It is noted that, when the engine cycle is a rectangle with sides parallel to the (p,V) axes, the efficiency can be written simply in terms of the mass of the black hole evaluated at the corners. Since an arbitrary cycle can be approximated to any desired accuracy by a tiling of rectangles, a general geometrical algorithm for computing the efficiency of such a cycle follows. Finally, a simple generalization of the algorithm renders it applicable to broader classes of heat engine, even beyond the black hole context.

  13. Artificial intelligence techniques for ground test monitoring of rocket engines

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Gupta, U. K.

    1990-01-01

    An expert system is being developed which can detect anomalies in Space Shuttle Main Engine (SSME) sensor data significantly earlier than the redline algorithm currently in use. The training of such an expert system focuses on two approaches which are based on low frequency and high frequency analyses of sensor data. Both approaches are being tested on data from SSME tests and their results compared with the findings of NASA and Rocketdyne experts. Prototype implementations have detected the presence of anomalies earlier than the redline algorithms that are in use currently. It therefore appears that these approaches have the potential of detecting anomalies early enough to shut down the engine or take other corrective action before severe damage to the engine occurs.

  14. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
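
    A numpy-only sketch of the overall flow is given below, with a placeholder standing in for the predefined compression/decompression pair (JPEG or wavelet) and a simple unsharp mask as the sharpening step; the block-average decimation and replication-based interpolation are illustrative simplifications of the patent.

    ```python
    import numpy as np

    def decimate(img, factor=2):
        """Reduce the image in both dimensions by simple block averaging."""
        h, w = img.shape
        h, w = h - h % factor, w - w % factor
        return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def interpolate(img, factor=2):
        """Expand the image back to its original size by pixel replication."""
        return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

    def unsharp_mask(img, amount=1.0):
        """Sharpen edges by adding back the difference from a 3x3 box blur."""
        pad = np.pad(img, 1, mode='edge')
        blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
        return img + amount * (img - blur)

    def transmit(img):
        # Placeholder for the predefined compression / decompression pair
        # (e.g. JPEG or a wavelet codec) applied to the decimated image.
        return img

    reconstructed = unsharp_mask(interpolate(transmit(decimate(np.random.rand(64, 64)))))
    ```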

  15. Low Power S-Box Architecture for AES Algorithm using Programmable Second Order Reversible Cellular Automata: An Application to WBAN.

    PubMed

    Gangadari, Bhoopal Rao; Ahamed, Shaik Rafi

    2016-12-01

    In this paper, we present a novel low-energy architecture for the S-Box used in the Advanced Encryption Standard (AES) algorithm, based on programmable second-order reversible cellular automata (RCA²). The architecture entails a low-power implementation with minimal delay overhead, and the security of the proposed RCA²-based S-Box is evaluated using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy; the proposed architecture is found to be secure enough for cryptographic applications. Moreover, simulation studies of the proposed AES architecture show an energy consumption of 68.726 nJ and power dissipation of 3.856 mW for the 0.18-μm process at 13.69 MHz, and an energy consumption of 29.408 nJ and power dissipation of 1.65 mW for the 0.13-μm process at 13.69 MHz. The proposed AES algorithm with the RCA²-based S-Box shows a reduction in power consumption of 50% and in energy consumption of 5% compared with the best classical S-Box and composite-field-arithmetic-based AES implementations. It is also shown that RCA²-based S-Boxes are dynamic in nature, invertible, and have lower power dissipation than LUT-based S-Boxes, and are hence suitable for Wireless Body Area Network (WBAN) applications.
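
    The reversibility that such S-Boxes rely on comes from the second-order cellular automaton construction, in which the next state is the local rule applied to the current state XORed with the previous state; the generic sketch below illustrates that construction and its exact inverse, not the paper's specific S-Box.

    ```python
    import numpy as np

    def rule_step(state):
        """Any (possibly irreversible) local rule; here: XOR of left and right neighbours."""
        return np.roll(state, 1) ^ np.roll(state, -1)

    def second_order_step(prev, curr):
        """Second-order update: next = f(curr) XOR prev.  This is reversible because
        prev = f(curr) XOR next, so the evolution can be run backwards exactly."""
        return curr, rule_step(curr) ^ prev

    def second_order_step_back(curr, nxt):
        return rule_step(curr) ^ nxt, curr

    # Round-trip check on a random binary state pair.
    rng = np.random.default_rng(1)
    a, b = rng.integers(0, 2, 16), rng.integers(0, 2, 16)
    c, d = second_order_step(a, b)
    assert np.array_equal(second_order_step_back(c, d)[0], a)
    ```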

  16. Self-recovery reversible image watermarking algorithm

    PubMed Central

    Sun, He; Gao, Shangbing; Jin, Shenghua

    2018-01-01

    The integrity of image content is essential, yet most watermarking algorithms can achieve image authentication but cannot automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover tampered areas effectively. First, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image texture. Finally, the recovery watermark generated from homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated from non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. Correlation attacks are detected by invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover the tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
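
    As a toy illustration of the recovery-watermark idea, the sketch below computes per-block features (quantized 4×4 block means) and flags blocks whose recomputed feature disagrees with the extracted watermark; the embedding/mapping and error-correcting codes of the actual algorithm are omitted.

    ```python
    import numpy as np

    def block_features(img, size=4):
        """Mean intensity of each non-overlapping size x size block,
        quantized to 6 bits, used here as a toy recovery watermark."""
        h, w = img.shape
        h, w = h - h % size, w - w % size
        blocks = img[:h, :w].reshape(h // size, size, w // size, size)
        return (blocks.mean(axis=(1, 3)) / 4).astype(np.uint8)   # 0..63 for 8-bit input

    def tampered_blocks(received_img, recovered_watermark, size=4, tol=1):
        """Block indices whose recomputed feature disagrees with the extracted watermark."""
        current = block_features(received_img, size)
        return np.argwhere(np.abs(current.astype(int) - recovered_watermark.astype(int)) > tol)
    ```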

  17. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system which is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line. Hopes are that it will be able to provide some additional benefits above and beyond those of PSC. The PSC algorithm is computationally intensive, it is valid only at near steady-state flight conditions, and it has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller. Specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware is described and preliminary neural network training results are presented.

  18. NARMAX model identification of a palm oil biodiesel engine using multi-objective optimization differential evolution

    NASA Astrophysics Data System (ADS)

    Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin

    2017-09-01

    This paper presents the black-box modelling of a palm oil biodiesel engine (POB) using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered: minimizing the number of terms in the model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied to validate the candidate models obtained from the MOODE algorithm, leading to the selection of an optimal model.

  19. 40 CFR 86.1809-12 - Prohibition of defeat devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and... manufacturer must submit, with the Part II certification application, an engineering evaluation demonstrating... vehicles, the engineering evaluation must also include particulate emissions. [75 FR 25685, May 7, 2010] ...

  20. Thermoelectric energy converters under a trade-off figure of merit with broken time-reversal symmetry

    NASA Astrophysics Data System (ADS)

    Iyyappan, I.; Ponmurugan, M.

    2017-09-01

    We study the performance of a three-terminal thermoelectric device, such as a heat engine and a refrigerator, with broken time-reversal symmetry by applying the unified trade-off figure of merit (Ω̇ criterion), which accounts for both useful energy and losses. For the heat engine, we find that a thermoelectric device working under the maximum Ω̇ criterion gives a significantly better performance than a device working at maximum power output. Within the framework of linear irreversible thermodynamics such a direct comparison is not possible for refrigerators; however, our study indicates that, for a refrigerator, the maximum cooling load gives a better performance than the maximum Ω̇ criterion for a larger asymmetry. Our results can be useful for choosing a suitable optimization criterion for operating a real thermoelectric device with broken time-reversal symmetry.

  1. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    Only the report documentation page is available for this record. Recoverable fields: title, Parallel Algorithms for Image Analysis (technical report TR-1180); author, Azriel Rosenfeld; grant, AFOSR-77-3271; keywords: image processing; image analysis; parallel processing; cellular computers.

  2. A Feasibility Study of Life-Extending Controls for Aircraft Turbine Engines Using a Generic Air Force Model (Preprint)

    DTIC Science & Technology

    2006-12-01

    Only abstract fragments are available for this record: the work concerns a life-extending control (LEC) involving control action, engine component life usage, and an intelligent control algorithm embedded in the FADEC (Full-Authority Digital Electronic Controller); the paper evaluates the LEC, based on critical components research, and uses simulation code for each simulator, one of which is typically configured to operate as a FADEC.

  3. Inferring gene expression from ribosomal promoter sequences, a crowdsourcing approach

    PubMed Central

    Meyer, Pablo; Siwo, Geoffrey; Zeevi, Danny; Sharon, Eilon; Norel, Raquel; Segal, Eran; Stolovitzky, Gustavo; Siwo, Geoffrey; Rider, Andrew K.; Tan, Asako; Pinapati, Richard S.; Emrich, Scott; Chawla, Nitesh; Ferdig, Michael T.; Tung, Yi-An; Chen, Yong-Syuan; Chen, Mei-Ju May; Chen, Chien-Yu; Knight, Jason M.; Sahraeian, Sayed Mohammad Ebrahim; Esfahani, Mohammad Shahrokh; Dreos, Rene; Bucher, Philipp; Maier, Ezekiel; Saeys, Yvan; Szczurek, Ewa; Myšičková, Alena; Vingron, Martin; Klein, Holger; Kiełbasa, Szymon M.; Knisley, Jeff; Bonnell, Jeff; Knisley, Debra; Kursa, Miron B.; Rudnicki, Witold R.; Bhattacharjee, Madhuchhanda; Sillanpää, Mikko J.; Yeung, James; Meysman, Pieter; Rodríguez, Aminael Sánchez; Engelen, Kristof; Marchal, Kathleen; Huang, Yezhou; Mordelet, Fantine; Hartemink, Alexander; Pinello, Luca; Yuan, Guo-Cheng

    2013-01-01

    The Gene Promoter Expression Prediction challenge consisted of predicting gene expression from promoter sequences in a previously unknown experimentally generated data set. The challenge was presented to the community in the framework of the sixth Dialogue for Reverse Engineering Assessments and Methods (DREAM6), a community effort to evaluate the status of systems biology modeling methodologies. Nucleotide-specific promoter activity was obtained by measuring fluorescence from promoter sequences fused upstream of a gene for yellow fluorescence protein and inserted in the same genomic site of yeast Saccharomyces cerevisiae. Twenty-one teams submitted results predicting the expression levels of 53 different promoters from yeast ribosomal protein genes. Analysis of participant predictions shows that accurate values for low-expressed and mutated promoters were difficult to obtain, although in the latter case, only when the mutation induced a large change in promoter activity compared to the wild-type sequence. As in previous DREAM challenges, we found that aggregation of participant predictions provided robust results, but did not fare better than the three best algorithms. Finally, this study not only provides a benchmark for the assessment of methods predicting activity of a specific set of promoters from their sequence, but it also shows that the top performing algorithm, which used machine-learning approaches, can be improved by the addition of biological features such as transcription factor binding sites. PMID:23950146

  4. A shrinking hypersphere PSO for engineering optimisation problems

    NASA Astrophysics Data System (ADS)

    Yadav, Anupam; Deep, Kusum

    2016-03-01

    Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach to solving COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the movement of each particle is governed by shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of the SHPSO is compared against the state-of-the-art algorithms on a set of 24 benchmark problems. An exhaustive comparison of the results is provided statistically as well as graphically. Moreover, three engineering design problems, namely the welded beam design, compression spring design, and pressure vessel design problems, are solved using SHPSO and the results are compared with the state-of-the-art algorithms.
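
    For reference, a plain global-best PSO loop is sketched below; it shows the baseline update that SHPSO modifies and does not include the shrinking-hypersphere movement or the parameter-free constraint handling.

    ```python
    import numpy as np

    def pso_minimize(f, dim, lower, upper, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Standard global-best particle swarm optimisation for box-constrained minimisation."""
        rng = np.random.default_rng(0)
        x = rng.uniform(lower, upper, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
        g = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
            x = np.clip(x + v, lower, upper)                        # position update
            vals = np.array([f(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    best_x, best_f = pso_minimize(lambda p: np.sum(p**2), dim=5, lower=-5.0, upper=5.0)
    ```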

  5. Experimental studies of applications of time-reversal acoustics to noncoherent underwater communications.

    PubMed

    Heinemann, M; Larraza, A; Smith, K B

    2003-06-01

    The most difficult problem in shallow underwater acoustic communications is considered to be the time-varying multipath propagation because it impacts negatively on data rates. At high data rates the intersymbol interference requires adaptive algorithms on the receiver side that lead to computationally intensive and complex signal processing. A novel technique called time-reversal acoustics (TRA) can environmentally adapt the acoustic propagation effects of a complex medium in order to focus energy at a particular target range and depth. Using TRA, the multipath structure is reduced because all the propagation paths add coherently at the intended target location. This property of time-reversal acoustics suggests a potential application in the field of noncoherent acoustic communications. This work presents results of a tank scale experiment using an algorithm for rapid transmission of binary data in a complex underwater environment with the TRA approach. A simple 15-symbol code provides an example of the simplicity and feasibility of the approach. Covert coding due to the inherent scrambling induced by the environment at points other than the intended receiver is also investigated. The experiments described suggest a high potential in data rate for the time-reversal approach in underwater acoustic communications while keeping the computational complexity low.

  6. Thrust reverser design studies for an over-the-wing STOL transport

    NASA Technical Reports Server (NTRS)

    Ammer, R. C.; Sowers, H. D.

    1977-01-01

    Aerodynamic and acoustics analytical studies were conducted to evaluate three thrust reverser designs for potential use on commercial over-the-wing STOL transports. The concepts were: (1) integral D nozzle/target reverser, (2) integral D nozzle/top arc cascade reverser, and (3) post exit target reverser integral with wing. Aerodynamic flowpaths and kinematic arrangements for each concept were established to provide a 50% thrust reversal capability. Analytical aircraft stopping distance/noise trade studies conducted concurrently with flow path design showed that these high efficiency reverser concepts are employed at substantially reduced power settings to meet noise goals of 100 PNdB on a 152.4 m sideline and still meet 609.6 m landing runway length requirements. From an overall installation standpoint, only the integral D nozzle/target reverser concept was found to penalize nacelle cruise performance; for this concept a larger nacelle diameter was required to match engine cycle effective area demand in reverse thrust.

  7. Do high school chemistry examinations inhibit deeper level understanding of dynamic reversible chemical reactions?

    NASA Astrophysics Data System (ADS)

    Wheeldon, R.; Atkinson, R.; Dawes, A.; Levinson, R.

    2012-07-01

    Background and purpose : Chemistry examinations can favour the deployment of algorithmic procedures like Le Chatelier's Principle (LCP) rather than reasoning using chemical principles. This study investigated the explanatory resources which high school students use to answer equilibrium problems and whether the marks given for examination answers require students to use approaches beyond direct application of LCP. Sample : The questionnaire was administered to 162 students studying their first year of advanced chemistry (age 16/17) in three high achieving London high schools. Design and methods : The students' explanations of reversible chemical systems were inductively coded to identify the explanatory approaches used and interviews with 13 students were used to check for consistency. AS level examination questions on reversible reactions were analysed to identify the types of explanations sought and the students' performance in these examinations was compared to questionnaire answers. Results : 19% of students used a holistic explanatory approach: when the rates of forward and reverse reactions are correctly described, recognising their simultaneous and mutually dependent nature. 36% used a mirrored reactions approach when the connected nature of the forward and reverse reactions is identified, but not their mutual dependency. 42% failed to recognize the interdependence of forward and reverse reactions (reactions not connected approach). Only 4% of marks for AS examination questions on reversible chemical systems asked for responses which went beyond either direct application of LCP or recall of equilibrium knowledge. 37% of students attained an A grade in their AS national examinations. Conclusions : Examinations favour the application of LCP making it possible to obtain the highest grade with little understanding of reversible chemical systems beyond a direct application of this algorithm. Therefore students' understanding may be attenuated so that they are unable to use kinetic sub-micro level ideas which will support the building of deeper energetic conceptions at university.

  8. Quiet Clean Short-haul Experimental Engine (QCSEE) Under-The-Wing (UTW) engine composite nacelle test report. Volume 1: Summary, aerodynamic and mechanical performance

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The performance test results of the final under-the-wing engine configuration are presented. One hundred and six hours of engine operation were completed, including mechanical and performance checkout, baseline acoustic testing with a bellmouth inlet, reverse thrust testing, acoustic technology tests, and limited controls testing. The engine includes a variable pitch fan having advanced composite fan blades and using a ball-spline pitch actuation system.

  9. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.

    PubMed

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-11-07

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to handle the azimuth variation of the frequency modulation rates caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high-order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm enlarges the ADOF and improves the focusing precision for high-resolution, highly squint SAR data.

  10. Algorithm of reducing the false positives in IDS based on correlation Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Jianyi; Li, Sida; Zhang, Ru

    2018-03-01

    This paper proposes an algorithm for reducing false positives in intrusion detection systems (IDS) based on correlation analysis. First, the algorithm analyzes the distinguishing characteristics of false positives and real alarms and performs a preliminary screening of the false positives; attribute-similarity clustering is then applied to the alarms to further reduce their number; finally, according to the characteristics of multi-step attacks, the remaining alarms are associated through their causal relationships. The paper also proposes a reverse-causation algorithm, based on earlier attack-association methods, that turns alarm information into a complete attack path. Experiments show that the algorithm reduces the number of alarms, improves the efficiency of alarm processing, and contributes to identifying attack purposes and improving alarm accuracy.
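
    A hedged sketch of the attribute-similarity clustering step might look like the following; the attribute names and the greedy single-pass grouping are illustrative assumptions, not the paper's exact procedure.

    ```python
    def attribute_similarity(a, b, keys=('src_ip', 'dst_ip', 'signature')):
        """Fraction of selected attributes on which two alarms agree."""
        return sum(a[k] == b[k] for k in keys) / len(keys)

    def cluster_alarms(alarms, threshold=0.67):
        """Greedy single-pass clustering: an alarm joins the first cluster whose
        representative is at least `threshold` similar, otherwise starts a new one."""
        clusters = []
        for alarm in alarms:
            for cluster in clusters:
                if attribute_similarity(alarm, cluster[0]) >= threshold:
                    cluster.append(alarm)
                    break
            else:
                clusters.append([alarm])
        return clusters
    ```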

  11. 40 CFR 1065.275 - N2O measurement devices.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... infrared (NDIR) analyzer. You may use an NDIR analyzer that has compensation algorithms that are functions... any compensation algorithm is 0% (that is, no bias high and no bias low), regardless of the... has compensation algorithms that are functions of other gaseous measurements and the engine's known or...

  12. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  13. Reverse engineering biological networks :applications in immune responses to bio-toxins.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, Anthony A.; Sinclair, Michael B.; Davidson, George S.

    Our aim is to determine the network of events, or the regulatory network, that defines an immune response to a bio-toxin. As a model system, we are studying T cell regulatory network triggered through tyrosine kinase receptor activation using a combination of pathway stimulation and time-series microarray experiments. Our approach is composed of five steps (1) microarray experiments and data error analysis, (2) data clustering, (3) data smoothing and discretization, (4) network reverse engineering, and (5) network dynamics analysis and fingerprint identification. The technological outcome of this study is a suite of experimental protocols and computational tools that reverse engineer regulatory networks provided gene expression data. The practical biological outcome of this work is an immune response fingerprint in terms of gene expression levels. Inferring regulatory networks from microarray data is a new field of investigation that is no more than five years old. To the best of our knowledge, this work is the first attempt that integrates experiments, error analyses, data clustering, inference, and network analysis to solve a practical problem. Our systematic approach of counting, enumeration, and sampling networks matching experimental data is new to the field of network reverse engineering. The resulting mathematical analyses and computational tools lead to new results on their own and should be useful to others who analyze and infer networks.

  14. Application of neural networks to group technology

    NASA Astrophysics Data System (ADS)

    Caudell, Thomas P.; Smith, Scott D. G.; Johnson, G. C.; Wunsch, Donald C., II

    1991-08-01

    Adaptive resonance theory (ART) neural networks are being developed for application to the industrial engineering problem of group technology--the reuse of engineering designs. Two- and three-dimensional representations of engineering designs are input to ART-1 neural networks to produce groups or families of similar parts. These representations, in their basic form, amount to bit maps of the part, and can become very large when the part is represented in high resolution. This paper describes an enhancement to an algorithmic form of ART-1 that allows it to operate directly on compressed input representations and to generate compressed memory templates. The performance of this compressed algorithm is compared to that of the regular algorithm on real engineering designs, and significant savings in memory storage as well as a speedup in execution are observed. In addition, a "neural database" system under development is described. This system demonstrates the feasibility of training an ART-1 network to first cluster designs into families, and then to recall the family when presented with a similar design. This application is of great practical value to industry, making it possible to avoid duplication of design efforts.
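
    A compact ART-1 clustering loop over binary part representations is sketched below for orientation; it uses the standard choice function and vigilance test and does not include the compressed-template enhancement described in the paper.

    ```python
    import numpy as np

    def art1_cluster(patterns, rho=0.7, beta=1.0):
        """Cluster binary vectors with a basic ART-1 procedure.

        rho is the vigilance parameter; templates are the logical AND of the
        patterns assigned to each category (part family)."""
        templates, labels = [], []
        for I in patterns:
            I = np.asarray(I, dtype=bool)
            # categories ordered by the choice function |I AND w| / (beta + |w|)
            order = sorted(range(len(templates)),
                           key=lambda j: -np.sum(I & templates[j]) / (beta + np.sum(templates[j])))
            for j in order:
                match = np.sum(I & templates[j]) / max(1, np.sum(I))
                if match >= rho:                       # vigilance test passed: resonance
                    templates[j] = I & templates[j]    # learn: shrink the template
                    labels.append(j)
                    break
            else:
                templates.append(I.copy())             # no resonance: start a new family
                labels.append(len(templates) - 1)
        return labels, templates
    ```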

  15. Traffic engineering and regenerator placement in GMPLS networks with restoration

    NASA Astrophysics Data System (ADS)

    Yetginer, Emre; Karasan, Ezhan

    2002-07-01

    In this paper we study regenerator placement and traffic engineering of restorable paths in Generalized Multiprotocol Label Switching (GMPLS) networks. Regenerators are necessary in optical networks due to transmission impairments. We study a network architecture where there are regenerators at selected nodes and we propose two heuristic algorithms for the regenerator placement problem. Performances of these algorithms in terms of required number of regenerators and computational complexity are evaluated. In this network architecture with sparse regeneration, offline computation of working and restoration paths is studied with bandwidth reservation and path rerouting as the restoration scheme. We study two approaches for selecting working and restoration paths from a set of candidate paths and formulate each method as an Integer Linear Programming (ILP) problem. A traffic uncertainty model is developed in order to compare these methods based on their robustness with respect to changing traffic patterns. Traffic engineering methods are compared based on the number of additional demands due to traffic uncertainty that can be carried. Regenerator placement algorithms are also evaluated from a traffic engineering point of view.

  16. Folding and Stabilization of Native-Sequence-Reversed Proteins

    PubMed Central

    Zhang, Yuanzhao; Weber, Jeffrey K; Zhou, Ruhong

    2016-01-01

    Though the problem of sequence-reversed protein folding is largely unexplored, one might speculate that reversed native protein sequences should be significantly more foldable than purely random heteropolymer sequences. In this article, we investigate how the reverse-sequences of native proteins might fold by examining a series of small proteins of increasing structural complexity (α-helix, β-hairpin, α-helix bundle, and α/β-protein). Employing a tandem protein structure prediction algorithmic and molecular dynamics simulation approach, we find that the ability of reverse sequences to adopt native-like folds is strongly influenced by protein size and the flexibility of the native hydrophobic core. For β-hairpins with reverse-sequences that fail to fold, we employ a simple mutational strategy for guiding stable hairpin formation that involves the insertion of amino acids into the β-turn region. This systematic look at reverse sequence duality sheds new light on the problem of protein sequence-structure mapping and may serve to inspire new protein design and protein structure prediction protocols. PMID:27113844

  17. Folding and Stabilization of Native-Sequence-Reversed Proteins

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanzhao; Weber, Jeffrey K.; Zhou, Ruhong

    2016-04-01

    Though the problem of sequence-reversed protein folding is largely unexplored, one might speculate that reversed native protein sequences should be significantly more foldable than purely random heteropolymer sequences. In this article, we investigate how the reverse-sequences of native proteins might fold by examining a series of small proteins of increasing structural complexity (α-helix, β-hairpin, α-helix bundle, and α/β-protein). Employing a tandem protein structure prediction algorithmic and molecular dynamics simulation approach, we find that the ability of reverse sequences to adopt native-like folds is strongly influenced by protein size and the flexibility of the native hydrophobic core. For β-hairpins with reverse-sequences that fail to fold, we employ a simple mutational strategy for guiding stable hairpin formation that involves the insertion of amino acids into the β-turn region. This systematic look at reverse sequence duality sheds new light on the problem of protein sequence-structure mapping and may serve to inspire new protein design and protein structure prediction protocols.

  18. Correction of geometric distortion in Propeller echo planar imaging using a modified reversed gradient approach

    PubMed Central

    Chang, Hing-Chiu; Chuang, Tzu-Chao; Wang, Fu-Nien; Huang, Teng-Yi; Chung, Hsiao-Wen

    2013-01-01

    Objective This study investigates the application of a modified reversed gradient algorithm to the Propeller-EPI imaging method (periodically rotated overlapping parallel lines with enhanced reconstruction based on echo-planar imaging readout) for corrections of geometric distortions due to the EPI readout. Materials and methods Propeller-EPI acquisition was executed with 360-degree rotational coverage of the k-space, from which the image pairs with opposite phase-encoding gradient polarities were extracted for reversed gradient geometric and intensity corrections. The spatial displacements obtained on a pixel-by-pixel basis were fitted using a two-dimensional polynomial followed by low-pass filtering to assure correction reliability in low-signal regions. Single-shot EPI images were obtained on a phantom, whereas high spatial resolution T2-weighted and diffusion tensor Propeller-EPI data were acquired in vivo from healthy subjects at 3.0 Tesla, to demonstrate the effectiveness of the proposed algorithm. Results Phantom images show success of the smoothed displacement map concept in providing improvements of the geometric corrections at low-signal regions. Human brain images demonstrate prominently superior reconstruction quality of Propeller-EPI images with modified reversed gradient corrections as compared with those obtained without corrections, as evidenced from verification against the distortion-free fast spin-echo images at the same level. Conclusions The modified reversed gradient method is an effective approach to obtain high-resolution Propeller-EPI images with substantially reduced artifacts. PMID:23630654
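
    The pixel-wise displacement smoothing described above can be illustrated with a masked least-squares fit of a low-order 2D polynomial; the sketch below excludes low-signal pixels from the fit and is an illustration, not the authors' exact fitting and filtering pipeline.

    ```python
    import numpy as np

    def fit_displacement_poly(disp, mask, order=3):
        """Fit disp(y, x) with a 2D polynomial of total degree <= order,
        using only pixels where mask is True, and return the smoothed map."""
        h, w = disp.shape
        yy, xx = np.mgrid[0:h, 0:w]
        x = xx / (w - 1) - 0.5          # normalize coordinates for conditioning
        y = yy / (h - 1) - 0.5
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1) if i + j <= order]
        A = np.stack([(x ** i) * (y ** j) for i, j in terms], axis=-1)   # (h, w, n_terms)
        coeffs, *_ = np.linalg.lstsq(A[mask], disp[mask], rcond=None)
        return A @ coeffs               # smoothed displacement everywhere
    ```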

  19. 40 CFR 86.1809-10 - Prohibition of defeat devices.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and..., with the Part II certification application, an engineering evaluation demonstrating to the satisfaction... not occur in the temperature range of 20 to 86 °F. For diesel vehicles, the engineering evaluation...

  20. System Re-engineering Project Executive Summary

    DTIC Science & Technology

    1991-11-01

    Management Information System (STAMIS) application. This project involved reverse engineering, evaluation of structured design and object-oriented design, and re-implementation of the system in Ada. This executive summary presents the approach to re-engineering the system, the lessons learned while going through the process, and issues to be considered in future tasks of this nature. Keywords: Computer-Aided Software Engineering (CASE), Distributed Software, Ada, COBOL, Systems Analysis, Systems Design, Life Cycle Development, Functional Decomposition, Object-Oriented.

  1. Gender Differences in the Consistency of Middle School Students' Interest in Engineering and Science Careers

    ERIC Educational Resources Information Center

    Ing, Marsha; Aschbacher, Pamela R.; Tsai, Sherry M.

    2014-01-01

    This longitudinal study analyzes survey responses in seventh, eighth, and ninth grade from diverse public school students (n = 482) to explore gender differences in engineering and science career preferences. Females were far more likely to express interest in a science career (31%) than an engineering career (13%), while the reverse was true for…

  2. Reverse Engineering and Software Products Reuse to Teach Collaborative Web Portals: A Case Study with Final-Year Computer Science Students

    ERIC Educational Resources Information Center

    Medina-Dominguez, Fuensanta; Sanchez-Segura, Maria-Isabel; Mora-Soto, Arturo; Amescua, Antonio

    2010-01-01

    The development of collaborative Web applications does not follow a software engineering methodology. This is because when university students study Web applications in general, and collaborative Web portals in particular, they are not being trained in the use of software engineering techniques to develop collaborative Web portals. This paper…

  3. Table-driven image transformation engine algorithm

    NASA Astrophysics Data System (ADS)

    Shichman, Marc

    1993-04-01

    A high-speed image transformation engine (ITE) was designed and a prototype built for use in a generic electronic light table and image perspective transformation application code. The ITE takes any linear transformation, breaks the transformation into two passes, and resamples the image appropriately for each pass. The system performance is achieved by driving the engine with a set of lookup tables, computed at start-up, for the calculation of pixel output contributions. Anti-aliasing is done automatically in the image resampling process. Operations such as multiplications and trigonometric functions are minimized. This algorithm can be used for texture mapping, image perspective transformation, electronic light tables, and virtual reality.
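
    A one-dimensional, table-driven resampling pass of the kind described can be sketched as follows; the lookup table stores, for each output pixel, the contributing source indices and linear weights, and the full two-pass decomposition of a general linear transform is omitted.

    ```python
    import numpy as np

    def build_lut(in_len, out_len):
        """Precompute, for every output sample, the two source indices and
        linear weights that contribute to it."""
        pos = np.linspace(0, in_len - 1, out_len)
        i0 = np.floor(pos).astype(int)
        i1 = np.minimum(i0 + 1, in_len - 1)
        w1 = pos - i0
        return i0, i1, 1.0 - w1, w1

    def resample_rows(img, out_width):
        """Apply the same precomputed table to every row (one pass of a
        two-pass separable transform)."""
        i0, i1, w0, w1 = build_lut(img.shape[1], out_width)
        return img[:, i0] * w0 + img[:, i1] * w1
    ```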

  4. A novel feedback algorithm for simulating controlled dynamics and confinement in the advanced reversed-field pinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlin, J.-E.; Scheffel, J.

    2005-06-15

    In the advanced reversed-field pinch (RFP), the current density profile is externally controlled to diminish tearing instabilities. Thus the scaling of energy confinement time with plasma current and density is improved substantially as compared to the conventional RFP. This may be numerically simulated by introducing an ad hoc electric field, adjusted to generate a tearing mode stable parallel current density profile. In the present work a current profile control algorithm, based on feedback of the fluctuating electric field in Ohm's law, is introduced into the resistive magnetohydrodynamic code DEBSP [D. D. Schnack and D. C. Baxter, J. Comput. Phys. 55, 485 (1984); D. D. Schnack, D. C. Barnes, Z. Mikic, D. S. Marneal, E. J. Caramana, and R. A. Nebel, Comput. Phys. Commun. 43, 17 (1986)]. The resulting radial magnetic field is decreased considerably, causing an increase in energy confinement time and poloidal β. It is found that the parallel current density profile spontaneously becomes hollow, and that a formation, being related to persisting resistive g modes, appears close to the reversal surface.

  5. Method for controlling a motor vehicle powertrain

    DOEpatents

    Burba, Joseph C.; Landman, Ronald G.; Patil, Prabhakar B.; Reitz, Graydon A.

    1990-01-01

    A multiple forward speed automatic transmission produces its lowest forward speed ratio when a hydraulic clutch and hydraulic brake are disengaged and a one-way clutch connects a ring gear to the transmission casing. Second forward speed ratio results when the hydraulic clutch is engaged to connect the ring gear to the planetary carrier of a second gear set. Reverse drive and regenerative operation result when an hydraulic brake fixes the planetary and the direction of power flow is reversed. Various sensors produce signals representing the position of the gear selector lever operated manually by the vehicle operator, the speed of the power source, the state of the ignition key, and the rate of release of an accelerator pedal. A control algorithm produces input data representing a commanded upshift, a commanded downshift and a torque command and various constant torque signals. A microprocessor processes the input and produces a response to them in accordance with the execution of a control algorithm. Output or response signals cause selective engagement and disengagement of the clutch and brake to produce the forward drive, reverse and regenerative operation of the transmission.

  6. Method for controlling a motor vehicle powertrain

    DOEpatents

    Burba, J.C.; Landman, R.G.; Patil, P.B.; Reitz, G.A.

    1990-05-22

    A multiple forward speed automatic transmission produces its lowest forward speed ratio when a hydraulic clutch and hydraulic brake are disengaged and a one-way clutch connects a ring gear to the transmission casing. Second forward speed ratio results when the hydraulic clutch is engaged to connect the ring gear to the planetary carrier of a second gear set. Reverse drive and regenerative operation result when an hydraulic brake fixes the planetary and the direction of power flow is reversed. Various sensors produce signals representing the position of the gear selector lever operated manually by the vehicle operator, the speed of the power source, the state of the ignition key, and the rate of release of an accelerator pedal. A control algorithm produces input data representing a commanded upshift, a commanded downshift and a torque command and various constant torque signals. A microprocessor processes the input and produces a response to them in accordance with the execution of a control algorithm. Output or response signals cause selective engagement and disengagement of the clutch and brake to produce the forward drive, reverse and regenerative operation of the transmission. 7 figs.

  7. Computational discovery and in vivo validation of hnf4 as a regulatory gene in planarian regeneration.

    PubMed

    Lobo, Daniel; Morokuma, Junji; Levin, Michael

    2016-09-01

    Automated computational methods can infer dynamic regulatory network models directly from temporal and spatial experimental data, such as genetic perturbations and their resultant morphologies. Recently, a computational method was able to reverse-engineer the first mechanistic model of planarian regeneration that can recapitulate the main anterior-posterior patterning experiments published in the literature. Validating this comprehensive regulatory model via novel experiments that had not yet been performed would add to our understanding of the remarkable regeneration capacity of planarian worms and demonstrate the power of this automated methodology. Using the Michigan Molecular Interactions and STRING databases and the MoCha software tool, we characterized as hnf4 an unknown regulatory gene predicted to exist by the reverse-engineered dynamic model of planarian regeneration. Then, we used the dynamic model to predict the morphological outcomes under different single and multiple knock-downs (RNA interference) of hnf4 and its predicted gene pathway interactors β-catenin and hh. Interestingly, the model predicted that RNAi of hnf4 would rescue the abnormal regenerated phenotype (tailless) of RNAi of hh in amputated trunk fragments. Finally, we validated these predictions in vivo by performing the same surgical and genetic experiments with planarian worms, obtaining the same phenotypic outcomes predicted by the reverse-engineered model. These results suggest that hnf4 is a regulatory gene in planarian regeneration, validate the computational predictions of the reverse-engineered dynamic model, and demonstrate the automated methodology for the discovery of novel genes, pathways and experimental phenotypes. michael.levin@tufts.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  9. Generation of recombinant rotaviruses expressing fluorescent proteins using an optimized reverse genetics system.

    PubMed

    Komoto, Satoshi; Fukuda, Saori; Ide, Tomihiko; Ito, Naoto; Sugiyama, Makoto; Yoshikawa, Tetsushi; Murata, Takayuki; Taniguchi, Koki

    2018-04-18

    An entirely plasmid-based reverse genetics system for rotaviruses was established very recently. We improved this reverse genetics system so that recombinant rotavirus can be generated by transfecting only the 11 cDNA plasmids for its 11 gene segments, with an increased ratio of the cDNA plasmids for the NSP2 and NSP5 genes. Utilizing this highly efficient system, we then engineered infectious recombinant rotaviruses expressing bioluminescent (NanoLuc luciferase) and fluorescent (EGFP and mCherry) reporters. These recombinant rotaviruses expressing reporters remained genetically stable during serial passages. Our reverse genetics approach and recombinant rotaviruses carrying reporter genes will be great additions to the tool kit for studying the molecular virology of rotavirus, and for developing future next-generation vaccines and expression vectors. IMPORTANCE Rotavirus is one of the most important pathogens causing severe gastroenteritis in young children worldwide. In this paper, we describe a robust and simple reverse genetics system based on only rotavirus cDNAs, and its application for engineering infectious recombinant rotaviruses harboring bioluminescent (NanoLuc) and fluorescent (EGFP and mCherry) protein genes. This highly efficient reverse genetics system and recombinant RVAs expressing reporters could be powerful tools for the study of different aspects of rotavirus replication. Furthermore, they may be useful for next-generation vaccine production for this medically important virus. Copyright © 2018 American Society for Microbiology.

  10. An approach in building a chemical compound search engine in oracle database.

    PubMed

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for or identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine requires a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches for representing, storing, and retrieving structures in a database system. In this paper, a general database framework for serving as a chemical compound search engine in an Oracle database is described. The framework is designed to eliminate data-type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The ease of the implementation underscores the efficiency and simplicity of the framework.

  11. 40 CFR 1065.250 - Nondispersive infrared analyzer.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no bias high and...

  12. 40 CFR 1065.284 - Zirconia (ZrO2) analyzer.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no...

  13. 40 CFR 1065.250 - Nondispersive infra-red analyzer.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... analyzer that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no...

  14. 40 CFR 1065.284 - Zirconia (ZrO2) analyzer.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no...

  15. 40 CFR 1065.270 - Chemiluminescent detector.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no bias high and no bias low...

  16. 40 CFR 1065.284 - Zirconia (ZrO2) analyzer.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no...

  17. 40 CFR 1065.270 - Chemiluminescent detector.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no bias high and no bias low...

  18. 40 CFR 1065.250 - Nondispersive infrared analyzer.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no bias high and...

  19. 40 CFR 1065.270 - Chemiluminescent detector.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no bias high and no bias low...

  20. 40 CFR 1065.284 - Zirconia (ZrO2) analyzer.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no...

  1. 40 CFR 1065.270 - Chemiluminescent detector.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no bias high and no bias low...

  2. 40 CFR 1065.250 - Nondispersive infra-red analyzer.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... analyzer that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no...

  3. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Patnaik, Surya N.

    2000-01-01

    A preliminary aircraft engine design methodology is being developed that utilizes a cascade optimization strategy together with neural network and regression approximation methods. The cascade strategy employs different optimization algorithms in a specified sequence. The neural network and regression methods are used to approximate solutions obtained from the NASA Engine Performance Program (NEPP), which implements engine thermodynamic cycle and performance analysis models. The new methodology is proving to be more robust and computationally efficient than the conventional optimization approach of using a single optimization algorithm with direct reanalysis. The methodology has been demonstrated on a preliminary design problem for a novel subsonic turbofan engine concept that incorporates a wave rotor as a cycle-topping device. Computations of maximum thrust were obtained for a specific design point in the engine mission profile. The results show a significant improvement in the maximum thrust obtained using the new methodology in comparison to benchmark solutions obtained using NEPP in a manual design mode.

  4. A group LASSO-based method for robustly inferring gene regulatory networks from multiple time-course datasets.

    PubMed

    Liu, Li-Zhi; Wu, Fang-Xiang; Zhang, Wen-Jun

    2014-01-01

    As an abstract mapping of the gene regulations in the cell, a gene regulatory network is important to both biological research and practical applications. The reverse engineering of gene regulatory networks from microarray gene expression data is a challenging research problem in systems biology. With the development of biological technologies, multiple time-course gene expression datasets might be collected for a specific gene network under different circumstances. The inference of a gene regulatory network can be improved by integrating these multiple datasets. It is also known that gene expression data may be contaminated with large errors or outliers, which may affect the inference results. A novel method, Huber group LASSO, is proposed to infer the same underlying network topology from multiple time-course gene expression datasets while taking robustness to large errors or outliers into account. To solve the optimization problem involved in the proposed method, an efficient algorithm which combines the ideas of auxiliary function minimization and block descent is developed. A stability selection method is adapted to our method to find a network topology consisting of edges with scores. The proposed method is applied to both simulation datasets and real experimental datasets. It shows that Huber group LASSO outperforms the group LASSO in terms of both areas under receiver operating characteristic curves and areas under precision-recall curves. The convergence analysis of the algorithm theoretically shows that the sequence generated from the algorithm converges to the optimal solution of the problem. The simulation and real data examples demonstrate the effectiveness of the Huber group LASSO in integrating multiple time-course gene expression datasets and improving the resistance to large errors or outliers.
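
    As an illustration of the kind of criterion involved, the following is a minimal numpy sketch of a Huber-loss group-LASSO objective for one target gene across several time-course datasets; the linear time-lagged model, the variable names and the parameter values are assumptions made for the example, not the authors' implementation.

    import numpy as np

    def huber(r, delta=1.0):
        # Huber loss applied elementwise to the residuals r: quadratic for small
        # residuals, linear for large ones, which limits the influence of outliers.
        quad = 0.5 * r ** 2
        lin = delta * (np.abs(r) - 0.5 * delta)
        return np.where(np.abs(r) <= delta, quad, lin)

    def objective(A, X_list, y_list, lam=0.1, delta=1.0):
        # A has shape (n_regulators, K): column k holds the coefficients of the
        # target gene on all candidate regulators in dataset k. The group penalty
        # (sum of row 2-norms) couples each regulator across the K datasets.
        fit = sum(huber(y - X @ A[:, k], delta).sum()
                  for k, (X, y) in enumerate(zip(X_list, y_list)))
        return fit + lam * np.linalg.norm(A, axis=1).sum()

    # toy usage: two datasets, 20 time points, 5 candidate regulators
    rng = np.random.default_rng(0)
    X_list = [rng.normal(size=(20, 5)) for _ in range(2)]
    y_list = [X @ np.array([1.0, 0.0, 0.0, -0.5, 0.0]) + 0.1 * rng.normal(size=20)
              for X in X_list]
    print(objective(np.zeros((5, 2)), X_list, y_list))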

  5. Experimental and analytical study of secondary path variations in active engine mounts

    NASA Astrophysics Data System (ADS)

    Hausberg, Fabian; Scheiblegger, Christian; Pfeffer, Peter; Plöchl, Manfred; Hecker, Simon; Rupp, Markus

    2015-03-01

    Active engine mounts (AEMs) provide an effective solution to further improve the acoustic and vibrational comfort of passenger cars. Typically, adaptive feedforward control algorithms, e.g., the filtered-x-least-mean-squares (FxLMS) algorithm, are applied to cancel disturbing engine vibrations. These algorithms require an accurate estimate of the AEM active dynamic characteristics, also known as the secondary path, in order to guarantee control performance and stability. This paper focuses on the experimental and theoretical study of secondary path variations in AEMs. The impact of three major influences, namely nonlinearity, change of preload and component temperature, on the AEM active dynamic characteristics is experimentally analyzed. The obtained test results are theoretically investigated with a linear AEM model which incorporates an appropriate description for elastomeric components. A special experimental set-up extends the model validation of the active dynamic characteristics to higher frequencies up to 400 Hz. The theoretical and experimental results show that significant secondary path variations are merely observed in the frequency range of the AEM actuator's resonance frequency. These variations mainly result from the change of the component temperature. As the stability of the algorithm is primarily affected by the actuator's resonance frequency, the findings of this paper facilitate the design of AEMs with simpler adaptive feedforward algorithms. From a practical point of view it may further be concluded that algorithmic countermeasures against instability are only necessary in the frequency range of the AEM actuator's resonance frequency.

  6. Reversible simulation of irreversible computation

    NASA Astrophysics Data System (ADS)

    Li, Ming; Tromp, John; Vitányi, Paul

    1998-09-01

    Computer computations are generally irreversible while the laws of physics are reversible. This mismatch is penalized by among other things generating excess thermic entropy in the computation. Computing performance has improved to the extent that efficiency degrades unless all algorithms are executed reversibly, for example by a universal reversible simulation of irreversible computations. All known reversible simulations are either space hungry or time hungry. The leanest method was proposed by Bennett and can be analyzed using a simple ‘reversible’ pebble game. The reachable reversible simulation instantaneous descriptions (pebble configurations) of such pebble games are characterized completely. As a corollary we obtain the reversible simulation by Bennett and, moreover, show that it is a space-optimal pebble game. We also introduce irreversible steps and give a theorem on the tradeoff between the number of allowed irreversible steps and the memory gain in the pebble game. In this resource-bounded setting the limited erasing needs to be performed at precise instants during the simulation. The reversible simulation can be modified so that it is applicable also when the simulated computation time is unknown.

  7. Optimization of diesel engine performance by the Bees Algorithm

    NASA Astrophysics Data System (ADS)

    Azfanizam Ahmad, Siti; Sunthiram, Devaraj

    2018-03-01

    Biodiesel has recently been receiving great attention in the world market due to the depletion of existing fossil fuels. Biodiesel is also an alternative to diesel No. 2 fuel, possessing characteristics such as being biodegradable and oxygenated. However, evidence suggests that biodiesel does not have features fully equivalent to those of diesel No. 2 fuel, as the use of biodiesel has been reported to increase the brake specific fuel consumption (BSFC). The objective of this study is to find the maximum brake power and brake torque as well as the minimum BSFC to optimize the operating condition of a diesel engine when using biodiesel fuel. This optimization was conducted using the Bees Algorithm (BA) over the biodiesel percentage in the fuel mixture, the engine speed and the engine load. The result showed that 58.33 kW of brake power, 310.33 N.m of brake torque and 200.29/(kW.h) of BSFC were the optimum values. Compared with those obtained by another algorithm, the BA produced a good brake power and a better brake torque and BSFC. This finding demonstrates that the BA can be used to optimize the performance of a diesel engine based on the optimum values of the brake power, brake torque and BSFC.

  8. A General Tool for Engineering the NAD/NADP Cofactor Preference of Oxidoreductases.

    PubMed

    Cahn, Jackson K B; Werlang, Caroline A; Baumschlager, Armin; Brinkmann-Chen, Sabine; Mayo, Stephen L; Arnold, Frances H

    2017-02-17

    The ability to control enzymatic nicotinamide cofactor utilization is critical for engineering efficient metabolic pathways. However, the complex interactions that determine cofactor-binding preference render this engineering particularly challenging. Physics-based models have been insufficiently accurate and blind directed evolution methods too inefficient to be widely adopted. Building on a comprehensive survey of previous studies and our own prior engineering successes, we present a structure-guided, semirational strategy for reversing enzymatic nicotinamide cofactor specificity. This heuristic-based approach leverages the diversity and sensitivity of catalytically productive cofactor binding geometries to limit the problem to an experimentally tractable scale. We demonstrate the efficacy of this strategy by inverting the cofactor specificity of four structurally diverse NADP-dependent enzymes: glyoxylate reductase, cinnamyl alcohol dehydrogenase, xylose reductase, and iron-containing alcohol dehydrogenase. The analytical components of this approach have been fully automated and are available in the form of an easy-to-use web tool: Cofactor Specificity Reversal-Structural Analysis and Library Design (CSR-SALAD).

  9. The effect of the electron–phonon interaction on reverse currents of GaAs-based p–n junctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhukov, A. V., E-mail: ZhukovAndreyV@mail.ru

    An algorithm for calculating the parameters of the electron–phonon interaction of the EL2 trap has been developed and implemented based on the example of GaAs. Using the obtained parameters, the field dependences of the probabilities of nonradiative transitions from the trap and reverse currents of the GaAs p–n junctions are calculated, which are in good agreement with the experimental data.

  10. Cheaper Adjoints by Reversing Address Computations

    DOE PAGES

    Hascoët, L.; Utke, J.; Naumann, U.

    2008-01-01

    The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which traditionally are recovered through forward recomputation and storage in memory. We propose an alternative approach for recovery that uses inverse computation based on dependency information. Address storage constitutes a significant portion of the overall storage requirements. An example illustrates substantial gains that the proposed approach yields, and we show use cases in practical applications.

  11. 14 CFR 25.933 - Reversing systems.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... analysis or testing, or both, for propeller systems that allow propeller blades to move from the flight low... reversal in flight the engine will produce no more than flight idle thrust. In addition, it must be shown... position; and (ii) The airplane is capable of continued safe flight and landing under any possible position...

  12. 14 CFR 25.933 - Reversing systems.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... analysis or testing, or both, for propeller systems that allow propeller blades to move from the flight low... reversal in flight the engine will produce no more than flight idle thrust. In addition, it must be shown... position; and (ii) The airplane is capable of continued safe flight and landing under any possible position...

  13. Practical sliced configuration spaces for curved planar pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacks, E.

    1999-01-01

    In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.

  14. Real-time micro-vibration multi-spot synchronous measurement within a region based on heterodyne interference

    NASA Astrophysics Data System (ADS)

    Lan, Ma; Xiao, Wen; Chen, Zonghui; Hao, Hongliang; Pan, Feng

    2018-01-01

    Real-time micro-vibration measurement is widely used in engineering applications. It is very difficult for traditional optical detection methods to achieve real-time, relatively high-frequency, multi-spot synchronous measurement over a region, especially at the nanoscale. Based on the method of heterodyne interference, an experimental system for real-time measurement of micro-vibration is constructed to satisfy this demand in engineering applications. The vibration response signal is measured by combining optical heterodyne interferometry with a high-speed CMOS-DVR image acquisition system. Then, by extracting and processing multiple pixels at the same time, four digital demodulation techniques are implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. Different kinds of demodulation algorithms are analyzed and the results show that these four demodulation algorithms are suitable for different interference signals. Both the autocorrelation and cross-correlation algorithms meet the needs of real-time measurement. The autocorrelation algorithm demodulates the frequency more accurately, while the cross-correlation algorithm is more accurate in recovering the amplitude.
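
    As a rough illustration of the autocorrelation-based frequency demodulation mentioned above, the Python sketch below estimates the dominant beat frequency of one pixel's sampled signal from the first non-zero-lag peak of its autocorrelation; the signal model and the sampling values are assumptions for the example, not the instrument's actual processing chain.

    import numpy as np

    def dominant_frequency(signal, fs):
        # Estimate the dominant frequency (Hz) of a real signal sampled at fs Hz
        # from the lag of the first autocorrelation peak after the zero-lag peak.
        s = signal - signal.mean()
        ac = np.correlate(s, s, mode="full")[len(s) - 1:]   # non-negative lags
        d = np.diff(ac)
        rising = np.where((d[:-1] <= 0) & (d[1:] > 0))[0]   # end of the first dip
        start = rising[0] + 1 if rising.size else 1
        lag = start + np.argmax(ac[start:])                 # first major peak
        return fs / lag

    # toy check: a 5 kHz beat signal sampled at 200 kHz
    fs, f0 = 200e3, 5e3
    t = np.arange(4000) / fs
    print(dominant_frequency(np.cos(2 * np.pi * f0 * t), fs))   # ~5000 Hz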

  15. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications.

    PubMed

    Gangadari, Bhoopal Rao; Rafi Ahamed, Shaik

    2016-09-01

    In biomedical applications, data security is the most expensive resource for wireless body area networks. Cryptographic algorithms are used in order to protect the information against unauthorised access. The advanced encryption standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA2-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA2-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, it is also found that the RCA2-based S-Box has comparatively better performance than the conventional LUT-based S-Box.
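
    The reversibility that the RCA2 construction relies on can be illustrated with a small Python toy: with a second-order rule next = F(curr) XOR prev, the update is invertible for any local rule F, since prev = F(curr) XOR next. The 8-cell ring and the particular rule below are assumptions chosen for illustration, not the authors' S-Box design.

    def F(cells):
        # an arbitrary local rule on a ring of bits; any rule keeps the scheme reversible
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] & cells[(i + 1) % n]) for i in range(n)]

    def step(prev, curr):
        # forward second-order update: next = F(curr) XOR prev
        return curr, [f ^ p for f, p in zip(F(curr), prev)]

    def unstep(curr, nxt):
        # exact inverse of step: prev = F(curr) XOR next
        return [f ^ x for f, x in zip(F(curr), nxt)], curr

    prev, curr = [0] * 8, [1, 0, 1, 1, 0, 0, 1, 0]
    state = (prev, curr)
    for _ in range(10):              # run 10 steps forward ...
        state = step(*state)
    for _ in range(10):              # ... then 10 steps backward
        state = unstep(*state)
    assert state == (prev, curr)     # the initial configuration is recovered exactly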

  16. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications

    PubMed Central

    Rafi Ahamed, Shaik

    2016-01-01

    In biomedical applications, data security is the most expensive resource for wireless body area networks. Cryptographic algorithms are used in order to protect the information against unauthorised access. The advanced encryption standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA2-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA2-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, it is also found that the RCA2-based S-Box has comparatively better performance than the conventional LUT-based S-Box. PMID:27733924

  17. Angles-centroids fitting calibration and the centroid algorithm applied to reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Zhao, Zhu; Hui, Mei; Xia, Zhengzheng; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Kong, Lingqin; Zhao, Yuejin

    2017-02-01

    In this paper, we develop an angles-centroids fitting (ACF) system and a centroid algorithm to calibrate the reverse Hartmann test (RHT) with sufficient precision. The essence of ACF calibration is to establish the relationship between ray angles and detector coordinates. Centroid computation is used to find correspondences between the rays of datum marks and detector pixels. Here, the point spread function of the RHT is classified as a circle of confusion (CoC), and the fitting of a CoC spot with a 2D Gaussian profile to identify the centroid forms the basis of the centroid algorithm. Theoretical and experimental results of the centroid computation demonstrate that the Gaussian fitting method has a smaller centroid shift, or the shift grows more slowly, as the quality of the image is reduced. In the ACF tests, the optical instrumental alignments reach an overall accuracy of 0.1 pixel with the application of a laser spot centroid-tracking program. Locating the crystal at different positions, the feasibility and accuracy of the ACF calibration are further validated, with a root-mean-square error of the calibration differences of 10^-6 to 10^-4 rad.
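
    A minimal Python sketch of the Gaussian-fitting centroid step is given below; the spot model, the patch handling and the initial guesses are assumptions for illustration rather than the authors' code.

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, amp, x0, y0, sigma, offset):
        # symmetric 2D Gaussian plus a constant background, flattened for curve_fit
        x, y = coords
        g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset
        return g.ravel()

    def centroid(patch):
        # return the sub-pixel (x0, y0) centre of a bright CoC-like spot in `patch`
        h, w = patch.shape
        y, x = np.mgrid[0:h, 0:w]
        p0 = (patch.max() - patch.min(),         # amplitude guess
              x[patch == patch.max()][0],        # x0 guess at the brightest pixel
              y[patch == patch.max()][0],        # y0 guess
              2.0,                               # sigma guess (pixels)
              patch.min())                       # background guess
        popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
        return popt[1], popt[2]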

  18. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella

    2018-03-01

    Multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from the multi-verse theory in physics. However, MVO, like most optimization algorithms, suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied on 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field; namely, three-bar truss, speed reducer design, pressure vessel problem, spring design, welded beam, rolling element bearing and multiple disc clutch brake. In the current study, a modified feasible-based mechanism is employed to handle constraints. In this mechanism, four rules were used to handle the specific constraint problem through maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. Also, the results reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.

  19. Using Collision Cones to Assess Biological Deconfliction Methods

    NASA Astrophysics Data System (ADS)

    Brace, Natalie

    For autonomous vehicles to navigate the world as efficiently and effectively as biological species, improvements are needed in terms of control strategies and estimation algorithms. Reactive collision avoidance is one specific area where biological systems outperform engineered algorithms. To better understand the discrepancy between engineered and biological systems, a collision avoidance algorithm was applied to frames of trajectory data from three biological species (Myotis velifer, Hirundo rustica, and Danio aequipinnatus). The algorithm uses information that can be sensed through visual cues (relative position and velocity) to define collision cones which are used to determine if agents are on a collision course and if so, to find a safe velocity that requires minimal deviation from the original velocity for each individual agent. Two- and three-dimensional versions of the algorithm with constant speed and maximum speed velocity requirements were considered. The obstacles provided to the algorithm were determined by the sensing range in terms of either metric or topological distance. The calculated velocities showed good correlation with observed velocities over the range of sensing parameters, indicating that the algorithm is a good basis for comparison and could potentially be improved with further study.
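
    A hedged sketch of the basic collision-cone test described above follows, written in the equivalent closest-point-of-approach form: two agents are on a collision course if their relative velocity points inside the cone of directions whose predicted miss distance falls below a combined safety radius. The radius value and the constant-velocity assumption are illustrative choices, not parameters from the study.

    import numpy as np

    def on_collision_course(p1, v1, p2, v2, radius):
        # p1, v1 and p2, v2 are positions and velocities of the two agents (2D or 3D)
        r = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
        v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
        vv = v @ v
        if vv == 0.0:                                       # no relative motion
            return np.linalg.norm(r) < radius
        t_cpa = -(r @ v) / vv                               # time of closest approach
        if t_cpa <= 0.0:                                    # agents are diverging
            return False
        miss = np.linalg.norm(r + t_cpa * v)                # predicted miss distance
        return miss < radius

    # two agents converging head-on, 10 m apart, with a 1 m combined radius
    print(on_collision_course([0, 0], [1, 0], [10, 0], [-1, 0], 1.0))   # True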

  20. Investigation of two-dimensional wedge exhaust nozzles for advanced aircraft

    NASA Technical Reports Server (NTRS)

    Maiden, D. L.; Petit, J. E.

    1975-01-01

    Two-dimensional wedge nozzle performance characteristics were investigated in a series of wind-tunnel tests. An isolated single-engine/nozzle model was used to study the effects of internal expansion area ratio, aftbody cowl boattail angle, and wedge length. An integrated twin-engine/nozzle model, tested with and without empennage surfaces, included cruise, acceleration, thrust vectoring and thrust reversing nozzle operating modes. Results indicate that the thrust-minus-aftbody-drag performance of the twin two-dimensional nozzle integration is significantly higher, for speeds greater than Mach 0.8, than the performance achieved with twin axisymmetric nozzle installations. Significant jet-induced lift was obtained on an aft-mounted lifting surface using a cambered wedge center body to vector thrust. The thrust reversing capabilities of reverser panels installed on the two-dimensional wedge center body were very effective for static or in-flight operation.

  1. Reverse engineering the mechanical and molecular pathways in stem cell morphogenesis.

    PubMed

    Lu, Kai; Gordon, Richard; Cao, Tong

    2015-03-01

    The formation of relevant biological structures poses a challenge for regenerative medicine. During embryogenesis, embryonic cells differentiate into somatic tissues and undergo morphogenesis to produce three-dimensional organs. Using stem cells, we can recapitulate this process and create biological constructs for therapeutic transplantation. However, imperfect imitation of nature sometimes results in in vitro artifacts that fail to recapitulate the function of native organs. It has been hypothesized that developing cells may self-organize into tissue-specific structures given a correct in vitro environment. This proposition is supported by the generation of neo-organoids from stem cells. We suggest that morphogenesis may be reverse engineered to uncover its interacting mechanical pathway and molecular circuitry. By harnessing the latent architecture of stem cells, novel tissue-engineering strategies may be conceptualized for generating self-organizing transplants. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Pollution Reduction Technology Program, Turboprop Engines, Phase 1

    NASA Technical Reports Server (NTRS)

    Anderson, R. D.; Herman, A. S.; Tomlinson, J. G.; Vaught, J. M.; Verdouw, A. J.

    1976-01-01

    Exhaust pollutant emissions were measured from a 501-D22A turboprop engine combustor and three low emission combustor types -- reverse flow, prechamber, and staged fuel, operating over a fuel-air ratio range of .0096 to .020. The EPAP LTO cycle data were obtained for a total of nineteen configurations. Hydrocarbon emissions were reduced from 15.0 to .3 lb/1000 Hp-Hr/cycle, CO from 31.5 to 4.6 lb/1000 Hp-Hr/cycle with an increase in NOx of 17 percent, which is still 25% below the program goal. The smoke number was reduced from 59 to 17. Emissions given here are for the reverse flow Mod. IV combustor which is the best candidate for further development into eventual use with the 501-D22A turboprop engine. Even lower emissions were obtained with the advanced technology combustors.

  3. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
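
    The staged pipeline can be sketched with Pillow as follows; the 2x decimation factor, the JPEG quality and the unsharp-mask settings are illustrative assumptions, not values from the patent.

    import io
    from PIL import Image, ImageFilter

    def compress(img: Image.Image, factor: int = 2, quality: int = 75) -> bytes:
        # decimate in two dimensions, then apply the predefined compressor (JPEG)
        small = img.resize((img.width // factor, img.height // factor), Image.LANCZOS)
        buf = io.BytesIO()
        small.save(buf, format="JPEG", quality=quality)
        return buf.getvalue()

    def decompress(data: bytes, size: tuple) -> Image.Image:
        # inverse of the compressor, interpolation back to the original array size,
        # then edge sharpening to enhance the perceptual quality
        small = Image.open(io.BytesIO(data))
        full = small.resize(size, Image.BICUBIC)
        return full.filter(ImageFilter.UnsharpMask(radius=2, percent=150))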

  4. Biogeography-based particle swarm optimization with fuzzy elitism and its applications to constrained engineering problems

    NASA Astrophysics Data System (ADS)

    Guo, Weian; Li, Wuzhao; Zhang, Qun; Wang, Lei; Wu, Qidi; Ren, Hongliang

    2014-11-01

    In evolutionary algorithms, elites are crucial to maintain good features in solutions. However, too many elites can make the evolutionary process stagnate and cannot enhance the performance. This article employs particle swarm optimization (PSO) and biogeography-based optimization (BBO) to propose a hybrid algorithm termed biogeography-based particle swarm optimization (BPSO) which could make a large number of elites effective in searching optima. In this algorithm, the whole population is split into several subgroups; BBO is employed to search within each subgroup and PSO for the global search. Since not all the population is used in PSO, this structure overcomes the premature convergence in the original PSO. Time complexity analysis shows that the novel algorithm does not increase the time consumption. Fourteen numerical benchmarks and four engineering problems with constraints are used to test the BPSO. To better deal with constraints, a fuzzy strategy for the number of elites is investigated. The simulation results validate the feasibility and effectiveness of the proposed algorithm.

  5. Spectral unmixing of agents on surfaces for the Joint Contaminated Surface Detector (JCSD)

    NASA Astrophysics Data System (ADS)

    Slamani, Mohamed-Adel; Chyba, Thomas H.; LaValley, Howard; Emge, Darren

    2007-09-01

    ITT Corporation, Advanced Engineering and Sciences Division, is currently developing the Joint Contaminated Surface Detector (JCSD) technology under an Advanced Concept Technology Demonstration (ACTD) managed jointly by the U.S. Army Research, Development, and Engineering Command (RDECOM) and the Joint Project Manager for Nuclear, Biological, and Chemical Contamination Avoidance for incorporation on the Army's future reconnaissance vehicles. This paper describes the design of the chemical agent identification (ID) algorithm associated with JCSD. The algorithm detects target chemicals mixed with surface and interferent signatures. Simulated data sets were generated from real instrument measurements to support a matrix of parameters based on a Design Of Experiments approach (DOE). Decisions based on receiver operating characteristics (ROC) curves and area-under-the-curve (AUC) measures were used to down-select between several ID algorithms. Results from top performing algorithms were then combined via a fusion approach to converge towards optimum rates of detections and false alarms. This paper describes the process associated with the algorithm design and provides an illustrating example.

  6. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

  7. Intelligent Life-Extending Controls for Aircraft Engines Studied

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei

    2005-01-01

    Current aircraft engine controllers are designed and operated to provide desired performance and stability margins. Except for the hard limits for extreme conditions, engine controllers do not usually take engine component life into consideration during the controller design and operation. The end result is that aircraft pilots regularly operate engines under unnecessarily harsh conditions to strive for optimum performance. The NASA Glenn Research Center and its industrial and academic partners have been working together toward an intelligent control concept that will include engine life as part of the controller design criteria. This research includes the study of the relationship between control action and engine component life as well as the design of an intelligent control algorithm to provide proper tradeoffs between performance and engine life. This approach is expected to maintain operating safety while minimizing overall operating costs. In this study, the thermomechanical fatigue (TMF) of a critical component was selected to demonstrate how an intelligent engine control algorithm can significantly extend engine life with only a very small sacrifice in performance. An intelligent engine control scheme based on modifying the high-pressure spool speed (NH) was proposed to reduce TMF damage from ground idle to takeoff. The NH acceleration schedule was optimized to minimize the TMF damage for a given rise-time constraint, which represents the performance requirement. The intelligent engine control scheme was used to simulate a commercial short-haul aircraft engine.

  8. E-TALEN: a web tool to design TALENs for genome engineering.

    PubMed

    Heigwer, Florian; Kerr, Grainne; Walther, Nike; Glaeser, Kathrin; Pelz, Oliver; Breinig, Marco; Boutros, Michael

    2013-11-01

    Use of transcription activator-like effector nucleases (TALENs) is a promising new technique in the field of targeted genome engineering, editing and reverse genetics. Its applications span from introducing knockout mutations to endogenous tagging of proteins and targeted excision repair. Owing to this wide range of possible applications, there is a need for fast and user-friendly TALEN design tools. We developed E-TALEN (http://www.e-talen.org), a web-based tool to design TALENs for experiments of varying scale. E-TALEN enables the design of TALENs against a single target or a large number of target genes. We significantly extended previously published design concepts to consider genomic context and different applications. E-TALEN guides the user through an end-to-end design process of de novo TALEN pairs, which are specific to a certain sequence or genomic locus. Furthermore, E-TALEN offers a functionality to predict targeting and specificity for existing TALENs. Owing to the computational complexity of many of the steps in the design of TALENs, particular emphasis has been put on the implementation of fast yet accurate algorithms. We implemented a user-friendly interface, from the input parameters to the presentation of results. An additional feature of E-TALEN is the in-built sequence and annotation database available for many organisms, including human, mouse, zebrafish, Drosophila and Arabidopsis, which can be extended in the future.

  9. Heuristics for Relevancy Ranking of Earth Dataset Search Results

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Quinn, P.; Norton, J.

    2016-12-01

    As the Variety of Earth science datasets increases, science researchers find it more challenging to discover and select the datasets that best fit their needs. The most common way for search providers to address this problem is to rank the datasets returned for a query by their likely relevance to the user. Large web page search engines typically use text matching supplemented with reverse link counts, semantic annotations and user intent modeling. However, this produces uneven results when applied to dataset metadata records simply externalized as a web page. Fortunately, data and search providers have decades of experience in serving data user communities, allowing them to form heuristics that leverage the structure in the metadata together with knowledge about the user community. Some of these heuristics include specific ways of matching the user input to the essential measurements in the dataset and determining overlaps of time range and spatial areas. Heuristics based on the novelty of the datasets can prioritize later, better versions of data over similar predecessors. And knowledge of how different user types and communities use data can be brought to bear in cases where characteristics of the user (discipline, expertise) or their intent (applications, research) can be divined. The Earth Observing System Data and Information System has begun implementing some of these heuristics in the relevancy algorithm of its Common Metadata Repository search engine.
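
    A toy Python illustration of how such heuristics might be combined into a single relevance score is shown below; the field names, weights and time representation are assumptions for the sketch, not the Common Metadata Repository implementation.

    def relevance(dataset, query):
        score = 0.0
        # match the user's keywords against the dataset's essential measurements
        terms = set(query["keywords"])
        score += 2.0 * len(terms & set(dataset["measurements"]))
        # reward temporal overlap between the query window and the dataset coverage
        t0 = max(query["start"], dataset["start"])
        t1 = min(query["end"], dataset["end"])
        if t1 > t0:
            score += (t1 - t0) / (query["end"] - query["start"])
        # prefer later, better versions of otherwise similar datasets
        score += 0.1 * dataset.get("version", 0)
        return score

    # rank candidate records for a query:
    # ranked = sorted(candidates, key=lambda d: relevance(d, query), reverse=True)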

  10. Heuristics for Relevancy Ranking of Earth Dataset Search Results

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Quinn, Patrick; Norton, James

    2016-01-01

    As the Variety of Earth science datasets increases, science researchers find it more challenging to discover and select the datasets that best fit their needs. The most common way for search providers to address this problem is to rank the datasets returned for a query by their likely relevance to the user. Large web page search engines typically use text matching supplemented with reverse link counts, semantic annotations and user intent modeling. However, this produces uneven results when applied to dataset metadata records simply externalized as a web page. Fortunately, data and search providers have decades of experience in serving data user communities, allowing them to form heuristics that leverage the structure in the metadata together with knowledge about the user community. Some of these heuristics include specific ways of matching the user input to the essential measurements in the dataset and determining overlaps of time range and spatial areas. Heuristics based on the novelty of the datasets can prioritize later, better versions of data over similar predecessors. And knowledge of how different user types and communities use data can be brought to bear in cases where characteristics of the user (discipline, expertise) or their intent (applications, research) can be divined. The Earth Observing System Data and Information System has begun implementing some of these heuristics in the relevancy algorithm of its Common Metadata Repository search engine.

  11. Relevancy Ranking of Satellite Dataset Search Results

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Quinn, Patrick; Norton, James

    2017-01-01

    As the Variety of Earth science datasets increases, science researchers find it more challenging to discover and select the datasets that best fit their needs. The most common way for search providers to address this problem is to rank the datasets returned for a query by their likely relevance to the user. Large web page search engines typically use text matching supplemented with reverse link counts, semantic annotations and user intent modeling. However, this produces uneven results when applied to dataset metadata records simply externalized as a web page. Fortunately, data and search providers have decades of experience in serving data user communities, allowing them to form heuristics that leverage the structure in the metadata together with knowledge about the user community. Some of these heuristics include specific ways of matching the user input to the essential measurements in the dataset and determining overlaps of time range and spatial areas. Heuristics based on the novelty of the datasets can prioritize later, better versions of data over similar predecessors. And knowledge of how different user types and communities use data can be brought to bear in cases where characteristics of the user (discipline, expertise) or their intent (applications, research) can be divined. The Earth Observing System Data and Information System has begun implementing some of these heuristics in the relevancy algorithm of its Common Metadata Repository search engine.

  12. Reverse-engineering the genetic circuitry of a cancer cell with predicted intervention in chronic lymphocytic leukemia.

    PubMed

    Vallat, Laurent; Kemper, Corey A; Jung, Nicolas; Maumy-Bertrand, Myriam; Bertrand, Frédéric; Meyer, Nicolas; Pocheville, Arnaud; Fisher, John W; Gribben, John G; Bahram, Seiamak

    2013-01-08

    Cellular behavior is sustained by genetic programs that are progressively disrupted in pathological conditions--notably, cancer. High-throughput gene expression profiling has been used to infer statistical models describing these cellular programs, and development is now needed to guide orientated modulation of these systems. Here we develop a regression-based model to reverse-engineer a temporal genetic program, based on relevant patterns of gene expression after cell stimulation. This method integrates the temporal dimension of biological rewiring of genetic programs and enables the prediction of the effect of targeted gene disruption at the system level. We tested the performance accuracy of this model on synthetic data before reverse-engineering the response of primary cancer cells to a proliferative (protumorigenic) stimulation in a multistate leukemia biological model (i.e., chronic lymphocytic leukemia). To validate the ability of our method to predict the effects of gene modulation on the global program, we performed an intervention experiment on a targeted gene. Comparison of the predicted and observed gene expression changes demonstrates the possibility of predicting the effects of a perturbation in a gene regulatory network, a first step toward an orientated intervention in a cancer cell genetic program.
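
    As a schematic illustration of the general idea only (not the authors' estimator), the Python sketch below fits a linear time-lagged regression model to an expression time course and then re-runs the fitted dynamics with one gene clamped to zero to mimic an in silico knock-down; the model form and the ridge parameter are assumptions for the example.

    import numpy as np

    def fit_dynamics(X, alpha=1.0):
        # X: (T, n_genes) expression time course; returns A with x[t+1] ~ A @ x[t],
        # estimated by ridge regression across consecutive time points
        X0, X1 = X[:-1], X[1:]
        n = X.shape[1]
        return np.linalg.solve(X0.T @ X0 + alpha * np.eye(n), X0.T @ X1).T

    def simulate(A, x0, steps, knockout=None):
        # propagate the fitted dynamics, optionally clamping one gene's expression to zero
        x = np.array(x0, float)
        traj = [x.copy()]
        for _ in range(steps):
            x = A @ x
            if knockout is not None:
                x[knockout] = 0.0
            traj.append(x.copy())
        return np.array(traj)

    # predicted system-level effect of disrupting gene 3 (hypothetical index):
    # A = fit_dynamics(expression_time_course)
    # baseline = simulate(A, expression_time_course[0], 10)
    # perturbed = simulate(A, expression_time_course[0], 10, knockout=3)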

  13. Parameter estimation in spiking neural networks: a reverse-engineering approach.

    PubMed

    Rostro-Gonzalez, H; Cessac, B; Vieville, T

    2012-04-01

    This paper presents a reverse engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network with spiking neurons, where synaptic transmission has delays, modeled as a neural network of the generalized integrate-and-fire type. Our approach aims at bypassing the fact that parameter estimation in SNNs becomes a non-deterministic polynomial-time hard problem when delays are to be considered. Here, the estimation has been reformulated as a linear programming (LP) problem so that the solution can be obtained in polynomial time. Besides, the LP formulation makes explicit the fact that the reverse engineering of a neural network can be performed from the observation of the spike times. Furthermore, we point out how the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations as a practical method to 'program' a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.

  14. Topology reconstruction for B-Rep modeling from 3D mesh in reverse engineering applications

    NASA Astrophysics Data System (ADS)

    Bénière, Roseline; Subsol, Gérard; Gesquière, Gilles; Le Breton, François; Puech, William

    2012-03-01

    Nowadays, most manufactured objects are designed using CAD (Computer-Aided Design) software. Nevertheless, for visualization, data exchange or manufacturing applications, the geometric model has to be discretized into a 3D mesh composed of a finite number of vertices and edges. But, in some cases, the initial model may be lost or unavailable. In other cases, the 3D discrete representation may be modified, for example after a numerical simulation, and no longer corresponds to the initial model. A reverse engineering method is then required to reconstruct a 3D continuous representation from the discrete one. In previous work, we have presented a new approach for 3D geometric primitive extraction. In this paper, to complete our automatic and comprehensive reverse engineering process, we propose a method to construct the topology of the retrieved object. To reconstruct a B-Rep model, a new formalism is introduced to define the adjacency relations. Then a new process is used to construct the boundaries of the object. The whole process is tested on 3D industrial meshes and provides a solution for recovering B-Rep models.

  15. Data and Analysis Center for Software: An IAC in Transition.

    DTIC Science & Technology

    1983-06-01

    Reviewed and approved for publication by John J. Marciniak, Colonel, USAF, Chief, Command and Control Division. RADC Project Engineer: John Palaimo (COEE). Keywords: Software Engineering, Software Technology, Information Analysis Center, Database, Scientific and Technical Information.

  16. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging

    PubMed Central

    He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates that is caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm that is proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high-order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data. PMID:29112151

  17. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
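
    A compact Python sketch of the spectral reordering idea follows: sort the vertices by the Fiedler vector of the Laplacian associated with the sparse symmetric matrix and compare the resulting envelope with SciPy's reverse Cuthill-McKee ordering. The dense eigendecomposition and the simple envelope measure are simplifications made for brevity, not the paper's algorithm.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import laplacian, reverse_cuthill_mckee

    def spectral_order(A):
        # A: sparse symmetric matrix (CSR); permutation sorting the Fiedler vector
        pattern = (abs(A) > 0).astype(float)     # keep only the sparsity pattern
        pattern.setdiag(0)                       # drop self-loops before the Laplacian
        L = laplacian(pattern.tocsr()).toarray()
        _, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
        return np.argsort(vecs[:, 1])            # second eigenvector = Fiedler vector

    def envelope(A, perm):
        # sum over rows of the distance from the leftmost nonzero to the diagonal
        B = A[perm][:, perm].tocoo()
        first = {}
        for i, j in zip(B.row, B.col):
            first[i] = min(first.get(i, i), j)
        return sum(i - first.get(i, i) for i in range(B.shape[0]))

    # comparison on some sparse symmetric CSR matrix A:
    # print(envelope(A, spectral_order(A)), envelope(A, reverse_cuthill_mckee(A)))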

  18. Etravirine and Rilpivirine Drug Resistance Among HIV-1 Subtype C Infected Children Failing Non-Nucleoside Reverse Transcriptase Inhibitor-Based Regimens in South India.

    PubMed

    Saravanan, Shanmugam; Kausalya, Bagavathi; Gomathi, Selvamurthi; Sivamalar, Sathasivam; Pachamuthu, Balakrishnan; Selvamuthu, Poongulali; Pradeep, Amrose; Sunil, Solomon; Mothi, Sarvode N; Smith, Davey M; Kantor, Rami

    2017-06-01

    We analyzed the reverse transcriptase (RT) region of the HIV-1 pol gene from 97 HIV-infected children who were identified as failing first-line therapy that included first-generation non-nucleoside RT inhibitors (Nevirapine and Efavirenz) for at least 6 months. We found that 54% and 65% of the children had genotypically predicted resistance to the second-generation non-nucleoside RT inhibitor drugs Etravirine (ETR) and Rilpivirine, respectively. These cross-resistance mutations may compromise future NNRTI-based regimens, especially in resource-limited settings. To complement these investigations, we also analyzed the sequences with the Stanford database, Monogram weighted score, and DUET weighted score algorithms for ETR susceptibility and found almost perfect agreement between the three algorithms in predicting ETR susceptibility from genotypic data.

  19. Cross-correlation least-squares reverse time migration in the pseudo-time domain

    NASA Astrophysics Data System (ADS)

    Li, Qingyang; Huang, Jianping; Li, Zhenchun

    2017-08-01

    The least-squares reverse time migration (LSRTM) method with higher image resolution and amplitude is becoming increasingly popular. However, the LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, large computational cost and mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of the conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relieves the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction, thus it can reduce the vertical grid points and memory requirements used during computation, which makes our method more computationally efficient than the standard implementation. Besides, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability for complex models.
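
    The zero-lag normalized cross-correlation misfit that the objective builds on can be sketched in a few lines; this is an illustrative form of such an objective, not the authors' implementation.

    import numpy as np

    def ncc_misfit(d_pred, d_obs, eps=1e-12):
        # 1 - <d_pred, d_obs> / (||d_pred|| ||d_obs||): sensitive only to the
        # similarity of the predicted and observed data, not to their amplitudes
        num = np.sum(d_pred * d_obs)
        den = np.linalg.norm(d_pred) * np.linalg.norm(d_obs) + eps
        return 1.0 - num / den

    # scaling the predicted data leaves the misfit unchanged:
    # ncc_misfit(2.0 * d_pred, d_obs) == ncc_misfit(d_pred, d_obs)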

  20. The Algorithm Theoretical Basis Document for Level 1A Processing

    NASA Technical Reports Server (NTRS)

    Jester, Peggy L.; Hancock, David W., III

    2012-01-01

    The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units. Required scale factors, bias values, and coefficients are defined in this document. Additionally, required quality assurance and browse products are defined in this document.
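
    A generic sketch of a counts-to-engineering-units conversion of the kind this document specifies is shown below; the polynomial form and the example coefficients are assumptions for illustration, since the actual scale factors, bias values and coefficients are defined per data field in the document itself.

    import numpy as np

    def counts_to_engineering_units(counts, coefficients):
        # evaluate value = c0 + c1*counts + c2*counts**2 + ... for raw instrument counts
        counts = np.asarray(counts, dtype=float)
        return sum(c * counts ** k for k, c in enumerate(coefficients))

    # e.g. a linear conversion with a bias c0 and a scale factor c1 (hypothetical values):
    # temperature_K = counts_to_engineering_units(raw_counts, [273.15, 0.01])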
