Sample records for "synthesis algorithm based"

  1. New multirate sampled-data control law structure and synthesis algorithm

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng

    1992-01-01

    A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.
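
    A rough sketch of the parameter-optimization idea in Python: the controller gains are treated as free parameters and a quadratic cost accumulated over a simulated closed-loop run is minimized numerically. This is a drastically simplified single-rate analogue with placeholder matrices A, B, C and gain K, not the paper's multirate structure or algorithm.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def lq_cost(k_flat, A, B, C, x0, steps=200):
        """Quadratic cost of static output feedback u = -K y over a finite run."""
        K = k_flat.reshape(B.shape[1], C.shape[0])
        x, J = x0.copy(), 0.0
        for _ in range(steps):
            u = -K @ (C @ x)
            J += x @ x + u @ u          # state and control penalties
            x = A @ x + B @ u           # discretized plant update
        return J

    # Example: optimize the gain of a 2-state, 1-input, 1-output toy plant.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])
    res = minimize(lq_cost, np.zeros(1), args=(A, B, C, np.array([1.0, 0.0])))
    ```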

  2. AutoBayes Program Synthesis System: System Internals

    NASA Technical Reports Server (NTRS)

    Schumann, Johann Martin

    2011-01-01

    This lecture combines the theoretical background of schema-based program synthesis with a hands-on study of a powerful, open-source program synthesis system (AutoBayes). Schema-based program synthesis is a popular approach to program synthesis. The lecture provides an introduction to this topic and discusses how the technology can be used to generate customized algorithms. The synthesis of advanced numerical algorithms requires a powerful symbolic (algebra) system. Its task is to symbolically solve equations, simplify expressions, or symbolically calculate derivatives (among others) so that the synthesized algorithms become as efficient as possible. We will discuss the use and importance of the symbolic system for synthesis. Any synthesis system is a large and complex piece of code. In this lecture, we will study AutoBayes in detail. AutoBayes has been developed at NASA Ames and has been made open source. It takes a compact statistical specification and generates a customized data analysis algorithm (in C/C++) from it. AutoBayes is written in SWI Prolog and uses many concepts from rewriting, logic, functional, and symbolic programming. We will discuss the system architecture, the schema library, and the extensive support infrastructure. Practical hands-on experiments and exercises will enable the student to gain insight into a realistic program synthesis system and provide the knowledge to use, modify, and extend AutoBayes.
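
    To make "schema-based" concrete, here is a toy sketch (not AutoBayes itself, which works on statistical specifications in Prolog): a schema pairs an applicability test with a code template, and synthesis instantiates the first schema that matches the spec. The spec format and schema contents are made up for illustration.

    ```python
    # Toy schema-based synthesis: each schema pairs an applicability test with
    # a code template that is instantiated from the specification.
    SCHEMAS = [
        {
            "applies": lambda spec: spec["model"] == "gaussian" and spec["query"] == "mean",
            "emit": lambda spec: (
                f"double {spec['name']}(const double* x, int n) {{\n"
                "    double s = 0.0;\n"
                "    for (int i = 0; i < n; ++i) s += x[i];\n"
                "    return s / n;  /* closed-form ML estimate */\n"
                "}\n"
            ),
        },
    ]

    def synthesize(spec):
        for schema in SCHEMAS:
            if schema["applies"](spec):
                return schema["emit"](spec)
        raise ValueError("no applicable schema")

    print(synthesize({"model": "gaussian", "query": "mean", "name": "ml_mean"}))
    ```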

  3. Multirate sampled-data yaw-damper and modal suppression system design

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1990-01-01

    A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter-optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter-optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated on a simpler problem involving control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter-optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.

  4. Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems

    DTIC Science & Technology

    2007-05-01

    these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the...translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to... [remainder is residue of a tool-flow figure: C/MATLAB code, HDL, logic synthesis, netlist, place and route, bitstream, targeting hard and soft processor cores]

  5. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms

    PubMed Central

    Hernández-Ocaña, Betania; Pozos-Parra, Ma. Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators aims to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optima is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, compared with the mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem. PMID:27057156

  6. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms.

    PubMed

    Hernández-Ocaña, Betania; Pozos-Parra, Ma Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators aims to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optima is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, compared with the mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem.
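
    A minimal sketch of the two-swim idea inside a chemotaxis-style loop, assuming a generic real-valued fitness to minimize. The step sizes, the 50/50 operator choice, and the greedy acceptance rule are illustrative stand-ins, not the paper's exact operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def two_swim_chemotaxis(bacteria, fitness, step_explore=0.5, step_exploit=0.05):
        """One chemotaxis pass mixing an exploratory swim with a fine local swim."""
        for i, x in enumerate(bacteria):
            d = rng.standard_normal(x.size)
            d /= np.linalg.norm(d)                    # random tumble direction
            step = step_explore if rng.random() < 0.5 else step_exploit
            trial = x + step * d                      # exploration vs. fine move
            if fitness(trial) < fitness(x):           # keep only improving moves
                bacteria[i] = trial
        return bacteria

    # Example on a toy objective:
    pop = [rng.uniform(-5, 5, size=2) for _ in range(10)]
    for _ in range(100):
        pop = two_swim_chemotaxis(pop, lambda x: np.sum(x**2))
    ```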

  7. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    NASA Astrophysics Data System (ADS)

    Babayigit, B.

    2018-05-01

    Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two cases (with and without a centre element) for two three-ring CCAA designs (with 4-, 6-, 8-element and 8-, 10-, 12-element rings). The radiation pattern of each design case is obtained by finding optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms the other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.
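
    For reference, the quantity such optimizers evaluate: the CCAA array factor in the azimuth plane, from which the peak sidelobe level is extracted as the cost. The ring radii, element counts, and weights below are placeholders; this sketches the objective, not the dragonfly algorithm itself.

    ```python
    import numpy as np

    def ccaa_pattern_db(weights, radii, counts, phi, k=2 * np.pi):
        """Normalized CCAA array factor (dB) at theta = 90 deg; radii in wavelengths."""
        af = np.zeros_like(phi, dtype=complex)
        w = iter(weights)
        for r, n in zip(radii, counts):
            for phim in 2 * np.pi * np.arange(n) / n:   # uniform ring placement
                af += next(w) * np.exp(1j * k * r * np.cos(phi - phim))
        mag = np.abs(af)
        return 20 * np.log10(mag / mag.max() + 1e-12)

    phi = np.linspace(-np.pi, np.pi, 721)
    pattern = ccaa_pattern_db(np.ones(18), radii=[0.5, 1.0, 1.5],
                              counts=[4, 6, 8], phi=phi)
    ```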

  8. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

    A method using free Web-based tools is presented for designing optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since molecular beacon performance depends on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public-domain algorithm described here may be usefully employed to aid in molecular beacon design.
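
    The spreadsheet's ranking idea can be mimicked with simple per-sequence scores. The two terms below (GC balance and a crude self-complementarity flag) are hypothetical stand-ins for the published criteria, which rely on mfold structural predictions.

    ```python
    def revcomp(s):
        return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

    def hairpin_prone(seq, k=4):
        """Crude check: a k-mer recurring in the reverse complement hints at hairpins."""
        rc = revcomp(seq)
        return any(seq[i:i + k] in rc for i in range(len(seq) - k + 1))

    def loop_score(loop):
        gc = (loop.count("G") + loop.count("C")) / len(loop)
        return -abs(gc - 0.5) - (1.0 if hairpin_prone(loop) else 0.0)

    candidates = ["ATGGCATTCAGGTCAA", "GCGCGCATATGCGCGC"]
    print(max(candidates, key=loop_score))   # rank candidates, best first
    ```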

  9. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H2-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective, degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes clearly illustrate the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst-case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
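
    For context, the standard route to an H2-type quadratic cost for a stable LTI system solves a Lyapunov equation; the paper's contribution is a robust evaluation (via Pade approximation of the matrix exponential) that also covers the defective, degenerate cases this sketch does not address.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def h2_like_cost(A, B, Q):
        """trace(Q X), where X solves A X + X A^T + B B^T = 0 (A must be stable)."""
        X = solve_continuous_lyapunov(A, -B @ B.T)
        return float(np.trace(Q @ X))

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    print(h2_like_cost(A, B, Q=np.eye(2)))
    ```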

  10. Exact Synthesis of Reversible Circuits Using A* Algorithm

    NASA Astrophysics Data System (ADS)

    Datta, K.; Rathi, G. K.; Sengupta, I.; Rahaman, H.

    2015-06-01

    With the growing emphasis on low-power design methodologies, and the result that theoretically zero power dissipation is possible only if computations are information-lossless, design and synthesis of reversible logic circuits have become very important in recent years. Reversible logic circuits are also important in the context of quantum computing, where the basic operations are reversible in nature. Several synthesis methodologies for reversible circuits have been reported. Some of these methods are termed exact, where the motivation is to obtain the minimum-gate realization of a given reversible function. These methods are computationally very intensive and are able to synthesize only very small functions. There are other methods, based on function transformations or higher-level representations of functions like binary decision diagrams or exclusive-or sum-of-products, that are able to handle much larger circuits without any guarantee of optimality or near-optimality. Design of exact synthesis algorithms is interesting in this context, because they set benchmarks against which other methods can be compared. This paper proposes an exact synthesis approach based on an iterative-deepening version of the A* algorithm using the multiple-control Toffoli gate library. Experimental results are presented with comparisons against other exact and some heuristic-based synthesis approaches.
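
    A compact sketch of this style of search for 3-line circuits, assuming positive controls only and unit cost per gate: iterative-deepening A* looks for a gate sequence mapping the given permutation to the identity, guided by an admissible Hamming-distance heuristic. The paper's algorithm and cost model may differ in detail.

    ```python
    from itertools import combinations

    N = 3                                    # circuit lines
    IDENT = tuple(range(1 << N))

    def gate_library():
        """All multiple-control Toffoli gates with positive controls on N lines."""
        gates = []
        for t in range(N):
            rest = [q for q in range(N) if q != t]
            for r in range(N):
                gates.extend((c, t) for c in combinations(rest, r))
        return gates

    def apply_gate(gate, perm):
        ctrls, t = gate
        return tuple(v ^ (1 << t) if all(v >> c & 1 for c in ctrls) else v
                     for v in perm)

    def h(perm):
        """ceil(total Hamming distance / 2^N): one gate flips at most 2^N output bits."""
        d = sum(bin(v ^ i).count("1") for i, v in enumerate(perm))
        return -(-d // (1 << N))

    def ida_star(perm, max_cost=8):
        gates = gate_library()
        def dfs(p, g, bound, path):
            f = g + h(p)
            if f > bound:
                return f, None
            if p == IDENT:
                return g, path
            nxt = float("inf")
            for gt in gates:
                t, sol = dfs(apply_gate(gt, p), g + 1, bound, path + [gt])
                if sol is not None:
                    return t, sol
                nxt = min(nxt, t)
            return nxt, None
        bound = h(perm)
        while bound <= max_cost:
            bound, sol = dfs(perm, 0, bound, [])
            if sol is not None:
                return sol[::-1]   # Toffolis are self-inverse: reversed list realizes perm
        return None

    print(ida_star((0, 1, 2, 3, 4, 5, 7, 6)))   # one Toffoli: controls (1, 2), target 0
    ```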

  11. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
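
    For readers unfamiliar with the transform: the 2-D DHT of a real array follows directly from the FFT, since cas(x) = cos(x) + sin(x). The paper's fast parallel convolver banks build on dedicated fast DHT algorithms; this is just the defining computation.

    ```python
    import numpy as np

    def dht2(x):
        """2-D discrete Hartley transform of a real array: H = Re(F) - Im(F)."""
        F = np.fft.fft2(x)
        return F.real - F.imag

    img = np.random.rand(8, 8)
    H = dht2(img)
    assert np.allclose(dht2(H) / img.size, img)   # the DHT is self-inverse up to 1/N
    ```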

  12. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is a key component of automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized with the graph cuts technique, and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments show that with the proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182

  13. A Program Structure for Event-Based Speech Synthesis by Rules within a Flexible Segmental Framework.

    ERIC Educational Resources Information Center

    Hill, David R.

    1978-01-01

    A program structure based on recently developed techniques for operating system simulation has the required flexibility for use as a speech synthesis algorithm research framework. This program makes synthesis possible with less rigid time and frequency-component structure than simpler schemes. It also meets real-time operation and memory-size…

  14. Implementation of the block-Krylov boundary flexibility method of component synthesis

    NASA Technical Reports Server (NTRS)

    Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.

    1993-01-01

    A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
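
    The recurrence itself is short. A numpy sketch with hypothetical stiffness K, mass M, and boundary-flexibility seed vectors F_b; note every step is a static solve, so no eigenproblem is needed:

    ```python
    import numpy as np

    def block_krylov_basis(K, M, F_b, blocks=3):
        """Boundary-flexibility-seeded block-Krylov Ritz vectors (orthonormalized)."""
        X = np.linalg.solve(K, F_b)              # seed: static flexibility shapes
        basis = [X]
        for _ in range(blocks - 1):
            X = np.linalg.solve(K, M @ X)        # recurrence X_{j+1} = K^{-1} M X_j
            basis.append(X)
        Q, _ = np.linalg.qr(np.hstack(basis))    # orthonormalize the stacked blocks
        return Q                                 # reduced model: Q.T @ K @ Q, Q.T @ M @ Q
    ```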

  15. Spectrum synthesis for a spectrally tunable light source based on a DMD-convex grating Offner configuration

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Pan, Qiao; Shen, Weimin

    2016-09-01

    As one kind of light source simulation device, spectrally tunable light sources are able to generate outputs with a specific spectral shape and radiant intensity according to different application requirements, and they are in strong demand in many fields of the national economy and the defense industry. Compared with LED-type spectrally tunable light sources, one based on a DMD-convex grating Offner configuration has the advantages of high spectral resolution, strong digital controllability, high spectrum synthesis accuracy, etc. Spectrum synthesis based on spectrum matching is the key step by which this type of light source achieves target spectrum outputs. An improved spectrum synthesis algorithm based on linear least-squares initialization and Levenberg-Marquardt iterative optimization is proposed in this paper on the basis of an in-depth study of the spectrum matching principle. The effectiveness of the proposed method is verified by a series of simulations and experimental work.
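
    A sketch of the two-stage fit, assuming a matrix S whose columns are the measured spectra of the individual tunable channels and a target spectrum t. scipy's Levenberg-Marquardt driver does not accept bounds, so this sketch substitutes the closely related trust-region reflective solver to keep channel weights in [0, 1].

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def synthesize_weights(S, t):
        """Linear least-squares initialization followed by iterative refinement."""
        w0, *_ = np.linalg.lstsq(S, t, rcond=None)   # stage 1: unconstrained LLS
        w0 = np.clip(w0, 0.0, 1.0)                   # weights are channel duty cycles
        res = least_squares(lambda w: S @ w - t, w0,
                            bounds=(0.0, 1.0))       # stage 2: bounded refinement
        return res.x
    ```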

  16. Film grain synthesis and its application to re-graining

    NASA Astrophysics Data System (ADS)

    Schallauer, Peter; Mörzinger, Roland

    2006-01-01

    Digital film restoration and special effects compositing increasingly require automatic procedures for movie re-graining. Missing or inhomogeneous grain decreases perceived quality. For the purpose of grain synthesis, an existing texture synthesis algorithm has been evaluated and optimized. We show that this algorithm can produce synthetic grain that is perceptually similar to a given grain template, has high spatial and temporal variation, and can be applied to multi-spectral images. Furthermore, a re-graining application framework is proposed, which synthesises artificial grain from an input grain template and composites it with the original image content. Due to its modular approach, this framework supports manual as well as automatic re-graining applications. Two example applications are presented, one for re-graining an entire movie and one for fully automatic re-graining of image regions produced by restoration algorithms. The low computational cost of the proposed algorithms allows application in industrial-grade software.
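
    As a baseline for what grain synthesis must achieve, white noise can be shaped to the amplitude spectrum of a grain template. The paper's evaluated texture synthesis algorithm is more sophisticated (and temporal); this only shows the flavor of template-driven synthesis for a single frame.

    ```python
    import numpy as np

    def grain_like(template, shape, rng=np.random.default_rng(0)):
        """Random-phase noise with the template's amplitude spectrum (one frame)."""
        amp = np.abs(np.fft.fft2(template, s=shape))
        noise = np.fft.fft2(rng.standard_normal(shape))
        g = np.fft.ifft2(amp * noise / (np.abs(noise) + 1e-12)).real
        return g * template.std() / g.std()          # match grain contrast
    ```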

  17. Digital Sound Synthesis Algorithms: a Tutorial Introduction and Comparison of Methods

    NASA Astrophysics Data System (ADS)

    Lee, J. Robert

    The objectives of the dissertation are to provide both a compendium of sound-synthesis methods with detailed descriptions and sound examples, and a comparison of the relative merits of each method based on ease of use, observed sound quality, execution time, and data storage requirements. The methods are classified under the general headings of wavetable-lookup synthesis, additive synthesis, subtractive synthesis, nonlinear methods, and physical modelling. The nonlinear methods comprise a large group that ranges from the well-known frequency-modulation synthesis to waveshaping. The final category explores computer modelling of real musical instruments and includes numerical and analytical solutions to the classical wave equation of motion, along with some of the more sophisticated time-domain models that are possible through the prudent combination of simpler synthesis techniques. The dissertation is intended to be understandable by a musician who is mathematically literate but who does not necessarily have a background in digital signal processing. With this limitation in mind, a brief and somewhat intuitive description of digital sampling theory is provided in the introduction. Other topics such as filter theory are discussed as the need arises. By employing each of the synthesis methods to produce the same type of sound, interesting comparisons can be made. For example, a struck string sound, such as that typical of a piano, can be produced by algorithms in each of the synthesis classifications. Many sounds, however, are peculiar to a single algorithm and must be examined independently. Psychoacoustic studies were conducted as an aid in the comparison of the sound quality of several implementations of the synthesis algorithms. Other psychoacoustic experiments were conducted to supplement the established notions of which timbral issues are important in the re-synthesis of the sounds of acoustic musical instruments.

  18. Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.

    PubMed

    Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian

    2011-01-13

    Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.

  19. Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm

    PubMed Central

    2010-01-01

    Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure−activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods. PMID:24900251
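
    The robustness-to-missing-data point reduces to a simple rule in a generational loop: compounds whose synthesis failed return no fitness and are simply excluded from selection. A generic sketch (not the authors' operators or molecular encoding):

    ```python
    import random

    def ga_generation(pop, fitness, crossover, mutate, n_elite=4):
        """One generation; fitness(m) returning None marks a failed synthesis."""
        scored = [(f, m) for m in pop if (f := fitness(m)) is not None]
        scored.sort(key=lambda fm: fm[0], reverse=True)   # maximize activity
        parents = [m for _, m in scored[:n_elite]]
        children = []
        while len(children) < len(pop) - n_elite:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        return parents + children
    ```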

  20. Ant colony optimisation-direct cover: a hybrid ant colony direct cover technique for multi-level synthesis of multiple-valued logic functions

    NASA Astrophysics Data System (ADS)

    Abd-El-Barr, Mostafa

    2010-12-01

    The use of non-binary (multiple-valued) logic in the synthesis of digital systems can lead to savings in chip area. Advances in very large scale integration (VLSI) technology have enabled the successful implementation of multiple-valued logic (MVL) circuits. A number of heuristic algorithms for the synthesis of (near-)minimal sum-of-products (two-level) realisations of MVL functions have been reported in the literature. The direct cover (DC) technique is one such algorithm. The ant colony optimisation (ACO) algorithm is a meta-heuristic that uses constructive greediness to explore a large solution space in finding (near-)optimal solutions. The ACO algorithm mimics the behaviour of real-world ants in using the shortest path to reach food sources. We have previously introduced an ACO-based heuristic for the synthesis of two-level MVL functions. In this article, we introduce the ACO-DC hybrid technique for the synthesis of multi-level MVL functions. The basic idea is to use an ant to decompose a given MVL function into a number of levels and then synthesise each sub-function using a DC-based technique. The results obtained using the proposed approach are compared to those obtained using existing techniques reported in the literature. A benchmark set consisting of 50,000 randomly generated 2-variable 4-valued functions is used in the comparison. The proposed ACO-DC technique is shown to produce efficient realisations in terms of the average number of gates (as a measure of chip area) needed for the synthesis of a given MVL function.

  21. Visual difference metric for realistic image synthesis

    NASA Astrophysics Data System (ADS)

    Bolin, Mark R.; Meyer, Gary W.

    1999-05-01

    An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality, and it should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended for color, including the effects of chromatic aberration. Comparisons are made between the execution times and visual difference maps for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.

  22. A statistical-based scheduling algorithm in automated data path synthesis

    NASA Technical Reports Server (NTRS)

    Jeon, Byung Wook; Lursinsap, Chidchanok

    1992-01-01

    In this paper, we propose a new heuristic scheduling algorithm based on statistical analysis of the cumulative frequency distribution of operations among control steps. It has a tendency to escape from local minima and therefore to reach a globally optimal solution. The presented algorithm considers real-world constraints such as chained operations, multicycle operations, and pipelined data paths. The results of the experiment show that it gives optimal solutions, even though it is greedy in nature.

  23. Decorating surfaces with bidirectional texture functions.

    PubMed

    Zhou, Kun; Du, Peng; Wang, Lifeng; Matsushita, Yasuyuki; Shi, Jiaoying; Guo, Baining; Shum, Heung-Yeung

    2005-01-01

    We present a system for decorating arbitrary surfaces with bidirectional texture functions (BTF). Our system generates BTFs in two steps. First, we automatically synthesize a BTF over the target surface from a given BTF sample. Then, we let the user interactively paint BTF patches onto the surface such that the painted patches seamlessly integrate with the background patterns. Our system is based on a patch-based texture synthesis approach known as quilting. We present a graphcut algorithm for BTF synthesis on surfaces, and the algorithm works well for a wide variety of BTF samples, including those which present problems for existing algorithms. We also describe a graphcut texture painting algorithm for creating new surface imperfections (e.g., dirt, cracks, scratches) from existing imperfections found in input BTF samples. Using these algorithms, we can decorate surfaces with real-world textures that have spatially-variant reflectance, fine-scale geometry details, and surface imperfections. A particularly attractive feature of BTF painting is that it allows us to capture imperfections of real materials and paint them onto geometry models. We demonstrate the effectiveness of our system with examples.

  24. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus; that is, they suggest possible ways to attack the problem.

  25. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two ways. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using the pressure matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
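
    The pressure-matching step with Tikhonov regularization amounts to a damped least-squares solve for the loudspeaker weights, given a matrix G of transfer functions from speakers to control points and a target pressure vector p (names here are generic, not the paper's notation):

    ```python
    import numpy as np

    def pressure_match(G, p, lam=1e-3):
        """Minimize ||G w - p||^2 + lam ||w||^2 for a complex transfer matrix G."""
        GhG = G.conj().T @ G
        return np.linalg.solve(GhG + lam * np.eye(GhG.shape[0]), G.conj().T @ p)
    ```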

  26. Machine-Vision Aids for Improved Flight Operations

    NASA Technical Reports Server (NTRS)

    Menon, P. K.; Chatterji, Gano B.

    1996-01-01

    The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use the available information sources for navigation, such as the airport lighting layout, attitude sensors, and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that airport lighting geometry is known and that images of airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family of solution methods consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family of methods comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions while Algorithms 5 through 7 belong to the second family of solutions. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms developed.

  27. Heuristic-based scheduling algorithm for high level synthesis

    NASA Technical Reports Server (NTRS)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

    A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The schedule time of this algorithm is almost independent of the length of the operators' mobilities, as can be seen from the benchmark example (a fifth-order digital elliptical wave filter) presented, where the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.

  28. Finding the K best synthesis plans.

    PubMed

    Fagerberg, Rolf; Flamm, Christoph; Kianian, Rojin; Merkle, Daniel; Stadler, Peter F

    2018-04-05

    In synthesis planning, the goal is to synthesize a target molecule from available starting materials, possibly optimizing costs such as price or environmental impact of the process. Current algorithmic approaches to synthesis planning are usually based on selecting a bond set and finding a single good plan among those induced by it. We demonstrate that synthesis planning can be phrased as a combinatorial optimization problem on hypergraphs by modeling individual synthesis plans as directed hyperpaths embedded in a hypergraph of reactions (HoR) representing the chemistry of interest. As a consequence, a polynomial time algorithm to find the K shortest hyperpaths can be used to compute the K best synthesis plans for a given target molecule. Having K good plans to choose from has many benefits: it makes the synthesis planning process much more robust when in later stages adding further chemical detail, it allows one to combine several notions of cost, and it provides a way to deal with imprecise yield estimates. A bond set gives rise to a HoR in a natural way. However, our modeling is not restricted to bond set based approaches-any set of known reactions and starting materials can be used to define a HoR. We also discuss classical quality measures for synthesis plans, such as overall yield and convergency, and demonstrate that convergency has a built-in inconsistency which could render its use in synthesis planning questionable. Decalin is used as an illustrative example of the use and implications of our results.

  29. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions for an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  30. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth–Elevation mount antennas, are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions for an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  31. Modal control theory and application to aircraft lateral handling qualities design

    NASA Technical Reports Server (NTRS)

    Srinathkumar, S.

    1978-01-01

    A multivariable synthesis procedure based on eigenvalue/eigenvector assignment is reviewed and employed to develop a systematic design procedure to meet the lateral handling qualities design objectives of a fighter aircraft over a wide range of flight conditions. The closed-loop modal characterization developed provides significant insight into the design process and plays a pivotal role in the synthesis of robust feedback systems. The simplicity of the synthesis algorithm yields an efficient computer-aided interactive design tool for flight control system synthesis.
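
    Full eigenvalue/eigenvector assignment shapes both the closed-loop spectrum and the mode shapes; the eigenvalue half of that is available off the shelf. A toy state-feedback example (the matrices are placeholders, not the aircraft model):

    ```python
    import numpy as np
    from scipy.signal import place_poles

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    K = place_poles(A, B, [-4.0, -5.0]).gain_matrix   # assign closed-loop eigenvalues
    print(np.linalg.eigvals(A - B @ K))               # -> approximately [-4, -5]
    ```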

  32. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine.

    PubMed

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-02-06

    In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.

  33. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine

    PubMed Central

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-01-01

    In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184
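
    The motion-synthesis core, Gaussian process regression from exercise parameters to motion parameters, can be sketched with scikit-learn. The inputs, outputs, and kernel below are placeholders rather than the authors' features:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.random((50, 2))          # e.g. squat depth, stance width (normalized)
    Y = rng.random((50, 12))         # stacked joint-angle parameters per trial
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4).fit(X, Y)
    motion = gpr.predict(np.array([[0.5, 0.4]]))   # synthesize motion for new inputs
    ```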

  34. Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time-series technique that makes use of both frequency- and time-domain information of vibration signals. Such information is incorporated in a time-varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified least-squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the proposed vibration synthesis model is mainly used as a linear time-varying predictor. The health condition of the rotating machine is monitored by checking the residual between the predicted and measured signals. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transfers the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has a low computational burden and does not need a priori knowledge of the machine under the no-fault condition, which makes the algorithm ideal for on-line fault detection. The method is validated using both numerical simulation and practical application data, and the fault detection results are compared with the commonly adopted autoregressive (AR) and autoregressive minimum entropy deconvolution (ARMED) methods to verify the feasibility and performance of the SSBAT method.
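
    The prediction step reduces to a linear least-squares fit once the sinusoid frequencies (e.g. shaft harmonics) are fixed; the residual between measurement and prediction is then monitored. A stationary, non-adaptive sketch of that core (the full method updates the model over time):

    ```python
    import numpy as np

    def sinusoid_predict(y, t, freqs):
        """LS fit of y(t) ~ c0 + sum_k a_k cos(2 pi f_k t) + b_k sin(2 pi f_k t)."""
        cols = [np.ones_like(t)]
        for f in freqs:
            cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
        Phi = np.column_stack(cols)
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return Phi @ theta             # a large residual y - prediction flags a fault
    ```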

  35. 2006 Interferometry Imaging Beauty Contest

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Ireland, Michael; Monnier, John D.; Thiebaut, Eric; Rengaswamy, Sridharan; Baron, Fabien; Young, John S.; Kraus, Stefan; et al.

    2006-01-01

    We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Five different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the OI-FITS format. The data are calibrated power spectra and bispectra measured with an array intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.

  36. Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms

    DTIC Science & Technology

    1995-07-01

    would be in the spirit of the Lempel-Ziv algorithm. The decoder would have to be aware of changes in the dictionary, but depending on the nature of the... [table-of-contents residue omitted: 3.4 "A General Vector Compression Algorithm Based on Frames", 3.4.1 "Design Considerations"] ...Along with exploring general properties of matching pursuit, we are interested in its application to compressing data vectors in R^N. A general
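
    Since the report centers on matching pursuit over overcomplete frames, here is a minimal version of that greedy decomposition, assuming D holds unit-norm atoms as columns:

    ```python
    import numpy as np

    def matching_pursuit(x, D, n_iter=10):
        """Greedily decompose x over dictionary D; returns coefficients and residual."""
        r, coef = x.astype(float).copy(), np.zeros(D.shape[1])
        for _ in range(n_iter):
            k = int(np.argmax(np.abs(D.T @ r)))   # best-correlated atom
            c = float(D[:, k] @ r)
            coef[k] += c
            r -= c * D[:, k]                      # peel off that component
        return coef, r
    ```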

  37. Synthesis design of artificial magnetic metamaterials using a genetic algorithm.

    PubMed

    Chen, P Y; Chen, C H; Wang, H; Tsai, J H; Ni, W X

    2008-08-18

    In this article, we present a genetic algorithm (GA) as one branch of artificial intelligence (AI) for the optimization-design of the artificial magnetic metamaterial whose structure is automatically generated by computer through the filling element methodology. A representative design example, metamaterials with permeability of negative unity, is investigated and the optimized structures found by the GA are presented. It is also demonstrated that our approach is effective for the synthesis of functional magnetic and electric metamaterials with optimal structures. This GA-based optimization-design technique shows great versatility and applicability in the design of functional metamaterials.

  38. A Reversible Logical Circuit Synthesis Algorithm Based on Decomposition of Cycle Representations of Permutations

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Li, Zhiqiang; Zhang, Gaoman; Pan, Suhan; Zhang, Wei

    2018-05-01

    A reversible function is isomorphic to a permutation, and an arbitrary permutation can be represented by a series of cycles. A new synthesis algorithm for 3-qubit reversible circuits is presented. It consists of two parts: the first part uses the Number of Different Bits (NDB) of the reversible function to decide whether a NOT gate should be added to decrease the Hamming distance between the input and output vectors; the second part, based on the idea of exploring properties of the cycle representation of permutations, decomposes the cycles to bring the permutation closer to the identity permutation until it finally becomes the identity permutation. It is realized using totally controlled Toffoli gates with positive and negative controls.
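
    The cycle representation the algorithm decomposes is easy to compute. A small helper, with the permutation given as an output list over input indices 0..2^n - 1:

    ```python
    def cycles(perm):
        """Nontrivial cycles of a permutation, e.g. [1, 0, 2, 3] -> [(0, 1)]."""
        seen, out = set(), []
        for s in range(len(perm)):
            if s in seen or perm[s] == s:
                continue
            cyc, x = [], s
            while x not in seen:
                seen.add(x)
                cyc.append(x)
                x = perm[x]
            out.append(tuple(cyc))
        return out
    ```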

  39. Comparison of some evolutionary algorithms for optimization of the path synthesis problem

    NASA Astrophysics Data System (ADS)

    Grabski, Jakub Krzysztof; Walczak, Tomasz; Buśkiewicz, Jacek; Michałowska, Martyna

    2018-01-01

    The paper presents a comparison of the results obtained in mechanism synthesis by means of selected evolutionary algorithms. The optimization problem considered in the paper as an example is the dimensional synthesis of a path-generating four-bar mechanism. In order to solve this problem, three different artificial intelligence algorithms are employed in this study.

  40. Vocoders and Speech Perception: Uses of Computer-Based Speech Analysis-Synthesis in Stimulus Generation.

    ERIC Educational Resources Information Center

    Tierney, Joseph; Mack, Molly

    1987-01-01

    Stimuli used in research on the perception of the speech signal have often been obtained from simple filtering and distortion of the speech waveform, sometimes accompanied by noise. However, for more complex stimulus generation, the parameters of speech can be manipulated, after analysis and before synthesis, using various types of algorithms to…

  41. Bridges between multiple-point geostatistics and texture synthesis: Review and guidelines for future research

    NASA Astrophysics Data System (ADS)

    Mariethoz, Gregoire; Lefebvre, Sylvain

    2014-05-01

    Multiple-Point Simulations (MPS) is a family of geostatistical tools that has received a lot of attention in recent years for the characterization of spatial phenomena in geosciences. It relies on the definition of training images to represent a given type of spatial variability, or texture. We show that the algorithmic tools used are similar in many ways to techniques developed in computer graphics, where there is a need to generate large amounts of realistic textures for applications such as video games and animated movies. Similarly to MPS, these texture synthesis methods use training images, or exemplars, to generate realistic-looking graphical textures. The two domains of multiple-point geostatistics and example-based texture synthesis present similarities in their historic development and share similar concepts. These disciplines have, however, remained separate, and as a result significant algorithmic innovations in each discipline have not been universally adopted. Texture synthesis algorithms offer drastically increased computational efficiency, pattern reproduction, and user control. At the same time, MPS developed ways to condition models to spatial data and to produce 3D stochastic realizations, which have not been thoroughly investigated in the field of texture synthesis. In this paper we review the possible links between these disciplines and show the potential and limitations of using concepts and approaches from texture synthesis in MPS. We also provide guidelines on how recent developments could benefit both fields of research, and what challenges remain open.

  42. Synthesis of an optoelectronic system for tracking (OEST) the information track of the optical record carrier based on the acceleration control principle

    NASA Astrophysics Data System (ADS)

    Zalogin, Stanislav M.; Zalogin, M. S.

    1997-02-01

    The problem of constructing a control algorithm for an OEST of the information track of an optical record carrier, based on the use of accelerations, is considered. Such control algorithms give the designed system adaptability and low sensitivity to changes in system parameters and to disturbing forces, which offers known advantages for information carriers operating in harsh climate conditions, as well as under misalignment, workpiece wear, and changes of friction in the system. The dynamic characteristics of a closed-loop OEST are investigated, and it is shown that the designed stable system with the given quality indices is highly precise. The validated recommendations for the design of the control algorithm parameters are confirmed by the results of mathematical simulation of the controlled processes. The proposed methods for OEST synthesis on the basis of the acceleration control principle can be recommended for use in the industrial production of optical information record carriers.

  43. Petri nets SM-cover-based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The presented algorithm first reduces the Petri net in order to lower the computational complexity, and then finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  44. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. At the next step, training samples are introduced and optimal estimates of the principal-component decomposition coefficients are obtained by a numeric parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition achieved by the proposed optimization algorithm.
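
    The dimension-reduction step is plain PCA: patterns are re-synthesized from a handful of principal-component coefficients, and the optimizer searches over those coefficients instead of the raw pattern. A numpy sketch, with X holding training patterns as rows:

    ```python
    import numpy as np

    def pca_basis(X, k):
        """Mean and top-k principal axes of the training patterns."""
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:k]

    def synthesize_pattern(mu, V, c):
        """Pattern from k decomposition coefficients c; optimize over c, not samples."""
        return mu + c @ V
    ```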

  5. Synthesis of Optimal Constant-Gain Positive-Real Controllers for Passive Systems

    NASA Technical Reports Server (NTRS)

    Mao, Y.; Kelkar, A. G.; Joshi, S. M.

    1999-01-01

    This paper presents synthesis methods for the design of constant-gain positive real controllers for passive systems. The results presented in this paper, in conjunction with the previous work by the authors on passification of non-passive systems, offer a useful synthesis tool for the design of passivity-based robust controllers for non-passive systems as well. Two synthesis approaches are given for minimizing an LQ-type performance index, resulting in optimal controller gains. Two separate algorithms, one for each of these approaches, are given. The synthesis techniques are demonstrated using two numerical examples: control of a flexible structure and longitudinal control of a fighter aircraft.

  6. Synthesis of Multispectral Bands from Hyperspectral Data: Validation Based on Images Acquired by AVIRIS, Hyperion, ALI, and ETM+

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir; Glasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki

    2003-01-01

    Spectral band synthesis is a key step in the process of creating a simulated multispectral image from hyperspectral data. In this step, narrow hyperspectral bands are combined into broader multispectral bands. Such an approach has been used quite often, but to the best of our knowledge the accuracy of the band synthesis simulations has not been evaluated thus far. Therefore, the main goal of this paper is to provide validation of the spectral band synthesis algorithm used in the ART software. The next section contains a description of the algorithm and an example of its application. Using the spectral responses of AVIRIS, Hyperion, ALI, and ETM+, the following section shows how the synthesized spectral bands compare with actual bands, and it presents an evaluation of the simulation accuracy based on results of MODTRAN modeling. In the final sections of the paper, simulated images are compared with data acquired by actual satellite sensors. First, a Landsat 7 ETM+ image is simulated using an AVIRIS hyperspectral data cube. Then, two datasets collected with the Hyperion instrument on the EO-1 satellite are used to simulate multispectral images from the ALI and ETM+ sensors.
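
    Band synthesis of this kind is commonly implemented as a response-weighted sum over the narrow bands; the sketch below assumes that generic formulation (the ART software's exact algorithm is not given in the abstract), with a caller-supplied relative spectral response function.

    ```python
    import numpy as np

    def synthesize_band(hyper_cube, hyper_wavelengths, response):
        """Combine narrow hyperspectral bands into one broad multispectral band.

        hyper_cube        : (rows, cols, n_bands) radiance cube
        hyper_wavelengths : (n_bands,) band-center wavelengths [nm]
        response          : callable giving the broad band's relative spectral
                            response at a given wavelength
        """
        w = np.array([response(lam) for lam in hyper_wavelengths])
        w = w / w.sum()                       # normalize the response weights
        return np.tensordot(hyper_cube, w, axes=([2], [0]))
    ```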

  7. Sample-based engine noise synthesis using an enhanced pitch-synchronous overlap-and-add method.

    PubMed

    Jagla, Jan; Maillard, Julien; Martin, Nadine

    2012-11-01

    An algorithm for the real-time synthesis of internal combustion engine noise is presented. Through the analysis of a recorded engine noise signal of continuously varying engine speed, a dataset of sound samples is extracted, allowing the real-time synthesis of the noise induced by arbitrary evolutions of engine speed. The sound samples are extracted from a recording spanning the entire engine speed range. Each sample is delimited so as to contain the sound emitted during one cycle of the engine, plus the necessary overlap to ensure smooth transitions during the synthesis. The proposed approach, an extension of the PSOLA method introduced for speech processing, takes advantage of the specific periodicity of engine noise signals to locate the extraction instants of the sound samples. During the synthesis stage, the sound samples corresponding to the target engine speed evolution are concatenated with an overlap-and-add algorithm. It is shown that this method produces high-quality audio restitution with a low computational load. It is therefore well suited for real-time applications.
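
    At its core, the synthesis stage is an overlap-and-add concatenation of per-cycle sound grains. The sketch below shows that basic step only, with assumed grain lists, hop lengths, and a linear cross-fade; it is a stand-in for, not a reproduction of, the paper's enhanced pitch-synchronous method.

    ```python
    import numpy as np

    def overlap_add(grains, hops, fade=256):
        """Concatenate engine-cycle sound grains with linear cross-fades.

        grains : list of 1-D arrays, one per engine cycle, each longer than
                 2*fade samples (including the overlap margins)
        hops   : spacing in samples between consecutive grain starts,
                 len(hops) == len(grains) - 1, chosen from the target
                 engine-speed profile
        """
        total = sum(hops) + max(len(g) for g in grains)
        out = np.zeros(total)
        ramp = np.linspace(0.0, 1.0, fade)
        pos = 0
        for i, g in enumerate(grains):
            g = g.astype(float)
            g[:fade] *= ramp               # fade in over the overlap region
            g[-fade:] *= ramp[::-1]        # fade out over the overlap region
            out[pos:pos + len(g)] += g
            if i < len(hops):
                pos += hops[i]
        return out
    ```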

  8. Approximation algorithms for the min-power symmetric connectivity problem

    NASA Astrophysics Data System (ADS)

    Plotnikov, Roman; Erzin, Adil; Mladenovic, Nenad

    2016-10-01

    We consider the NP-hard problem of synthesizing an optimal spanning communication subgraph in a given arbitrary simple edge-weighted graph. This problem arises in wireless networks when minimizing the total transmission power consumption. We propose several new heuristics based on the variable neighborhood search metaheuristic for the approximate solution of the problem. We have performed a numerical experiment in which all proposed algorithms were executed on randomly generated test samples. For these instances, on average, our algorithms outperform the previously known heuristics.

  9. An Interferometry Imaging Beauty Contest

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Monnier, John D.; Zhao, Ming; Young, John S.; Thorsteinsson, Hrobjartur; Meimon, Serge C.; Mugnier, Laurent; LeBesnerais, Guy

    2004-01-01

    We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Six different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the Interferometry Data Exchange Standard and is designed to simulate a specific problem relevant to long-baseline imaging. The data are calibrated power spectra and bispectra measured with a fictitious array, intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.

  10. Synthesis of blind source separation algorithms on reconfigurable FPGA platforms

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Szu, Harold H.

    2005-03-01

    Recent advances in intelligence technology have boosted the development of micro Unmanned Air Vehicles (UAVs), including Silver Fox, Shadow, and Scan Eagle, for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle without any motion-compensated gimbals. This mounting scheme results in the so-called jitter effect, where jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been solved by various optimization techniques, including Wiener approximation, maximum a-posteriori probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization of unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which an agreement between the pair of eyes indicates "signal"; otherwise, jitter noise. Using this non-statistical method, a deterministic blind source separation (BSS) process can be carried out independently for each single pixel, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization. In our previous work, a parallel-structured independent component analysis (ICA) algorithm was implemented on both a Field Programmable Gate Array (FPGA) and an Application-Specific Integrated Circuit (ASIC) using standard-height cells. ICA is an algorithm that can solve BSS problems by carrying out all-order statistical, decorrelation-based transforms, under the assumption that neighborhood pixels share the same but unknown mixing matrix A. In this paper, we continue our investigation of the design challenges of firmware approaches to smart algorithms. We think two levels of parallelization can be explored: pixel-based parallelization and parallelization of the restoration algorithm performed at each pixel. This paper focuses on the latter, and we use ICA as an example to explain the design and implementation methods. It is well known that the capacity constraints of a single FPGA have limited the implementation of many complex algorithms, including ICA. Using the reconfigurability of FPGAs, we show in this paper how to manipulate the FPGA-based system to provide extra computing power for the parallelized ICA algorithm with limited FPGA resources. The synthesis targeting the Pilchard reconfigurable FPGA platform is reported. The Pilchard board is embedded with a single Xilinx Virtex 1000E FPGA and transfers data directly to the CPU on the 64-bit memory bus at a maximum frequency of 133 MHz. Both the feasibility and performance evaluations and the experimental results validate the effectiveness and practicality of this synthesis, which can be extended to spatially variant jitter restoration for micro-UAV deployment.
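
    For the BSS step itself, a software-level illustration is easy to give. The sketch below runs scikit-learn's standard FastICA on synthetic two-source mixtures that share one unknown mixing matrix A, mirroring the neighborhood assumption described above; it is not Szu's AR-based method nor the FPGA implementation.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Toy stand-in for the BSS step: recover independent sources from
    # mixtures that share one unknown mixing matrix A. Data are synthetic,
    # not UAV imagery.
    rng = np.random.default_rng(0)
    S = np.c_[np.sign(np.sin(np.linspace(0, 8, 2000))),   # source 1: square wave
              rng.laplace(size=2000)]                     # source 2: sparse noise
    A = np.array([[1.0, 0.6], [0.4, 1.0]])                # unknown mixing matrix
    X = S @ A.T                                           # observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)          # estimated independent sources
    A_hat = ica.mixing_                   # estimated mixing matrix
    ```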

  11. Patch Based Synthesis of Whole Head MR Images: Application to EPI Distortion Correction.

    PubMed

    Roy, Snehashis; Chou, Yi-Yu; Jog, Amod; Butman, John A; Pham, Dzung L

    2016-10-01

    Different magnetic resonance imaging pulse sequences are used to generate image contrasts based on physical properties of tissues, which provide different and often complementary information about them. Therefore, multiple image contrasts are useful for multimodal analysis of medical images. Often, medical image processing algorithms are optimized for particular image contrasts. If a desirable contrast is unavailable, contrast synthesis (or modality synthesis) methods try to "synthesize" the unavailable contrasts from the available ones. Most of the recent image synthesis methods generate synthetic brain images, while whole head magnetic resonance (MR) images can also be useful for many applications. We propose an atlas-based patch matching algorithm to synthesize T2-w whole head (including brain, skull, eyes, etc.) images from T1-w images for the purpose of distortion correction of diffusion-weighted MR images. The geometric distortion in diffusion MR images due to the inhomogeneous B0 magnetic field is often corrected by non-linearly registering the corresponding b = 0 image with zero diffusion gradient to an undistorted T2-w image. We show that our synthetic T2-w images can be used as a template in the absence of a real T2-w image. Our patch-based method requires multiple atlases with T1 and T2 images to be registered to a given target T1. Then, for every patch on the target, multiple similar-looking matching patches are found on the atlas T1 images, and the corresponding patches on the atlas T2 images are combined to generate a synthetic T2 of the target. We experimented on image data obtained from 44 patients with traumatic brain injury (TBI), and showed that our synthesized T2 images produce more accurate distortion correction than a state-of-the-art registration-based image synthesis method.
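
    A toy 2-D, single-atlas version of the patch matching step might look as follows; the patch size, the number of neighbors, and the use of scikit-learn's nearest-neighbor search are illustrative assumptions, and a practical 3-D multi-atlas implementation would need far more care.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def synthesize_t2(target_t1, atlas_t1, atlas_t2, patch=3, k=5):
        """Toy 2-D patch-based contrast synthesis (atlas assumed registered).

        For every patch in the target T1, the k most similar atlas T1 patches
        are found and their central T2 intensities averaged. Image borders
        (half a patch wide) are left at zero.
        """
        r = patch // 2

        def patches(img):
            H, W = img.shape
            idx = [(i, j) for i in range(r, H - r) for j in range(r, W - r)]
            feats = np.array([img[i - r:i + r + 1, j - r:j + r + 1].ravel()
                              for i, j in idx])
            return idx, feats

        a_idx, a_feats = patches(atlas_t1)
        t_idx, t_feats = patches(target_t1)
        nn = NearestNeighbors(n_neighbors=k).fit(a_feats)
        _, neigh = nn.kneighbors(t_feats)          # k nearest atlas patches
        out = np.zeros_like(target_t1, dtype=float)
        for (i, j), nb in zip(t_idx, neigh):
            out[i, j] = np.mean([atlas_t2[a_idx[n]] for n in nb])
        return out
    ```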

  12. Application of morphological synthesis for understanding electrode microstructure evolution as a function of applied charge/discharge cycles

    DOE PAGES

    Glazoff, Michael V.; Dufek, Eric J.; Shalashnikov, Egor V.

    2016-09-15

    Morphological analysis and synthesis operations were employed for the analysis of electrode microstructure transformations and evolution accompanying the application of charge/discharge cycles to electrochemical storage systems (batteries). Using state-of-the-art morphological algorithms, it was possible to predict microstructure evolution in porous Si electrodes for Li-ion batteries with sufficient accuracy. Algorithms for image analyses (segmentation, feature extraction, and 3D reconstruction using 2D images) were also developed. Altogether, these techniques could be considered supplementary to the phase-field mesoscopic approach to microstructure evolution, which is based upon clear and definitive changes in the appearance of the microstructure. However, unlike in phase-field modeling, the governing equations of the morphological approach are geometry-, not physics-based. A similar non-physics-based approach to understanding different phenomena was attempted with the introduction of cellular automata. It is anticipated that morphological synthesis and analysis will represent a useful supplementary tool to phase-field modeling and will render assistance in unraveling the underlying microstructure-property relationships. The paper contains data on the electrochemical characterization of different electrode materials that was conducted in parallel to the morphological study.

  13. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for the synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems. This algorithm represents a process that is impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.

  14. Automatic Program Synthesis Reports.

    ERIC Educational Resources Information Center

    Biermann, A. W.; And Others

    Some of the major results and future goals of an automatic program synthesis project are described in the two papers that comprise this document. The first paper gives a detailed algorithm for synthesizing a computer program from a trace of its behavior. Since the algorithm involves a search, the length of time required to do the synthesis of…

  15. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help theorize the full capacity and performance of an FPGA-based coprocessor.
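
    To make the precision/coding-gain trade-off concrete, here is a hedged sketch of a sum-product check-node update in which the intermediate tanh values are quantized to a chosen bit width; the quantization scheme is an assumption for illustration, not the paper's coprocessor arithmetic.

    ```python
    import numpy as np

    def check_node_update(llrs, bits=8):
        """Sum-product check-node update with crudely quantized arithmetic.

        llrs : incoming log-likelihood ratios for one check node
        bits : precision of the intermediate tanh values; lowering it mimics
               the coding-gain versus speed trade-off discussed above.
        """
        t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
        scale = 2 ** (bits - 1) - 1
        q = np.round(t * scale) / scale                # quantize tanh values
        out = []
        for i in range(len(q)):
            prod = np.prod(np.delete(q, i))            # extrinsic product
            prod = np.clip(prod, -0.999999, 0.999999)  # keep atanh finite
            out.append(2.0 * np.arctanh(prod))
        return np.array(out)
    ```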

  16. Accurate visible speech synthesis based on concatenating variable length motion capture data.

    PubMed

    Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara

    2006-01-01

    We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations were conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.

  17. Blind separation of overlapping partials in harmonic musical notes using amplitude and phase reconstruction

    NASA Astrophysics Data System (ADS)

    de León, Jesús Ponce; Beltrán, José Ramón

    2012-12-01

    In this study, a new method for blind audio source separation (BASS) of monaural musical harmonic notes is presented. The input (mixed notes) signal is processed using a flexible analysis and synthesis algorithm (complex wavelet additive synthesis, CWAS), which is based on the complex continuous wavelet transform. When the harmonics from two or more sources overlap in a certain frequency band (or group of bands), a new technique based on amplitude similarity criteria is used to obtain an approximation to the original partial information. The aim is to show that the CWAS algorithm can be a powerful tool in BASS. Compared with other existing techniques, the main advantages of the proposed algorithm are its accuracy in the instantaneous phase estimation, its synthesis capability, and that the only input information needed is the mixed signal itself. A set of synthetically mixed monaural isolated notes has been analyzed using this method in eight different experiments: the same instrument playing two notes within the same octave and two harmonically related notes (5th and 12th intervals), two different musical instruments playing 5th and 12th intervals, two different instruments playing non-harmonic notes, major and minor chords played by the same musical instrument, three different instruments playing non-harmonically related notes, and finally the mixture of an inharmonic instrument (piano) and a harmonic instrument. The results obtained show the strength of the technique.

  18. Data-driven grasp synthesis using shape matching and task-based pruning.

    PubMed

    Li, Ying; Fu, Jiaxin L; Pollard, Nancy S

    2007-01-01

    Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.

  19. Transitioning from Software Requirements Models to Design Models

    NASA Technical Reports Server (NTRS)

    Lowry, Michael (Technical Monitor); Whittle, Jon

    2003-01-01

    Summary: 1. Proof-of-concept of state machine synthesis from scenarios - CTAS case study. 2. The CTAS team wants to use the synthesis algorithm to validate trajectory generation. 3. Extending the synthesis algorithm towards requirements validation: (a) scenario relationships; (b) a methodology for generalizing/refining scenarios; and (c) interaction patterns to control synthesis. 4. Initial ideas tested on conflict detection scenarios.

  20. An efficient rhythmic component expression and weighting synthesis strategy for classifying motor imagery EEG in a brain computer interface

    NASA Astrophysics Data System (ADS)

    Wang, Tao; He, Bin

    2004-03-01

    The recognition of mental states during motor imagery tasks is crucial for EEG-based brain computer interface research. We have developed a new algorithm, based on frequency decomposition and a weighting synthesis strategy, for recognizing imagined right- and left-hand movements. A frequency range from 5 to 25 Hz was divided into 20 band bins for each trial, and the corresponding envelopes of the filtered EEG signals for each trial were extracted as a measure of instantaneous power in each frequency band. The dimensionality of the feature space was reduced from 200 (corresponding to 2 s) to 3 by down-sampling the envelopes of the feature signals and subsequently applying principal component analysis. The linear discriminant analysis algorithm was then used to classify the features, owing to its generalization capability. Each frequency band bin was weighted by a function determined according to the classification accuracy during the training process. The classification algorithm was applied to a dataset of nine human subjects, and achieved a classification success rate of 90% in training and 77% in testing. These promising results suggest that the present classification algorithm could be used to initiate general-purpose mental state recognition based on motor imagery tasks.
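
    The band-envelope / PCA / LDA pipeline can be sketched end to end as below. All specifics here (synthetic single-channel data, the 100 Hz sampling rate, fourth-order Butterworth band-pass filters, the down-sampling factor) are assumptions for illustration, and the per-band weighting step is omitted.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def band_envelopes(trial, fs, bands):
        """Envelope of the trial signal in each 1 Hz band bin (5-25 Hz)."""
        feats = []
        for lo, hi in bands:
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            env = np.abs(hilbert(filtfilt(b, a, trial)))   # instantaneous power
            feats.append(env[::10])                        # down-sample envelope
        return np.concatenate(feats)

    # Hypothetical data: 60 trials of 2 s single-channel EEG at 100 Hz.
    fs, bands = 100, [(f, f + 1) for f in range(5, 25)]    # 20 band bins
    rng = np.random.default_rng(0)
    X_raw = rng.standard_normal((60, 2 * fs))
    y = rng.integers(0, 2, 60)                 # imagined left/right labels

    X = np.array([band_envelopes(t, fs, bands) for t in X_raw])
    X3 = PCA(n_components=3).fit_transform(X)  # reduce to 3 features
    clf = LinearDiscriminantAnalysis().fit(X3, y)
    print("training accuracy:", clf.score(X3, y))
    ```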

  1. Synthesizer: Expediting synthesis studies from context-free data with information retrieval techniques.

    PubMed

    Gandy, Lisa M; Gumm, Jordan; Fertig, Benjamin; Thessen, Anne; Kennish, Michael J; Chavan, Sameer; Marchionni, Luigi; Xia, Xiaoxin; Shankrit, Shambhavi; Fertig, Elana J

    2017-01-01

    Scientists have unprecedented access to a wide variety of high-quality datasets. These datasets, which are often independently curated, commonly use unstructured spreadsheets to store their data. Standardized annotations are essential to perform synthesis studies across investigators, but are often not used in practice. Therefore, accurately combining records in spreadsheets from differing studies requires tedious and error-prone human curation. These efforts result in a significant time and cost barrier to synthesis research. We propose an information retrieval inspired algorithm, Synthesize, that merges unstructured data automatically based on both column labels and values. Application of the Synthesize algorithm to cancer and ecological datasets had high accuracy (on the order of 85-100%). We further implement Synthesize in an open source web application, Synthesizer (https://github.com/lisagandy/synthesizer). The software accepts input as spreadsheets in comma separated value (CSV) format, visualizes the merged data, and outputs the results as a new spreadsheet. Synthesizer includes an easy-to-use graphical user interface, which enables the user to finish combining data and obtain perfect accuracy. Future work will allow detection of units to automatically merge continuous data and application of the algorithm to other data formats, including databases.

  2. Synthesizer: Expediting synthesis studies from context-free data with information retrieval techniques

    PubMed Central

    Gumm, Jordan; Fertig, Benjamin; Thessen, Anne; Kennish, Michael J.; Chavan, Sameer; Marchionni, Luigi; Xia, Xiaoxin; Shankrit, Shambhavi; Fertig, Elana J.

    2017-01-01

    Scientists have unprecedented access to a wide variety of high-quality datasets. These datasets, which are often independently curated, commonly use unstructured spreadsheets to store their data. Standardized annotations are essential to perform synthesis studies across investigators, but are often not used in practice. Therefore, accurately combining records in spreadsheets from differing studies requires tedious and error-prone human curation. These efforts result in a significant time and cost barrier to synthesis research. We propose an information retrieval inspired algorithm, Synthesize, that merges unstructured data automatically based on both column labels and values. Application of the Synthesize algorithm to cancer and ecological datasets had high accuracy (on the order of 85–100%). We further implement Synthesize in an open source web application, Synthesizer (https://github.com/lisagandy/synthesizer). The software accepts input as spreadsheets in comma separated value (CSV) format, visualizes the merged data, and outputs the results as a new spreadsheet. Synthesizer includes an easy-to-use graphical user interface, which enables the user to finish combining data and obtain perfect accuracy. Future work will allow detection of units to automatically merge continuous data and application of the algorithm to other data formats, including databases. PMID:28437440

  3. Application of modern control theory to the design of optimum aircraft controllers

    NASA Technical Reports Server (NTRS)

    Power, L. J.

    1973-01-01

    The synthesis procedure presented is based on the solution of the output regulator problem of linear optimal control theory for time-invariant systems. By this technique, solution of the matrix Riccati equation leads to a constant linear feedback control law for an output regulator which will maintain a plant in a particular equilibrium condition in the presence of impulse disturbances. Two simple algorithms are presented that can be used in an automatic synthesis procedure for the design of maneuverable output regulators requiring only selected state variables for feedback. The first algorithm is for the construction of optimal feedforward control laws that can be superimposed upon a Kalman output regulator and that will drive the output of a plant to a desired constant value on command. The second algorithm is for the construction of optimal Luenberger observers that can be used to obtain feedback control laws for the output regulator requiring measurement of only part of the state vector. This algorithm constructs observers which have minimum response time under the constraint that the magnitude of the gains in the observer filter be less than some arbitrary limit.
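
    For the Riccati-equation step, a minimal modern sketch uses SciPy's continuous-time algebraic Riccati solver to obtain the constant feedback gain; the toy plant matrices are illustrative, and the feedforward and observer constructions from the paper are not shown.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Minimal sketch: constant feedback gain for an output regulator via the
    # matrix Riccati equation (toy 2-state plant; matrices are illustrative).
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)          # state weighting
    R = np.array([[1.0]])  # control weighting

    P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
    K = np.linalg.solve(R, B.T @ P)        # constant feedback gain, u = -K x
    print("feedback gain K =", K)
    ```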

  4. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
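
    The train-then-simulate scheme can be illustrated with any off-the-shelf regressor; the sketch below uses scikit-learn's MLP as a stand-in for the paper's backpropagation network, and the random arrays stand in for the W-L patterns and excitations, which would come from an actual synthesis code.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical stand-in for the W-L training data: each row holds 41
    # samples of a desired pattern; targets are 40 element excitations
    # (20 real parts, 20 imaginary) that a real W-L synthesis would provide.
    rng = np.random.default_rng(1)
    patterns = rng.standard_normal((27, 41))     # 27 training patterns
    excitations = rng.standard_normal((27, 40))  # matching W-L excitations

    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000,
                       random_state=0).fit(patterns, excitations)

    # Once trained, the network "simulates" W-L synthesis for a new pattern.
    new_pattern = rng.standard_normal((1, 41))
    predicted_excitations = net.predict(new_pattern)
    ```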

  5. Antenna analysis using neural networks

    NASA Astrophysics Data System (ADS)

    Smith, William T.

    1992-09-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary).

  6. Synthesis and evaluation of phase detectors for active bit synchronizers

    NASA Technical Reports Server (NTRS)

    Mcbride, A. L.

    1974-01-01

    Self-synchronizing digital data communication systems usually use active or phase-locked loop (PLL) bit synchronizers. The three main elements of PLL synchronizers are the phase detector, loop filter, and the voltage controlled oscillator. Of these three elements, phase detector synthesis is the main source of difficulty, particularly when the received signals are demodulated square-wave signals. A phase detector synthesis technique is reviewed that provides a physically realizable design for bit synchronizer phase detectors. The development is based upon nonlinear recursive estimation methods. The phase detector portion of the algorithm is isolated and analyzed.

  7. Flight Path Synthesis and HUD Scaling for V/STOL Terminal Area Operations

    DOT National Transportation Integrated Search

    1995-04-01

    A two circle horizontal flightpath synthesis algorithm for Vertical/Short Takeoff and Landing (V/STOL) terminal area operations is presented. This algorithm provides a flight-path that is tangential to the aircraft's velocity vector at the inst...

  8. Multivariable optimization of an auto-thermal ammonia synthesis reactor using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Anh-Nga, Nguyen T.; Tuan-Anh, Nguyen; Tien-Dung, Vu; Kim-Trung, Nguyen

    2017-09-01

    The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics, and refrigeration. In the literature, many works approaching the modeling, simulation, and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, other parameters are also considered in the optimization problem, such as the temperature of the feed gas entering the catalyst zone. The optimization problem requires the maximization of a multivariable objective function subject to a number of equality constraints involving the solution of coupled differential equations, as well as inequality constraints. The solution of an optimization problem can be found through, among others, deterministic or stochastic approaches. Stochastic methods, such as evolutionary algorithms (EA), which are based on natural phenomena, can overcome drawbacks such as the requirement for derivatives of the objective function and/or constraints, or inefficiency on non-differentiable or discontinuous problems. The genetic algorithm (GA), a class of EA, is exceptionally simple, robust at numerical optimization, and more likely to find a true global optimum. In this study, a genetic algorithm is employed to find the optimum profit of the process. The inequality constraints were treated using the penalty method. The coupled differential equation system was solved using the 4th-order Runge-Kutta method. The results showed that the presented numerical method can be applied to model the ammonia synthesis reactor. The optimum economic profit obtained in this study is also compared to results from the literature. It suggests that the process should be operated at a higher feed-gas temperature in the catalyst zone and with a slightly longer reactor.
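
    As a rough illustration of the optimization layer, here is a bare-bones real-coded genetic algorithm with tournament selection; the profit callable is assumed to wrap the RK4-integrated reactor model and the penalty terms for the constraints, none of which are reproduced here.

    ```python
    import numpy as np

    def genetic_optimize(profit, bounds, pop=50, gens=200, mut=0.1, seed=0):
        """Bare-bones real-coded GA, maximizing a penalized profit function.

        profit : objective to maximize; constraint violations are expected
                 to be handled inside it via penalty terms.
        bounds : list of (low, high) pairs, e.g. reactor length and feed-gas
                 temperature entering the catalyst zone.
        """
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        X = rng.uniform(lo, hi, size=(pop, len(bounds)))
        for _ in range(gens):
            f = np.array([profit(x) for x in X])
            i, j = rng.integers(0, pop, (2, pop))          # tournament pairs
            parents = np.where((f[i] > f[j])[:, None], X[i], X[j])
            alpha = rng.random((pop, 1))                   # arithmetic crossover
            X = alpha * parents + (1 - alpha) * parents[rng.permutation(pop)]
            X += mut * (hi - lo) * rng.standard_normal(X.shape)  # mutation
            X = np.clip(X, lo, hi)
        f = np.array([profit(x) for x in X])
        return X[np.argmax(f)]                             # best design found
    ```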

  9. Python based high-level synthesis compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

    This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute cycle of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation, and first results of the created Python-based compiler.

  10. RPython high-level synthesis

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Linczuk, Maciej

    2016-09-01

    The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. Such compilers interpret an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translate it to a Hardware Description Language (HDL). This paper presents an RPython-based high-level synthesis (HLS) compiler. The compiler takes configuration parameters and maps an RPython program to VHDL; the VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software by omitting the fetch-decode-execute cycle of general-purpose processors and introducing more parallel computation, which can be exploited by utilizing many resources at the same time. Creating parallel algorithms for FPGAs in pure HDL is difficult and time consuming; implementation time can be greatly reduced with a high-level synthesis compiler. This article describes the design methodologies and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.

  11. Proposal for Development of EBM-CDSS (Evidence-based Clinical Decision Support System) to Aid Prognostication in Terminally Ill Patients

    DTIC Science & Technology

    2012-10-01


  12. Analysis of Air Traffic Track Data with the AutoBayes Synthesis System

    NASA Technical Reports Server (NTRS)

    Schumann, Johann Martin Philip; Cate, Karen; Lee, Alan G.

    2010-01-01

    The Next Generation Air Traffic System (NGATS) aims to provide substantial computer support for air traffic controllers. Algorithms for the accurate prediction of aircraft movements are of central importance for such software systems, but trajectory prediction has to work reliably in the presence of unknown parameters and uncertainties. We are using the AutoBayes program synthesis system to generate customized data analysis algorithms that process large sets of aircraft radar track data in order to estimate parameters and uncertainties. In this paper, we present how the tasks of finding structure in track data, estimating important parameters in climb trajectories, and detecting continuous descent approaches can be accomplished with compact task-specific AutoBayes specifications. We present an overview of the AutoBayes architecture and describe how its schema-based approach generates customized analysis algorithms, documented C/C++ code, and detailed mathematical derivations. Results of experiments with actual air traffic control data are discussed.

  13. Optimization-Based Robust Nonlinear Control

    DTIC Science & Technology

    2006-08-01

    New control algorithms were developed for robust stabilization of nonlinear dynamical systems. Novel linear matrix inequality-based synthesis techniques were developed to further advance optimization-based robust nonlinear control design for general nonlinear systems (especially in discrete time) and for linear systems.

  14. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
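
    As a concrete instance of a dominance relation at work, the classic activity-selection greedy below prunes every candidate dominated by the earliest-finishing compatible activity. This textbook example is our illustration, not code from the paper.

    ```python
    def select_activities(activities):
        """Classic greedy activity selection over (start, end) intervals.

        Dominance relation: among the activities that could be chosen next,
        one with the earliest finish time dominates the others, so all
        alternatives can be pruned without losing optimality.
        """
        chosen, current_end = [], float("-inf")
        for start, end in sorted(activities, key=lambda a: a[1]):
            if start >= current_end:       # dominant (earliest-finishing) pick
                chosen.append((start, end))
                current_end = end
        return chosen

    print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10)]))
    ```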

  15. Multi exposure image fusion algorithm based on YCbCr space

    NASA Astrophysics Data System (ADS)

    Yang, T. T.; Fang, P. Y.

    2018-05-01

    To address the difficulty of jointly optimizing scene detail and visual quality in high dynamic range image synthesis, we propose a multi-exposure image fusion algorithm that processes low dynamic range images in YCbCr space, with weighted blending of the luminance and chrominance components, respectively. The experimental results show that the method retains the color of the fused image while balancing detail in the bright and dark regions of the high dynamic range image.
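
    A minimal per-pixel weighted-fusion sketch of this idea follows; the Gaussian well-exposedness weight on the Y channel and the reuse of the same weights for Cb and Cr are illustrative assumptions rather than the paper's exact weighting functions.

    ```python
    import numpy as np

    def fuse_exposures(ycbcr_stack, sigma=0.2):
        """Toy per-pixel weighted fusion of low dynamic range exposures.

        ycbcr_stack : (n, H, W, 3) stack of exposures in YCbCr, values in [0, 1].
        Luma (Y) weights favor mid-range exposure; Cb/Cr are blended with the
        same weights so color stays consistent with the chosen luminance.
        """
        Y = ycbcr_stack[..., 0]
        w = np.exp(-((Y - 0.5) ** 2) / (2 * sigma ** 2))   # well-exposedness
        w = w / (w.sum(axis=0, keepdims=True) + 1e-12)     # normalize per pixel
        return np.einsum("nhw,nhwc->hwc", w, ycbcr_stack)
    ```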

  16. On consistent inter-view synthesis for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Tran, Lam C.; Bal, Can; Pal, Christopher J.; Nguyen, Truong Q.

    2012-03-01

    In this paper we present a novel stereo view synthesis algorithm that is highly accurate with respect to inter-view consistency, thus enabling stereo content to be viewed on autostereoscopic displays. The algorithm finds identical occluded regions within each virtual view and aligns them together to extract a surrounding background layer. The background layer for each occluded region is then used with an exemplar-based inpainting method to synthesize all virtual views simultaneously. Our algorithm requires the alignment and extraction of background layers for each occluded region; however, these two steps are done efficiently, with lower computational complexity than previous approaches using exemplar-based inpainting algorithms. It is thus more efficient than existing algorithms that synthesize one virtual view at a time. This paper also describes a simplified GPU-accelerated version of the approach and its implementation in CUDA. Our CUDA method has sublinear complexity in the number of views that need to be generated, which makes it especially useful for generating content for autostereoscopic displays that require many views to operate. An objective of our work is to allow the user to change depth and viewing perspective on the fly. Therefore, to further accelerate the CUDA variant of our approach, we present a modified version of our method that warps the background pixels from the reference views to a middle view to recover background pixels. We then use an exemplar-based inpainting method to fill in the occluded regions. We use warping of the foreground from the reference images and of the background from the filled regions to synthesize new virtual views on the fly. Our experimental results indicate that the simplified CUDA implementation decreases running time by orders of magnitude with negligible loss in quality.

  17. Automated detection of hospital outbreaks: A systematic review of methods.

    PubMed

    Leclère, Brice; Buckeridge, David L; Boëlle, Pierre-Yves; Astagneau, Pascal; Lepelletier, Didier

    2017-01-01

    Several automated algorithms for epidemiological surveillance in hospitals have been proposed. However, the usefulness of these methods to detect nosocomial outbreaks remains unclear. The goal of this review was to describe outbreak detection algorithms that have been tested within hospitals, consider how they were evaluated, and synthesize their results. We developed a search query using keywords associated with hospital outbreak detection and searched the MEDLINE database. To ensure the highest sensitivity, no limitations were initially imposed on publication languages and dates, although we subsequently excluded studies published before 2000. Every study that described a method to detect outbreaks within hospitals was included, without any exclusion based on study design. Additional studies were identified through citations in retrieved studies. Twenty-nine studies were included. The detection algorithms were grouped into 5 categories: simple thresholds (n = 6), statistical process control (n = 12), scan statistics (n = 6), traditional statistical models (n = 6), and data mining methods (n = 4). The evaluation of the algorithms was often solely descriptive (n = 15), but more complex epidemiological criteria were also investigated (n = 10). The performance measures varied widely between studies: e.g., the sensitivity of an algorithm in a real world setting could vary between 17 and 100%. Even if outbreak detection algorithms are useful complementary tools for traditional surveillance, the heterogeneity in results among published studies does not support quantitative synthesis of their performance. A standardized framework should be followed when evaluating outbreak detection methods to allow comparison of algorithms across studies and synthesis of results.

  18. Synthesis of Algorithm for Range Measurement Equipment to Track Maneuvering Aircraft Using Data on Its Dynamic and Kinematic Parameters

    NASA Astrophysics Data System (ADS)

    Pudovkin, A. P.; Panasyuk, Yu N.; Danilov, S. N.; Moskvitin, S. P.

    2018-05-01

    The problem of improving automated air traffic control systems is considered through the example of synthesizing the operation algorithm of a range measurement channel used to track maneuvering aircraft, using the aircraft's kinematic and dynamic parameters. The choice of the state and observation models is justified, computer simulations have been performed, and results for the investigated algorithms have been obtained.

  19. Investigation of High-Level Synthesis tools’ applicability to data acquisition systems design based on the CMS ECAL Data Concentrator Card example

    NASA Astrophysics Data System (ADS)

    Husejko, Michal; Evans, John; Rasteiro da Silva, Jose Carlos

    2015-12-01

    High-Level Synthesis (HLS) for Field-Programmable Gate Array (FPGA) programming is becoming a practical alternative to the well-established VHDL and Verilog languages. This paper describes a case study in the use of HLS tools to design FPGA-based data acquisition (DAQ) systems. We present the implementation of the CERN CMS detector ECAL Data Concentrator Card (DCC) functionality in HLS and lessons learned from using the HLS design flow. The DCC functionality and a definition of the initial system-level performance requirements (latency, bandwidth, and throughput) are presented. We describe how its packet-processing, control-centric algorithm was implemented in the VHDL and Verilog languages. We then show how the HLS flow can speed up design-space exploration by providing loose coupling between the design of a function's interface and the implementation of its algorithm. We conclude with results of real-life hardware tests performed with the HLS-generated design on a DCC tester system.

  20. An integrated science-based methodology to assess potential risks and implications of engineered nanomaterials.

    PubMed

    Tolaymat, Thabet; El Badawy, Amro; Sequeira, Reynold; Genaidy, Ash

    2015-11-15

    There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology to assess the potential risks of engineered nanomaterials. To achieve the study objective, two major tasks are accomplished: knowledge synthesis and an algorithmic computational methodology. The knowledge synthesis task is designed to capture "what is known" and to outline the gaps in knowledge from an ENM risk perspective. The algorithmic computational methodology is geared toward the provision of decisions and an understanding of the risks of ENMs along different endpoints for the constituents of the SEE complex adaptive system. The approach presented herein allows for addressing the formidable task of assessing the implications and risks of exposure to ENMs, with the long-term goal of building a decision-support system to guide key stakeholders in the SEE system towards building sustainable ENMs and nano-enabled products. Published by Elsevier B.V.

  1. Saliency detection algorithm based on LSC-RC

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu

    2018-02-01

    The salient region is the most important region in an image, being the one that attracts human visual attention and response. Preferentially allocating computer resources for image analysis and synthesis to the salient region is of great significance for improving image region detection. As a preprocessing step for other tasks in the image processing field, saliency detection has wide applications in image retrieval and image segmentation. Among these applications, the super-pixel segmentation saliency detection algorithm based on linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on the region contrast (RC) method by replacing its region formation step with linear spectral clustering of the image into super-pixel blocks. Combined with recent deep learning methods, the accuracy of salient region detection is greatly improved. Finally, the superiority and feasibility of the super-pixel segmentation detection algorithm based on linear spectral clustering are demonstrated by comparative tests.

  2. Probabilistic pathway construction.

    PubMed

    Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha

    2011-07-01

    Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
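
    A toy backward-construction sketch of the uniformly random selection scheme is shown below; the reaction encoding and the treatment of host-supplied metabolites are simplified assumptions, and no stoichiometric (flux balance) check of the resulting pathway is performed.

    ```python
    import random

    def build_pathway(target, reactions, host_metabolites, max_len=8, seed=0):
        """Uniformly random backward pathway construction (toy version).

        reactions        : list of (substrates, products) tuples over
                           metabolite identifiers
        host_metabolites : metabolites the host can already supply
        """
        rng = random.Random(seed)
        path, needed = [], {target}
        while needed and len(path) < max_len:
            met = needed.pop()
            if met in host_metabolites:
                continue                       # supplied by host metabolism
            producers = [r for r in reactions if met in r[1]]
            if not producers:
                return None                    # dead end: no route to metabolite
            substrates, products = rng.choice(producers)  # uniform selection
            path.append((substrates, products))
            needed.update(substrates)          # resolve the new precursors
        return path if not needed else None
    ```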

  3. OPTICAL INFORMATION PROCESSING: Synthesis of an object recognition system based on the profile of the envelope of a laser pulse in pulsed lidars

    NASA Astrophysics Data System (ADS)

    Buryi, E. V.

    1998-05-01

    The main problems in the synthesis of an object recognition system based on the principles of operation of neural networks are considered. The advantages of a hierarchical structure for the recognition algorithm are demonstrated. The use of readings of the amplitude spectrum of signals as information tags is justified, and a method is developed for determining the dimensionality of the tag space. Methods are suggested for ensuring the stability of object recognition in the optical range. It is concluded that it should be possible to recognize different perspectives of complex objects.

  4. Synthesis of the adaptive continuous system for the multi-axle wheeled vehicle body oscillation damping

    NASA Astrophysics Data System (ADS)

    Zhileykin, M. M.; Kotiev, G. O.; Nagatsev, M. V.

    2018-02-01

    In order to meet the growing mobility requirements for wheeled vehicles on all types of terrain, engineers have to develop a large number of specialized control algorithms for the multi-axle wheeled vehicle (MWV) suspension, improving such qualities as ride comfort, handling, and stability. The authors have developed an adaptive algorithm for dynamic damping of MWV body oscillations. The algorithm provides high ride comfort and high mobility of the vehicle. The article discloses a method for the synthesis of an adaptive continuous algorithm for MWV body oscillation damping and provides simulation results proving the high efficiency of the developed control algorithm.

  5. Technology transfer by means of fault tree synthesis

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2012-12-01

    Since Fault Tree Analysis (FTA) attempts to model and analyze the failure processes of engineering systems, it forms a common technique of good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a knowledge base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the production stage where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation referring to 'uneven sealing of Al anodic film' is presented, proving the functionality of the developed methodology.

  6. Regular Mechanical Transformation of Rotations Into Translations: Part 2. Kinematic Synthesis of the Elements of High Kinematic Joints, Realizing the Process of Motions Transformation

    NASA Astrophysics Data System (ADS)

    Abadjieva, Emilia; Abadjiev, Valentin

    2017-09-01

    This work builds on the main parts of the kinematic theory (theory of gearing) of spatial rack drives presented in Part 1 of this study. The applied theoretical approach to their synthesis, based on T. Olivier's second principle, is defined. A study of the geometric nature of the surface of action (respectively, the mesh region) of this class of transmissions is presented. Research software programs for the synthesis and visualization of these transmissions and their specific elements are elaborated on the basis of the given algorithms for the synthesis of the elements of high kinematic joints (active tooth surfaces) with which the movable links of the studied gear systems are equipped.

  7. Improvements in Block-Krylov Ritz Vectors and the Boundary Flexibility Method of Component Synthesis

    NASA Technical Reports Server (NTRS)

    Carney, Kelly Scott

    1997-01-01

    A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated by a recurrence relationship, proposed by Wilson, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based upon the boundary flexibility vectors of the component. Improvements have been made in the formulation of the initial seed to the Krylov sequence through the use of block filtering. A method has been developed to shift the Krylov sequence to create Ritz vectors that represent the dynamic behavior of the component at target frequencies, the target frequency being determined by the applied forcing functions. A method to terminate the Krylov sequence has also been developed. Various orthonormalization schemes have been developed and evaluated, including the Cholesky/QR method. Several auxiliary theorems and proofs which illustrate issues in component mode synthesis and loss of orthogonality in the Krylov sequence have also been presented. The resulting methodology is applicable to both fixed- and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. The accuracy is found to be comparable to that of component synthesis based upon normal modes, using fewer generalized coordinates. In addition, the block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem. The requirement for fewer vectors to form the component, coupled with the lower computational expense of calculating these Ritz vectors, combine to create a method more efficient than traditional component mode synthesis.
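
    A dense toy version of the block-Krylov recurrence with mass-orthonormalization via Cholesky factorization of the Gram matrix is sketched below; the seed block F and matrix sizes are assumptions, and the block-filtering, shifting, and termination refinements described above are not included.

    ```python
    import numpy as np

    def krylov_ritz_vectors(K, M, F, n_blocks):
        """Mass-orthonormal block-Krylov Ritz vectors (dense toy version).

        K, M : (n, n) stiffness and mass matrices (M symmetric positive
               definite, so the Gram matrices below admit a Cholesky factor)
        F    : (n, b) block of load / boundary-flexibility vectors seeding
               the recurrence X_{i+1} = K^{-1} M X_i
        """
        def m_orthonormalize(X, basis):
            for V in basis:                      # purge components of old blocks
                X = X - V @ (V.T @ (M @ X))
            L = np.linalg.cholesky(X.T @ M @ X)  # Gram matrix factorization
            return np.linalg.solve(L, X.T).T     # block is now M-orthonormal

        blocks = []
        X = np.linalg.solve(K, F)                # static seed solution
        for _ in range(n_blocks):
            blocks.append(m_orthonormalize(X, blocks))
            X = np.linalg.solve(K, M @ blocks[-1])   # next Krylov block
        return np.hstack(blocks)
    ```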

  8. Automated detection of hospital outbreaks: A systematic review of methods

    PubMed Central

    Buckeridge, David L.; Lepelletier, Didier

    2017-01-01

    Objectives Several automated algorithms for epidemiological surveillance in hospitals have been proposed. However, the usefulness of these methods to detect nosocomial outbreaks remains unclear. The goal of this review was to describe outbreak detection algorithms that have been tested within hospitals, consider how they were evaluated, and synthesize their results. Methods We developed a search query using keywords associated with hospital outbreak detection and searched the MEDLINE database. To ensure the highest sensitivity, no limitations were initially imposed on publication languages and dates, although we subsequently excluded studies published before 2000. Every study that described a method to detect outbreaks within hospitals was included, without any exclusion based on study design. Additional studies were identified through citations in retrieved studies. Results Twenty-nine studies were included. The detection algorithms were grouped into 5 categories: simple thresholds (n = 6), statistical process control (n = 12), scan statistics (n = 6), traditional statistical models (n = 6), and data mining methods (n = 4). The evaluation of the algorithms was often solely descriptive (n = 15), but more complex epidemiological criteria were also investigated (n = 10). The performance measures varied widely between studies: e.g., the sensitivity of an algorithm in a real-world setting could vary between 17% and 100%. Conclusion Even though outbreak detection algorithms are useful complementary tools for traditional surveillance, the heterogeneity in results among published studies does not support quantitative synthesis of their performance. A standardized framework should be followed when evaluating outbreak detection methods to allow comparison of algorithms across studies and synthesis of results. PMID:28441422

  9. Optimal aperture synthesis radar imaging

    NASA Astrophysics Data System (ADS)

    Hysell, D. L.; Chau, J. L.

    2006-03-01

    Aperture synthesis radar imaging has been used to investigate coherent backscatter from ionospheric plasma irregularities at Jicamarca and elsewhere for several years. Phenomena of interest include equatorial spread F, 150-km echoes, the equatorial electrojet, range-spread meteor trails, and mesospheric echoes. The sought-after images are related to spaced-receiver data mathematically through an integral transform, but direct inversion is generally impractical or suboptimal. We instead turn to statistical inverse theory, endeavoring to utilize fully all available information in the data inversion. The imaging algorithm used at Jicamarca is based on an implementation of the MaxEnt method developed for radio astronomy. Its strategy is to limit the space of candidate images to those that are positive definite, consistent with data to the degree required by experimental confidence limits; smooth (in some sense); and most representative of the class of possible solutions. The algorithm was improved recently by (1) incorporating the antenna radiation pattern in the prior probability and (2) estimating and including the full error covariance matrix in the constraints. The revised algorithm is evaluated using new 28-baseline electrojet data from Jicamarca.

  10. First imagery generated by near-field real-time aperture synthesis passive millimetre wave imagers at 94 GHz and 183 GHz

    NASA Astrophysics Data System (ADS)

    Salmon, Neil A.; Mason, Ian; Wilkinson, Peter; Taylor, Chris; Scicluna, Peter

    2010-10-01

    The first passive millimetre wave (PMMW) imagery is presented from two proof-of-concept aperture synthesis demonstrators, developed to investigate the use of aperture synthesis for personnel security screening and all-weather flying at 94 GHz, and satellite-based Earth observation at 183 GHz [1]. Emission from point noise sources and discharge tubes is used to examine the coherence on system baselines and to measure the point spread functions, making comparisons with theory. Image quality is examined using near-field aperture synthesis and G-matrix calibration imaging algorithms. The radiometric sensitivity is measured using the emission from absorbers at elevated temperatures acting as extended sources and compared with theory. Capabilities of the latest Field Programmable Gate Array (FPGA) technologies for aperture synthesis PMMW imaging in all-weather and security screening applications are examined.

  11. Structured output-feedback controller synthesis with design specifications

    NASA Astrophysics Data System (ADS)

    Hao, Yuqing; Duan, Zhisheng

    2017-03-01

    This paper considers the problem of structured output-feedback controller synthesis with finite frequency specifications. Based on the orthogonal space information of input matrix, an improved parameter-dependent Lyapunov function method is first proposed. Then, a two-stage construction method is designed, which depends on an initial centralised controller. Corresponding design conditions for three types of output-feedback controllers are presented in terms of unified representations. Moreover, heuristic algorithms are provided to explore the desirable controllers. Finally, the effectiveness of these proposed methods is illustrated via some practical examples.

  12. What does voice-processing technology support today?

    PubMed Central

    Nakatsu, R; Suzuki, Y

    1995-01-01

    This paper describes the state of the art in applications of voice-processing technologies. In the first part, technologies concerning the implementation of speech recognition and synthesis algorithms are described. Hardware technologies such as microprocessors and DSPs (digital signal processors) are discussed. The software development environment, a key technology in developing application software ranging from DSP software to support software, is also described. In the second part, the state of the art of algorithms from the standpoint of applications is discussed. Several issues concerning the evaluation of speech recognition/synthesis algorithms are covered, as well as issues concerning the robustness of algorithms in adverse conditions. PMID:7479720

  13. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  14. Implementation in an FPGA circuit of Edge detection algorithm based on the Discrete Wavelet Transforms

    NASA Astrophysics Data System (ADS)

    Bouganssa, Issam; Sbihi, Mohamed; Zaim, Mounia

    2017-07-01

    The 2D Discrete Wavelet Transform (DWT) is a computationally intensive task that is usually implemented on specific architectures in many real-time imaging systems. In this paper, a high-throughput edge/contour detection algorithm based on the discrete wavelet transform is proposed. A technique for applying the filters in the three directions (horizontal, vertical and diagonal) of the image is used to capture the maximum of the existing contours. The proposed architectures were designed in VHDL and mapped to a Xilinx Spartan-6 FPGA. The synthesis results show that the proposed architecture has a low area cost and can operate at up to 100 MHz, which allows 2D wavelet analysis of an image sequence while maintaining the flexibility of the system to support an adaptive algorithm.
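
    As a software illustration of the same detection principle (thresholding the combined energy of the three detail subbands), the algorithm can be prototyped before committing to hardware. A minimal Python sketch using the PyWavelets package follows; the Haar wavelet and the threshold value are illustrative assumptions, not the paper's parameters.

      import numpy as np
      import pywt

      def dwt_edges(image, wavelet="haar", threshold=0.2):
          """Single-level 2D DWT edge map: combine the horizontal, vertical
          and diagonal detail subbands and threshold the magnitude."""
          cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
          mag = np.sqrt(cH**2 + cV**2 + cD**2)   # detail energy in 3 directions
          mag /= mag.max() + 1e-12               # normalize to [0, 1]
          return mag > threshold                 # edge map at half resolution

      img = np.zeros((128, 128)); img[32:96, 32:96] = 1.0   # toy square
      print(dwt_edges(img).sum(), "edge pixels flagged")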

  15. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    PubMed

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
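
    The heart of the method is that each rank-one (atom, coefficient) pair has a closed-form update. A compact numpy sketch of that block coordinate descent loop for the l0-penalized formulation follows; the dimensions, random initialization and zero-atom fallback are illustrative assumptions rather than the authors' exact implementation.

      import numpy as np

      def soup_dil(Y, n_atoms, lam, n_iters=10, seed=0):
          """Approximate Y ~ sum_j d_j c_j^T with unit-norm atoms d_j and
          sparse coefficients c_j, updating one rank-one pair at a time."""
          rng = np.random.default_rng(seed)
          n, N = Y.shape
          D = rng.standard_normal((n, n_atoms))
          D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
          C = np.zeros((N, n_atoms))
          for _ in range(n_iters):
              for j in range(n_atoms):
                  R = Y - D @ C.T + np.outer(D[:, j], C[:, j])  # residual w/o atom j
                  b = R.T @ D[:, j]
                  C[:, j] = b * (np.abs(b) > lam)               # hard threshold (l0 prox)
                  h = R @ C[:, j]
                  nh = np.linalg.norm(h)
                  D[:, j] = h / nh if nh > 0 else np.eye(n)[:, 0]  # closed-form atom
          return D, C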

  17. Approximation algorithms for scheduling unrelated parallel machines with release dates

    NASA Astrophysics Data System (ADS)

    Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.

    2017-01-01

    In this paper we propose approaches to the optimal scheduling of unrelated parallel machines with release dates. One approach is based on a dynamic programming scheme modified with adaptive narrowing of the search domain to ensure computational effectiveness. We discuss the complexity of exact schedule synthesis and compare it with approximate, close-to-optimal solutions. We also explain how the algorithm works on an example with two unrelated parallel machines and five jobs with release dates. Performance results that show the efficiency of the proposed approach are given.
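
    At the scale of the worked example (two unrelated machines, five jobs), exhaustive search is still feasible and provides a ground truth against which approximate schedules can be checked. A Python sketch follows; the job data and the makespan objective are invented for illustration, since the abstract does not state the optimality criterion.

      from itertools import permutations, product

      release = [0, 1, 2, 3, 5]                          # release date r_j
      ptime = [[3, 5], [4, 2], [2, 6], [6, 3], [1, 4]]   # p[j][m], unrelated machines

      def makespan(order, assign):
          free = [0, 0]                                  # next free time per machine
          for j in order:                                # dispatch jobs in this order
              m = assign[j]
              free[m] = max(free[m], release[j]) + ptime[j][m]
          return max(free)

      best = min(makespan(order, assign)
                 for order in permutations(range(5))
                 for assign in product(range(2), repeat=5))
      print("optimal makespan:", best)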

  18. Negative Differential Resistance and Its Application to Construct Boolean Logic Circuits

    NASA Astrophysics Data System (ADS)

    Nikodem, Maciej; Bawiec, Marek A.; Surmacz, Tomasz R.

    Electronic circuits based on nanodevices and quantum effects are the future of logic circuit design. Today's technology allows the construction of resonant tunneling diodes, quantum cellular automata and nanowires/nanoribbons, which are the elementary components of threshold gates. However, synthesizing a threshold circuit for an arbitrary logic function is still a challenging task for which no efficient algorithms exist. This paper focuses on Generalised Threshold Gates (GTG), giving an overview of threshold circuit synthesis methods and presenting an algorithm that considerably simplifies the task in the case of GTG circuits.
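
    A useful primitive behind any such synthesis method is deciding whether a single threshold gate can realize a given Boolean function at all, which reduces to a linear-programming feasibility test. A hedged Python sketch (this is the standard LP test, not the paper's GTG algorithm):

      import numpy as np
      from itertools import product
      from scipy.optimize import linprog

      def threshold_realizable(f, n):
          """Feasibility of w.x >= t exactly on f's on-set; a margin of 1
          rules out the trivial all-zero solution."""
          A, b = [], []
          for x in product([0, 1], repeat=n):
              x = np.array(x, dtype=float)
              row = np.concatenate([x, [-1.0]])      # variables (w_1..w_n, t)
              if f(tuple(x)):
                  A.append(-row); b.append(-1.0)     # w.x - t >= 1
              else:
                  A.append(row); b.append(-1.0)      # w.x - t <= -1
          res = linprog(np.zeros(n + 1), A_ub=np.array(A), b_ub=np.array(b),
                        bounds=[(None, None)] * (n + 1))
          return res.success

      print(threshold_realizable(lambda x: x[0] and x[1], 2))   # AND -> True
      print(threshold_realizable(lambda x: x[0] != x[1], 2))    # XOR -> False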

  19. Optimal ancilla-free Pauli+V circuits for axial rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blass, Andreas; Bocharov, Alex; Gurevich, Yuri

    We address the problem of optimal representation of single-qubit rotations in a certain unitary basis consisting of the so-called V gates and Pauli matrices. The V matrices were proposed by Lubotzky, Phillips, and Sarnak [Commun. Pure Appl. Math. 40, 401–420 (1987)] as a purely geometric construct in 1987 and recently found applications in quantum computation. They allow for exceptionally simple quantum circuit synthesis algorithms based on quaternionic factorization. We adapt the deterministic-search technique initially proposed by Ross and Selinger to synthesize approximating Pauli+V circuits of optimal depth for single-qubit axial rotations. Our synthesis procedure, based on simple SL₂(ℤ) geometry, is almost elementary.

  20. Robustness of Flexible Systems With Component-Level Uncertainties

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.

    2000-01-01

    Robustness of flexible systems in the presence of model uncertainties at the component level is considered. Specifically, an approach for formulating robustness of flexible systems in the presence of frequency and damping uncertainties at the component level is presented. The synthesis of the components is based on a modification of a controls-based algorithm for component mode synthesis. The formulation deals first with the robustness of synthesized flexible systems. It is then extended to deal with global (non-synthesized) dynamic models with component-level uncertainties by projecting uncertainties from the component level to the system level. A numerical example involving a two-dimensional simulated docking problem is worked out to demonstrate the feasibility of the proposed approach.

  1. Grey Wolf based control for speed ripple reduction at low speed operation of PMSM drives.

    PubMed

    Djerioui, Ali; Houari, Azeddine; Ait-Ahmed, Mourad; Benkhoris, Mohamed-Fouad; Chouder, Aissa; Machmoum, Mohamed

    2018-03-01

    Speed ripple at low-speed, high-torque operation of Permanent Magnet Synchronous Machine (PMSM) drives is one of the major issues to be addressed. The presented work proposes an efficient PMSM speed controller based on the Grey Wolf (GW) algorithm to ensure high-performance control for speed ripple reduction at low-speed operation. The main idea of the proposed control algorithm is to formulate a specific objective function so as to exploit the fast optimization process of the GW optimizer. The role of the GW optimizer is to find the optimal control inputs that satisfy the speed tracking requirements. The synthesis methodology of the proposed control algorithm is detailed, and the feasibility and performance of the proposed speed controller are confirmed by simulation and experimental results. The GW algorithm is a model-free controller and the parameters of its objective function are easy to tune. The GW controller is compared with a PI controller on a real test bench, highlighting the superiority of the former. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
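
    The GW optimizer itself is compact: the pack moves toward its three best members while an exploration coefficient decays linearly over the iterations. A minimal Python sketch of the standard algorithm follows; the quadratic test objective merely stands in for the paper's speed-tracking cost, which is not reproduced here.

      import numpy as np

      def gwo(obj, dim, bounds, n_wolves=20, n_iters=100, seed=0):
          """Grey Wolf Optimizer: candidates follow alpha, beta, delta."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          X = rng.uniform(lo, hi, (n_wolves, dim))
          for it in range(n_iters):
              fit = np.apply_along_axis(obj, 1, X)
              alpha, beta, delta = X[np.argsort(fit)[:3]]
              a = 2.0 * (1 - it / n_iters)             # exploration decays 2 -> 0
              for i in range(n_wolves):
                  step = np.zeros(dim)
                  for leader in (alpha, beta, delta):
                      r1, r2 = rng.random(dim), rng.random(dim)
                      A, C = 2 * a * r1 - a, 2 * r2
                      step += leader - A * np.abs(C * leader - X[i])
                  X[i] = np.clip(step / 3.0, lo, hi)
          return X[np.argmin(np.apply_along_axis(obj, 1, X))]

      best = gwo(lambda w: np.sum((w - 1.5) ** 2), dim=3, bounds=(-5, 5))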

  2. Specialized Computer Systems for Environment Visualization

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

    The need for real-time image generation of landscapes arises in various fields as part of tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware/software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes and axis-aligned bounding box (AABB) intersection tests. The proposed algorithm eliminates branching and is hence better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used for high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture (CUDA) networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed, together with an analysis of how each stage can be realized on a parallel GPU architecture. An experimental prototype of a specialized hardware/software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient, and that it accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without additional optimization procedures. The acceleration averages 11 and 54 times on the test GPUs.
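
    The AABB test mentioned above is the classic "slab" method, which is naturally branch-light and vectorizes well. A small Python/numpy sketch of the standard formulation (illustrative, not the paper's GPU code; rays grazing a face exactly parallel to an axis are treated as misses):

      import numpy as np

      def ray_aabb(origin, direction, box_min, box_max):
          """Slab test: the ray origin + t*direction hits the box iff the
          per-axis parameter intervals share a common overlap with t >= 0."""
          o, d = np.asarray(origin, float), np.asarray(direction, float)
          with np.errstate(divide="ignore", invalid="ignore"):
              inv = 1.0 / d                   # +/- inf on axis-parallel rays
              t1 = (np.asarray(box_min, float) - o) * inv
              t2 = (np.asarray(box_max, float) - o) * inv
          tmin = np.max(np.minimum(t1, t2))   # latest entry across the slabs
          tmax = np.min(np.maximum(t1, t2))   # earliest exit across the slabs
          return tmax >= max(tmin, 0.0)

      print(ray_aabb((0, 0, 0), (1, 1, 1), (1, 1, 1), (2, 2, 2)))   # True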

  3. Adaptive Grid Based Localized Learning for Multidimensional Data

    ERIC Educational Resources Information Center

    Saini, Sheetal

    2012-01-01

    Rapid advances in data-rich domains of science, technology, and business have amplified the computational challenges of "Big Data" synthesis necessary to slow the widening gap between the rate at which the data is being collected and analyzed for knowledge. This has led to the renewed need for efficient and accurate algorithms, framework,…

  4. Combinatorial codon scrambling enables scalable gene synthesis and amplification of repetitive proteins

    NASA Astrophysics Data System (ADS)

    Tang, Nicholas C.; Chilkoti, Ashutosh

    2016-04-01

    Most genes are synthesized using seamless assembly methods that rely on the polymerase chain reaction (PCR). However, PCR of genes encoding repetitive proteins either fails or generates nonspecific products. Motivated by the need to efficiently generate new protein polymers through high-throughput gene synthesis, here we report a codon-scrambling algorithm that enables the PCR-based gene synthesis of repetitive proteins by exploiting the codon redundancy of amino acids and finding the least-repetitive synonymous gene sequence. We also show that the codon-scrambling problem is analogous to the well-known travelling salesman problem, and obtain an exact solution to it by using De Bruijn graphs and a modern mixed integer linear programme solver. As experimental proof of the utility of this approach, we use it to optimize the synthetic genes for 19 repetitive proteins, and show that the gene fragments are amenable to PCR-based gene assembly and recombinant expression.
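
    The underlying idea is easy to prototype: among the synonymous codons for each residue, prefer the one that introduces the fewest already-used DNA k-mers. The Python sketch below is a greedy stand-in for the paper's exact De Bruijn-graph/mixed-integer formulation, with a toy codon table covering only the four amino acids of an elastin-like repeat.

      # toy synonymous-codon table (subset; the real table covers all 20 residues)
      CODONS = {
          "G": ["GGT", "GGC", "GGA", "GGG"],
          "V": ["GTT", "GTC", "GTA", "GTG"],
          "P": ["CCT", "CCC", "CCA", "CCG"],
          "A": ["GCT", "GCC", "GCA", "GCG"],
      }

      def scramble(protein, k=8):
          """Greedy codon choice that avoids reusing k-mers already emitted,
          breaking up the repeats that defeat PCR."""
          seen, dna = set(), ""
          for aa in protein:
              def new_repeats(codon):
                  s = dna + codon          # count seen k-mers ending in the 3 new bases
                  return sum(s[i:i + k] in seen
                             for i in range(max(0, len(s) - 3 - k + 1), len(s) - k + 1))
              dna += min(CODONS[aa], key=new_repeats)
              seen.update(dna[i:i + k]
                          for i in range(max(0, len(dna) - 3 - k + 1), len(dna) - k + 1))
          return dna

      print(scramble("GVGVP" * 4))         # elastin-like repeat unit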

  5. Image processing meta-algorithm development via genetic manipulation of existing algorithm graphs

    NASA Astrophysics Data System (ADS)

    Schalkoff, Robert J.; Shaaban, Khaled M.

    1999-07-01

    Automatic algorithm generation for image processing applications is not a new idea; however, previous work is either restricted to morphological operators or impractical. In this paper, we show recent research results in the development and use of meta-algorithms, i.e. algorithms which lead to new algorithms. Although the concept is generally applicable, the application domain in this work is restricted to image processing. The meta-algorithm concept described in this paper is based upon our work in dynamic algorithms. The paper first presents the concept of dynamic algorithms which, on the basis of training and archived algorithmic experience embedded in an algorithm graph (AG), dynamically adjust the sequence of operations applied to the input image data. Each node in the tree-based representation of a dynamic algorithm with out-degree greater than 2 is a decision node. At these nodes, the algorithm examines the input data and determines which path will most likely achieve the desired results. This is currently done using nearest-neighbor classification. The details of this implementation are shown. The constrained perturbation of existing algorithm graphs, coupled with a suitable search strategy, is one mechanism to achieve meta-algorithms and offers rich potential for the discovery of new algorithms. In our work, a meta-algorithm autonomously generates new dynamic algorithm graphs via genetic recombination of existing algorithm graphs. The AG representation is well suited to this genetic-like perturbation, using a commonly-employed technique in artificial neural network synthesis, namely the blueprint representation of graphs. A number of examples are given. One of the principal limitations of our current approach is the need for significant human input in the learning phase. Efforts to overcome this limitation are discussed. Future research directions are indicated.

  6. Synthesis of Multispectral Bands from Hyperspectral Data: Validation Based on Images Acquired by AVIRIS, Hyperion, ALI, and ETM+

    NASA Technical Reports Server (NTRS)

    Blonksi, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki

    2001-01-01

    Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design on providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments. The produced datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account the spectral responses of both the synthesized, broad, multispectral bands and the utilized, narrow, hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and the EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data for the presented simulations.
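
    In essence, each broad synthetic band is a response-weighted integral over the narrow hyperspectral bands. A self-contained numpy sketch follows; the random stand-in cube, the boxcar response and the band edges are all illustrative assumptions (real instruments have smooth, measured response curves).

      import numpy as np

      def synthesize_band(cube, wavelengths, response):
          """b = sum_i R(l_i) x_i / sum_i R(l_i): weight each narrow band by
          the broad band's relative spectral response and integrate."""
          w = np.array([response(l) for l in wavelengths])
          return np.tensordot(cube, w, axes=([2], [0])) / w.sum()

      rng = np.random.default_rng(0)
      wavelengths = np.linspace(400, 2500, 220)         # nm, AVIRIS-like grid
      cube = rng.random((64, 64, wavelengths.size))     # stand-in radiance cube
      red = synthesize_band(cube, wavelengths,
                            lambda l: 1.0 if 630 <= l <= 690 else 0.0)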

  7. Multi-exposure high dynamic range image synthesis with camera shake correction

    NASA Astrophysics Data System (ADS)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, e.g. for crack inspection, the algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake will result in a ghosting effect, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene. These assumptions limit their applicability. At present, widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time-consuming. In order to rapidly obtain a high-quality HDR image without ghosting, we propose an efficient low dynamic range (LDR) image capture approach and a registration method based on Oriented FAST and Rotated BRIEF (ORB) and histogram equalization, which eliminates the illumination differences between the LDR images. The fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without ghosting by registering and fusing four multi-exposure images.
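
    A comparable pipeline can be assembled from standard OpenCV building blocks. The sketch below registers every frame to the first on histogram-equalized grayscale and then fuses; it assumes BGR uint8 inputs, uses Mertens exposure fusion as a stand-in for the paper's fusion step, and its ORB/RANSAC parameters are illustrative.

      import cv2
      import numpy as np

      def align_and_fuse(images):
          """ORB + RANSAC homography registration (on equalized grayscale,
          so features survive exposure changes), then exposure fusion."""
          ref = images[0]
          orb = cv2.ORB_create(2000)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          prep = lambda im: cv2.equalizeHist(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY))
          k_ref, d_ref = orb.detectAndCompute(prep(ref), None)
          aligned = [ref]
          for img in images[1:]:
              k, d = orb.detectAndCompute(prep(img), None)
              matches = sorted(matcher.match(d, d_ref), key=lambda m: m.distance)[:200]
              src = np.float32([k[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
              dst = np.float32([k_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
              H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
              aligned.append(cv2.warpPerspective(img, H, ref.shape[1::-1]))
          fused = cv2.createMergeMertens().process(aligned)   # float result ~[0, 1]
          return np.clip(fused * 255, 0, 255).astype(np.uint8)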

  8. An integrated science-based methodology to assess potential ...

    EPA Pesticide Factsheets

    There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology to assess the potential risks of engineered nanomaterials. To achieve the study objective, two major tasks are accomplished: knowledge synthesis and an algorithmic computational methodology. The knowledge synthesis task is designed to capture “what is known” and to outline the gaps in knowledge from an ENMs risk perspective. The algorithmic computational methodology is geared toward the provision of decisions and an understanding of the risks of ENMs along different endpoints for the constituents of the SEE complex adaptive system. The approach presented herein allows for addressing the formidable task of assessing the implications and risks of exposure to ENMs, with the long-term goal of building a decision-support system to guide key stakeholders in the SEE system toward building sustainable ENMs and nano-enabled products. The following specific aims are formulated to achieve the study objective: (1) to propose a system of systems (SoS) architecture that builds a network management among the different entities in the large SEE system to track the flow of ENMs emission, fate and transport from the source to the receptor; (2) to establish a staged approach for knowledge synthesis methodology.

  9. Algorithms for synthesizing management solutions based on OLAP-technologies

    NASA Astrophysics Data System (ADS)

    Pishchukhin, A. M.; Akhmedyanova, G. F.

    2018-05-01

    OLAP technologies are a convenient means of analyzing large amounts of information. In this work, an attempt was made to improve the synthesis of optimal management decisions. The developed algorithms allow forecasting of needs and of the adopted management decisions for the main types of enterprise resources. Their advantage is efficiency, based on the simplicity of quadratic functions and differential equations of first order only. At the same time, an optimal redistribution of resources between the different types of products in the enterprise's assortment is carried out, together with the optimal allocation of the allotted resources over time. The proposed solutions can be placed on additional, specially introduced coordinates of the hypercube representing the data warehouse.

  10. A new stellar spectrum interpolation algorithm and its application to Yunnan-III evolutionary population synthesis models

    NASA Astrophysics Data System (ADS)

    Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang

    2018-05-01

    In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
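
    For comparison, generic RBF interpolation over an irregular stellar-parameter grid is available off the shelf in SciPy. A hedged sketch with random stand-in spectra follows; the kernel, neighbor count and parameter scaling are illustrative choices, not those of the MATLAB code described above.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(1)
      # stand-in library: (Teff, log g, [Fe/H]) -> 500-pixel flux vector
      params = rng.uniform([3500, 0.5, -2.0], [7500, 5.0, 0.5], size=(300, 3))
      fluxes = rng.random((300, 500))

      scale = params.std(axis=0)           # put the axes on comparable footing
      interp = RBFInterpolator(params / scale, fluxes,
                               neighbors=50, kernel="thin_plate_spline")

      star = np.array([[5777.0, 4.44, 0.0]])      # Sun-like parameters
      spectrum = interp(star / scale)             # interpolated 500-pixel spectrum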

  11. Toward Evolvable Hardware Chips: Experiments with a Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    1998-01-01

    Evolvable Hardware is reconfigurable hardware that self-configures under the control of an evolutionary algorithm. The search for a hardware configuration can be performed using software models or, faster and more accurately, directly in reconfigurable hardware. Several experiments have demonstrated the possibility of automatically synthesizing both digital and analog circuits. The paper introduces an approach to the automated synthesis of CMOS circuits, based on evolution on a Programmable Transistor Array (PTA). The approach is illustrated with a software experiment showing evolutionary synthesis of a circuit with a desired DC characteristic. A hardware implementation of a test PTA chip is then described, and the same evolutionary experiment is performed on the chip, demonstrating circuit synthesis/self-configuration directly in hardware.

  12. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  13. Plasma spectroscopy analysis technique based on optimization algorithms and spectral synthesis for arc-welding quality assurance.

    PubMed

    Mirapeix, J; Cobo, A; González, D A; López-Higuera, J M

    2007-02-19

    A new plasma spectroscopy analysis technique based on the generation of synthetic spectra by means of optimization processes is presented in this paper. The technique has been developed for its application in arc-welding quality assurance. The new approach has been checked through several experimental tests, yielding results in reasonably good agreement with the ones offered by the traditional spectroscopic analysis technique.
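
    The core loop (generate a synthetic spectrum from candidate parameters, compare it with the measurement, and let an optimizer close the gap) can be shown with a toy fit. The Python sketch below uses invented Gaussian line profiles and SciPy's curve_fit; it is a schematic of the approach, not the authors' plasma model or optimizer.

      import numpy as np
      from scipy.optimize import curve_fit

      def synthetic(wl, amp1, amp2, temp, base):
          """Toy two-line emission spectrum; widths grow with sqrt(temp)."""
          w = 0.05 * np.sqrt(temp)
          return (base + amp1 * np.exp(-(wl - 510.0) ** 2 / (2 * w ** 2))
                       + amp2 * np.exp(-(wl - 515.0) ** 2 / (2 * w ** 2)))

      wl = np.linspace(505, 520, 400)
      rng = np.random.default_rng(2)
      observed = synthetic(wl, 1.0, 0.6, 4.0, 0.1) \
                 + 0.02 * rng.standard_normal(wl.size)

      # optimization drives the synthetic spectrum toward the measured one
      popt, _ = curve_fit(synthetic, wl, observed, p0=[0.5, 0.5, 2.0, 0.0],
                          bounds=([0, 0, 0.1, -1.0], [5.0, 5.0, 50.0, 1.0]))
      print("fitted parameters:", popt)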

  14. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.

  15. An Algorithm for Integrated Subsystem Embodiment and System Synthesis

    NASA Technical Reports Server (NTRS)

    Lewis, Kemper

    1997-01-01

    Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation do not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.

  16. Pilot-optimal augmentation synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1978-01-01

    An augmentation synthesis method usable in the absence of quantitative handling qualities specifications, and yet explicitly including design objectives based on pilot-rating concepts, is presented. The algorithm involves the unique approach of simultaneously solving for the stability augmentation system (SAS) gains, pilot equalization and pilot rating prediction via optimal control techniques. Simultaneous solution is required in this case since the pilot model (gains, etc.) depends upon the augmented plant dynamics, and the augmentation is obviously not known a priori. Another special feature is the use of the pilot's objective function (from which the pilot model evolves) to design the SAS.
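
    The optimal-control machinery underneath such a synthesis is the familiar LQR solve. As a hedged illustration of that single ingredient (not the simultaneous pilot/SAS iteration described above), with invented aircraft-like numbers:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def lqr_gain(A, B, Q, R):
          """Continuous-time LQR: K = R^-1 B^T P from the Riccati solution."""
          P = solve_continuous_are(A, B, Q, R)
          return np.linalg.solve(R, B.T @ P)

      # toy 2-state short-period-like model (illustrative numbers only)
      A = np.array([[-0.5, 1.0], [-2.0, -0.8]])
      B = np.array([[0.0], [1.5]])
      K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
      print("SAS gains:", K)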

  17. Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design

    NASA Technical Reports Server (NTRS)

    Whorton, Mark; Buschek, Harald; Calise, Anthony J.

    1996-01-01

    Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.

  18. Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis

    NASA Technical Reports Server (NTRS)

    Whorton, M.; Buschek, H.; Calise, A. J.

    1994-01-01

    A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.

  19. Age synthesis and estimation via faces: a survey.

    PubMed

    Fu, Yun; Guo, Guodong; Huang, Thomas S

    2010-11-01

    Human age, as an important personal trait, can be directly inferred from distinct patterns emerging in the facial appearance. Driven by rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined as rerendering a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined as labeling a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted over the last few decades. In this paper, we survey the complete state-of-the-art techniques in face image-based age synthesis and estimation. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.

  20. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  1. VLSI implementation of RSA encryption system using ancient Indian Vedic mathematics

    NASA Astrophysics Data System (ADS)

    Thapliyal, Himanshu; Srinivas, M. B.

    2005-06-01

    This paper proposes the hardware implementation of RSA encryption/decryption algorithm using the algorithms of Ancient Indian Vedic Mathematics that have been modified to improve performance. The recently proposed hierarchical overlay multiplier architecture is used in the RSA circuitry for multiplication operation. The most significant aspect of the paper is the development of a division architecture based on Straight Division algorithm of Ancient Indian Vedic Mathematics and embedding it in RSA encryption/decryption circuitry for improved efficiency. The coding is done in Verilog HDL and the FPGA synthesis is done using Xilinx Spartan library. The results show that RSA circuitry implemented using Vedic division and multiplication is efficient in terms of area/speed compared to its implementation using conventional multiplication and division architectures.

  2. DSP Synthesis Algorithm for Generating Florida Scrub Jay Calls

    NASA Technical Reports Server (NTRS)

    Lane, John; Pittman, Tyler

    2017-01-01

    A prototype digital signal processing (DSP) algorithm has been developed to approximate Florida scrub jay calls. The Florida scrub jay (Aphelocoma coerulescens), believed to have been in existence for 2 million years and living only in Florida, has a complicated social system that is evident in the spectrograms of its calls. Audio data were acquired at the Helen and Allan Cruickshank Sanctuary, Rockledge, Florida during the 2016 mating season using three digital recorders sampling at 44.1 kHz. The synthesis algorithm is a first step toward developing a robust identification and call analysis algorithm. Since the Florida scrub jay is severely threatened by loss of habitat, it is important to develop effective methods to monitor its threatened population using autonomous means.
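
    A minimal version of such a synthesizer is a harmonic chirp under a noisy envelope, written at the recorders' 44.1 kHz rate. In the Python sketch below the frequency contour and harmonic weights are invented placeholders, not parameters fitted to scrub jay recordings.

      import numpy as np
      from scipy.io import wavfile

      fs, dur = 44100, 0.35
      t = np.linspace(0, dur, int(fs * dur), endpoint=False)

      f0 = 4500 - 3000 * (t / dur)                # Hz, downward sweep (illustrative)
      phase = 2 * np.pi * np.cumsum(f0) / fs      # integrate frequency to phase
      call = sum((0.5 ** k) * np.sin((k + 1) * phase) for k in range(3))
      call *= np.hanning(t.size)                  # smooth onset and offset
      call += 0.05 * np.random.default_rng(3).standard_normal(t.size)

      wavfile.write("scrub_jay_like.wav", fs,
                    (call / np.abs(call).max() * 32767).astype(np.int16))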

  3. Deductive Synthesis of the Unification Algorithm,

    DTIC Science & Technology

    1981-06-01

    [OCR residue from the scanned DTIC report; the recoverable front matter reads: "Deductive Synthesis of the Unification Algorithm", Zohar Manna and Richard Waldinger, Computer Science Department / Artificial Intelligence Center, with references including Boyer, R. S. and J. S. Moore (Jan. 1975), "Proving theorems about LISP functions", Artificial Intelligence Journal, Vol. 9, No. 1, pp. 1-35, and Green, C. C. (May 1969), "Application of theorem proving to problem solving".]

  4. Analysis and synthesis of abstract data types through generalization from examples

    NASA Technical Reports Server (NTRS)

    Wild, Christian

    1987-01-01

    The discovery of general patterns of behavior from a set of input/output examples can be a useful technique in the automated analysis and synthesis of software systems. These generalized descriptions of the behavior form a set of assertions which can be used for validation, program synthesis, program testing and run-time monitoring. Describing the behavior is cast as a learning process in which general patterns can be easily characterized. The learning algorithm must choose a transform function and define a subset of the transform space which is related to equivalence classes of behavior in the original domain. An algorithm for analyzing the behavior of abstract data types is presented and several examples are given. The use of the analysis for purposes of program synthesis is also discussed.

  5. Synthesis of Conformal Phased Antenna Arrays With A Novel Multiobjective Invasive Weed Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Wen Tao; Hei, Yong Qiang; Shi, Xiao Wei

    2018-04-01

    By virtue of their excellent aerodynamic performance, conformal phased arrays have been attracting considerable attention. However, for the synthesis of low/ultra-low sidelobe patterns with conventional conformal arrays, the resulting dynamic range ratios of the amplitude excitations can be quite high, which imposes stringent requirements on various error tolerances in practical implementation. A time-modulated array (TMA) has the advantages of low sidelobes and a reduced dynamic range ratio requirement on the amplitude excitations. This paper takes full advantage of both conformal antenna arrays and time-modulated arrays. The active-element pattern, including element mutual coupling and platform effects, is employed in the whole design process. To optimize the pulse durations and the switch-on instants of the time-modulated elements, a multiobjective invasive weed optimization (MOIWO) algorithm based on nondominated sorting of the solutions is proposed. An S-band 8-element cylindrical conformal array is designed, and an S-band 16-element cylindrical-parabolic conformal array is constructed and tested at two different steering angles.

  6. Combined Optimal Control System for excavator electric drive

    NASA Astrophysics Data System (ADS)

    Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.

    2018-03-01

    The article presents a synthesis of combined optimal control algorithms for the AC drive of the excavator rotation mechanism. The synthesis consists of regulating the external coordinates based on the theory of optimal systems and correcting the internal coordinates of the electric drive using the "technical optimum" method. The research shows the advantage of optimal combined control systems for the electric rotary drive over classical systems of subordinate regulation. The paper presents a method for selecting the optimality criterion coefficients so as to find the intersection of the ranges of permissible values of the coordinates of the control object. Tuning the system by choosing the optimality criterion coefficients makes it possible to select the required characteristics of the drive: the dynamic moment (M) and the transient time (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and, at the same time, the transient time (tpp).

  7. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to traditional gradient-based methods, evolutionary algorithms (EAs) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EAs search from multiple points instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling to any evaluation code. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations (for example, computational fluid dynamics runs) often consume the most CPU time. Application of EAs to multiobjective design problems is also straightforward because EAs maintain a population of design candidates in parallel. Because of these advantages, EAs are a unique and attractive approach to real-world design optimization problems.

  8. Model-based guiding pattern synthesis for robust assembly of contact layers using directed self-assembly

    NASA Astrophysics Data System (ADS)

    Mitra, Joydeep; Torres, Andres; Ma, Yuansheng; Pan, David Z.

    2018-01-01

    Directed self-assembly (DSA) has emerged as one of the most compelling next-generation patterning techniques for sub-7 nm via or contact layers. A key issue in enabling DSA as a mainstream patterning technique is the generation of grapho-epitaxy-based guiding pattern (GP) shapes that assemble the contact patterns on target with high fidelity and resolution. Current GP generation is mostly empirical and limited to a very small number of via configurations. We propose the first model-based GP synthesis algorithm and methodology for on-target and robust DSA on general via pattern configurations. The final post-optical-proximity-correction printed GPs derived from our originally synthesized GPs are resilient to process variations and continue to maintain the same DSA fidelity in terms of placement error and target shape.

  9. DNA-COMPACT: DNA COMpression Based on a Pattern-Aware Contextual Modeling Technique

    PubMed Central

    Li, Pinghao; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-01-01

    Genome data are becoming increasingly important for modern medicine. As the rate of increase in DNA sequencing outstrips the rate of increase in disk storage capacity, the storage and transfer of large genome data are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm, which highlights the synthesis of complementary contextual models, to improve compression performance. The proposed framework can handle genome compression with and without reference sequences, and demonstrated performance advantages over the best existing algorithms. The method for reference-free compression led to bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, which were approximately 3.7% and 2.6% better than the state-of-the-art algorithms. Regarding performance with reference, we tested on the first Korean personal genome sequence data set, and our proposed method demonstrated a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a decompression cost comparable to existing algorithms. DNA-COMPACT is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes. PMID:24282536
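
    The payoff of contextual modeling is easy to see on toy data: conditioning each base on its preceding k bases drives the achievable rate well below the naive 2 bits per base. A hedged Python sketch, using an empirical entropy estimate as a stand-in for an actual arithmetic coder (and nothing like the paper's full two-pass, pattern-aware scheme):

      import math
      from collections import defaultdict

      def context_bits_per_base(seq, k=2):
          """Bits per base an order-k context-driven coder could approach."""
          counts = defaultdict(lambda: defaultdict(int))
          for i in range(k, len(seq)):
              counts[seq[i - k:i]][seq[i]] += 1      # context -> next-base counts
          bits = 0.0
          for dist in counts.values():
              total = sum(dist.values())
              bits -= sum(c * math.log2(c / total) for c in dist.values())
          return bits / (len(seq) - k)

      print(context_bits_per_base("ACGTACGTGGGTACCA" * 100, k=2))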

  10. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
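
    For orientation, one serial KMC step follows the standard Gillespie recipe, while the synchronous parallel variant executes such steps concurrently in spatial subdomains under a shared time window. A minimal Python sketch of the serial step (the rates are invented):

      import math
      import random

      def kmc_step(rates, t, rng=None):
          """Pick event i with probability r_i / R, then advance the clock
          by an exponential waiting time dt = -ln(u) / R."""
          rng = rng or random.Random(0)
          R = sum(rates)
          x, acc = rng.random() * R, 0.0
          for i, r in enumerate(rates):
              acc += r
              if x < acc:
                  break
          return i, t - math.log(rng.random()) / R

      event, t = kmc_step([0.1, 2.0, 0.5], t=0.0)
      print("fired event", event, "at time", t)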

  11. Synthesis, Analysis, and Processing of Fractal Signals

    DTIC Science & Technology

    1991-10-01

    [OCR residue from the scanned DTIC report; the recoverable fragments indicate that the report develops wavelet-based representations for 1/f fractal signals that play the role Fourier-based representations play for stationary and periodic signals, and notes that, much as the discovery of fast Fourier transform (FFT) algorithms dramatically increased the viability of Fourier methods, fast wavelet transforms make these representations practical.]

  12. Hybrid CMS methods with model reduction for assembly of structures

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1991-01-01

    Future on-orbit structures will be designed and built in several stages, each with specific control requirements. Therefore there must be a methodology which can predict the dynamic characteristics of the assembled structure, based on the dynamic characteristics of the subassemblies and their interfaces. The methodology developed by CSC to address this issue is Hybrid Component Mode Synthesis (HCMS). HCMS distinguishes itself from standard component mode synthesis algorithms in the following features: (1) it does not require the subcomponents to have displacement compatible models, which makes it ideal for analyzing the deployment of heterogeneous flexible multibody systems, (2) it incorporates a second-level model reduction scheme at the interface, which makes it much faster than other algorithms and therefore suitable for control purposes, and (3) it does answer specific questions such as 'how does the global fundamental frequency vary if I change the physical parameters of substructure k by a specified amount?'. Because it is based on an energy principle rather than displacement compatibility, this methodology can also help the designer to define an assembly process. Current and future efforts are devoted to applying the HCMS method to design and analyze docking and berthing procedures in orbital construction.

  13. OPTICON: Pro-Matlab software for large order controlled structure design

    NASA Technical Reports Server (NTRS)

    Peterson, Lee D.

    1989-01-01

    A software package for large order controlled structure design is described and demonstrated. The primary program, called OPTICON, uses both Pro-Matlab M-file routines and selected compiled FORTRAN routines linked into the Pro-Matlab structure. The program accepts structural model information in the form of state-space matrices and performs three basic design functions on the model: (1) open loop analyses; (2) closed loop reduced order controller synthesis; and (3) closed loop stability and performance assessment. The controller synthesis methods currently implemented in this software are based on the Generalized Linear Quadratic Gaussian theory of Bernstein. In particular, a reduced order Optimal Projection synthesis algorithm based on a homotopy solution method was successfully applied to an experimental truss structure using a 58-state dynamic model. These results are presented and discussed. Current plans to expand the practical size of the design model to several hundred states and the intention to interface Pro-Matlab to a supercomputing environment are discussed.

  14. Synthesis of most polyene natural product motifs using just twelve building blocks and one coupling reaction

    PubMed Central

    Woerly, Eric M.; Roy, Jahnabi; Burke, Martin D.

    2014-01-01

    The inherent modularity of polypeptides, oligonucleotides, and oligosaccharides has been harnessed to achieve generalized building block-based synthesis platforms. Importantly, like these other targets, most small molecule natural products are biosynthesized via iterative coupling of bifunctional building blocks. This suggests that many small molecules also possess inherent modularity commensurate with systematic building block-based construction. Supporting this hypothesis, here we report that the polyene motifs found in >75% of all known polyene natural products can be synthesized using just 12 building blocks and one coupling reaction. Using the same general retrosynthetic algorithm and reaction conditions, this platform enabled the synthesis of a wide range of polyene frameworks covering all of this natural product chemical space, and the first total syntheses of the polyene natural products asnipyrone B, physarigin A, and neurosporaxanthin β-D-glucopyranoside. Collectively, these results suggest the potential for a more generalized approach for making small molecules in the laboratory. PMID:24848233

  15. Batch Mode Reinforcement Learning based on the Synthesis of Artificial Trajectories

    PubMed Central

    Fonteneau, Raphael; Murphy, Susan A.; Wehenkel, Louis; Ernst, Damien

    2013-01-01

    In this paper, we consider the batch mode reinforcement learning setting, where the central problem is to learn from a sample of trajectories a policy that satisfies or optimizes a performance criterion. We focus on the continuous state space case for which usual resolution schemes rely on function approximators either to represent the underlying control problem or to represent its value function. As an alternative to the use of function approximators, we rely on the synthesis of “artificial trajectories” from the given sample of trajectories, and show that this idea opens new avenues for designing and analyzing algorithms for batch mode reinforcement learning. PMID:24049244

  16. Using MaxCompiler for the high level synthesis of trigger algorithms

    NASA Astrophysics Data System (ADS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java-based tool for developing FPGA applications, which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  17. Synthesis of recurrent neural networks for dynamical system simulation.

    PubMed

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
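
    To make the recast step concrete, here is a toy version of the pipeline under stated assumptions: a scikit-learn MLP stands in for the paper's network and training procedure, the vector field is a harmonic oscillator, and forward Euler stands in for whatever integrator one prefers.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Vector field of a harmonic oscillator: dx/dt = [x2, -x1].
def f(x):
    return np.stack([x[:, 1], -x[:, 0]], axis=1)

# 1) Train a feedforward net on (state, vector-field) pairs.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5000, 2))
net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   max_iter=2000, random_state=0).fit(X, f(X))

# 2) Recast as a continuous-time recurrent system: dx/dt = net(x),
#    integrated here with forward Euler.
x, dt = np.array([[1.0, 0.0]]), 0.01
traj = [x.copy()]
for _ in range(2000):
    x = x + dt * net.predict(x)
    traj.append(x.copy())
traj = np.concatenate(traj)
# The true system conserves x1^2 + x2^2 = 1; check the drift.
print("radius drift:", abs(np.hypot(*traj[-1]) - 1.0))
```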

  18. A Comparative Study of Optimization Algorithms for Engineering Synthesis.

    DTIC Science & Technology

    1983-03-01

    the ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. …algorithm to suit a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first…

  19. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    PubMed Central

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-01-01

    Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC achieves consistent performance in different image regions. In addition, the Probability Density Function (PDF), estimated by Kernel Density Estimation (KDE) with a Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to handle only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while its computational efficiency remains competitive. PMID:28604641

  20. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    PubMed

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in urban environments' monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC achieves consistent performance in different image regions. In addition, the Probability Density Function (PDF), estimated by Kernel Density Estimation (KDE) with a Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to handle only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while its computational efficiency remains competitive.

  1. The infection algorithm: an artificial epidemic approach for dense stereo correspondence.

    PubMed

    Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne

    2006-01-01

    We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.

  2. Evidence for an inhibitory-control theory of the reasoning brain.

    PubMed

    Houdé, Olivier; Borst, Grégoire

    2015-01-01

    In this article, we first describe our general inhibitory-control theory and, then, we describe how we have tested its specific hypotheses on reasoning with brain imaging techniques in adults and children. The innovative part of this perspective lies in its attempt to come up with a brain-based synthesis of Jean Piaget's theory on logical algorithms and Daniel Kahneman's theory on intuitive heuristics.

  3. Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system

    NASA Astrophysics Data System (ADS)

    Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo

    2010-02-01

    A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis method using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically, but also can employ three reference views, such as left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element by using stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh is classified into foreground and background groups by disparity values and then affine transformed. Through experiments, we confirm that the proposed method synthesizes a high-quality image and is suitable for 3-D video systems.

  4. Aircraft noise synthesis system

    NASA Technical Reports Server (NTRS)

    Mccurdy, David A.; Grandle, Robert E.

    1987-01-01

    A second-generation Aircraft Noise Synthesis System has been developed to provide test stimuli for studies of community annoyance to aircraft flyover noise. The computer-based system generates realistic, time-varying, audio simulations of aircraft flyover noise at a specified observer location on the ground. The synthesis takes into account the time-varying aircraft position relative to the observer; specified reference spectra consisting of broadband, narrowband, and pure-tone components; directivity patterns; Doppler shift; atmospheric effects; and ground effects. These parameters can be specified and controlled in such a way as to generate stimuli in which certain noise characteristics, such as duration or tonal content, are independently varied, while the remaining characteristics, such as broadband content, are held constant. The system can also generate simulations of the predicted noise characteristics of future aircraft. A description of the synthesis system and a discussion of the algorithms and methods used to generate the simulations are provided. An appendix describing the input data and providing user instructions is also included.

  5. Analysis and synthesis of abstract data types through generalization from examples

    NASA Technical Reports Server (NTRS)

    Wild, Christian

    1987-01-01

    The discovery of general patterns of behavior from a set of input/output examples can be a useful technique in the automated analysis and synthesis of software systems. These generalized descriptions of the behavior form a set of assertions which can be used for validation, program synthesis, program testing, and run-time monitoring. Describing the behavior is characterized as a learning process in which the set of inputs is mapped into an appropriate transform space such that general patterns can be easily characterized. The learning algorithm must choose a transform function and define a subset of the transform space which is related to equivalence classes of behavior in the original domain. An algorithm for analyzing the behavior of abstract data types is presented and several examples are given. The use of the analysis for purposes of program synthesis is also discussed.

  6. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.

  7. Real-time generation of infrared ocean scene based on GPU

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu

    2007-12-01

    Infrared (IR) image synthesis for ocean scenes has become more and more important nowadays, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few possible ways of water interacting with the environment, and the detailed calculation of ocean temperature has rarely been considered by previous investigators. With the advance of programmable features of graphics cards, many algorithms previously limited to offline processing have become feasible for real-time usage. In this paper, we propose an efficient algorithm for real-time rendering of infrared ocean scenes using the newest features of programmable graphics processors (GPUs). It differs from previous works in three aspects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal-balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally, some resulting infrared images are shown, which are in good accordance with real images.

  8. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained; during this process, SAD is chosen as the similarity measure function. The reference image is then layered and the parallax is calculated based on the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as the high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can achieve the synthesis of virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the method achieves satisfactory image quality: the average SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR value reaches 38.353, and the image histogram similarity reaches 93.77%.

  9. Genome Calligrapher: A Web Tool for Refactoring Bacterial Genome Sequences for de Novo DNA Synthesis.

    PubMed

    Christen, Matthias; Deutsch, Samuel; Christen, Beat

    2015-08-21

    Recent advances in synthetic biology have resulted in an increasing demand for the de novo synthesis of large-scale DNA constructs. Any process improvement that enables fast and cost-effective streamlining of digitized genetic information into fabricable DNA sequences holds great promise to study, mine, and engineer genomes. Here, we present Genome Calligrapher, a computer-aided design web tool intended for whole genome refactoring of bacterial chromosomes for de novo DNA synthesis. By applying a neutral recoding algorithm, Genome Calligrapher optimizes GC content and removes obstructive DNA features known to interfere with the synthesis of double-stranded DNA and the higher order assembly into large DNA constructs. Subsequent bioinformatics analysis revealed that synthesis constraints are prevalent among bacterial genomes. However, a low level of codon replacement is sufficient for refactoring bacterial genomes into easy-to-synthesize DNA sequences. To test the algorithm, 168 kb of synthetic DNA comprising approximately 20 percent of the synthetic essential genome of the cell-cycle bacterium Caulobacter crescentus was streamlined and then ordered from a commercial supplier of low-cost de novo DNA synthesis. The successful assembly into eight 20 kb segments indicates that the Genome Calligrapher algorithm can be efficiently used to refactor difficult-to-synthesize DNA. Genome Calligrapher is broadly applicable to recode biosynthetic pathways, DNA sequences, and whole bacterial genomes, thus offering new opportunities to use synthetic biology tools to explore the functionality of microbial diversity. The Genome Calligrapher web tool can be accessed at https://christenlab.ethz.ch/GenomeCalligrapher.
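
    As a rough illustration of what a "neutral recoding" pass does, the sketch below swaps codons for synonymous, lower-GC alternatives inside GC-rich windows; the window size, GC threshold, and three-codon synonym table are invented for the example and are not Genome Calligrapher's actual rules.

```python
# Toy neutral recoding: replace codons by synonymous ones with fewer
# G/C bases whenever a local window exceeds a GC threshold.
SYNONYMS = {  # illustrative subset of the standard genetic code
    "GCC": ["GCA", "GCT", "GCG", "GCC"],   # Ala
    "CCG": ["CCA", "CCT", "CCC", "CCG"],   # Pro
    "GGC": ["GGA", "GGT", "GGG", "GGC"],   # Gly
}

def gc(seq):
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def recode(seq, window=30, target_gc=0.55):
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    for i, codon in enumerate(codons):
        ctx = "".join(codons[max(0, i - window // 3): i + window // 3])
        if gc(ctx) > target_gc and codon in SYNONYMS:
            # Pick the synonymous codon with the lowest GC content;
            # the protein sequence is unchanged (a "neutral" edit).
            codons[i] = min(SYNONYMS[codon], key=gc)
    return "".join(codons)

dna = "GCCCCGGGC" * 10   # artificially GC-rich coding stretch
print(f"GC before: {gc(dna):.2f}, after: {gc(recode(dna)):.2f}")
```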

  10. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems.

    PubMed

    D'Onofrio, David J; Abel, David L; Johnson, Donald E

    2012-03-14

    The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

  11. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
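
    A small sketch of the central trick, synthesizing a smooth local threshold surface by attenuating the wavelet detail coefficients, is shown below using PyWavelets; the wavelet family, decomposition level, attenuation factor, and threshold offset are all illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_threshold_surface(image, wavelet="db2", level=3, atten=0.1):
    """Local threshold surface via wavelet synthesis with attenuated
    detail coefficients (a high-frequency-reduced signal)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Keep the approximation, scale down every detail subband.
    coeffs = [coeffs[0]] + [tuple(atten * d for d in detail)
                            for detail in coeffs[1:]]
    surface = pywt.waverec2(coeffs, wavelet)
    return surface[:image.shape[0], :image.shape[1]]

# Synthetic example: a bright blob on a sloping background, where a
# single global threshold would fail.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
image = 0.5 * x + rng.normal(0, 0.02, (128, 128))
image[40:50, 40:50] += 0.3   # object brighter than local background
mask = image > wavelet_threshold_surface(image) + 0.1  # assumed offset
print("segmented pixels:", int(mask.sum()))
```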

  12. Real-time free-viewpoint DIBR for large-size 3DLED

    NASA Astrophysics Data System (ADS)

    Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru

    2017-10-01

    Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology that is most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of LED screens, which degrades rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. To achieve real-time performance, the whole algorithm is implemented on the GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video with 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.

  13. Data-Adaptable Modeling and Optimization for Runtime Adaptable Systems

    DTIC Science & Technology

    2016-06-08

    …execution scenarios e. Enables model-guided optimization algorithms that outperform state-of-the-art f. Understands the overhead of system…the Data-Adaptable System Model (DASM), which facilitates design by enabling the designer to: 1) specify both an application's task flow as well as…systems. The MILAN [3] framework specializes in the design, simulation, and synthesis of System on Chip (SoC) applications using model-based techniques

  14. Evidence for an inhibitory-control theory of the reasoning brain

    PubMed Central

    Houdé, Olivier; Borst, Grégoire

    2015-01-01

    In this article, we first describe our general inhibitory-control theory and, then, we describe how we have tested its specific hypotheses on reasoning with brain imaging techniques in adults and children. The innovative part of this perspective lies in its attempt to come up with a brain-based synthesis of Jean Piaget’s theory on logical algorithms and Daniel Kahneman’s theory on intuitive heuristics. PMID:25852528

  15. Research on Quantum Algorithms at the Institute for Quantum Information and Matter

    DTIC Science & Technology

    2016-05-29

    …local quantum computation with applications to position-based cryptography, New Journal of Physics (09 2011). doi: 10.1088/1367-2630/13/9/093036…cryptography, such as the ability to turn private-key encryption into public-key encryption. While ad hoc obfuscators exist, theoretical progress has mainly…to device-independent quantum cryptography, to quantifying entanglement, and to the classification of quantum phases of matter. Exact synthesis…

  16. Synthesis of most polyene natural product motifs using just 12 building blocks and one coupling reaction.

    PubMed

    Woerly, Eric M; Roy, Jahnabi; Burke, Martin D

    2014-06-01

    The inherent modularity of polypeptides, oligonucleotides and oligosaccharides has been harnessed to achieve generalized synthesis platforms. Importantly, like these other targets, most small-molecule natural products are biosynthesized via iterative coupling of bifunctional building blocks. This suggests that many small molecules also possess inherent modularity commensurate with systematic building block-based construction. Supporting this hypothesis, here we report that the polyene motifs found in >75% of all known polyene natural products can be synthesized using just 12 building blocks and one coupling reaction. Using the same general retrosynthetic algorithm and reaction conditions, this platform enabled both the synthesis of a wide range of polyene frameworks that covered all of this natural-product chemical space and the first total syntheses of the polyene natural products asnipyrone B, physarigin A and neurosporaxanthin β-D-glucopyranoside. Collectively, these results suggest the potential for a more generalized approach to making small molecules in the laboratory.

  17. Synthesis of most polyene natural product motifs using just 12 building blocks and one coupling reaction

    NASA Astrophysics Data System (ADS)

    Woerly, Eric M.; Roy, Jahnabi; Burke, Martin D.

    2014-06-01

    The inherent modularity of polypeptides, oligonucleotides and oligosaccharides has been harnessed to achieve generalized synthesis platforms. Importantly, like these other targets, most small-molecule natural products are biosynthesized via iterative coupling of bifunctional building blocks. This suggests that many small molecules also possess inherent modularity commensurate with systematic building block-based construction. Supporting this hypothesis, here we report that the polyene motifs found in >75% of all known polyene natural products can be synthesized using just 12 building blocks and one coupling reaction. Using the same general retrosynthetic algorithm and reaction conditions, this platform enabled both the synthesis of a wide range of polyene frameworks that covered all of this natural-product chemical space and the first total syntheses of the polyene natural products asnipyrone B, physarigin A and neurosporaxanthin β-D-glucopyranoside. Collectively, these results suggest the potential for a more generalized approach to making small molecules in the laboratory.

  18. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
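
    For contrast with the singular-perturbation variant, a minimal sketch of the classical SUMT exterior-penalty loop follows; the two-variable test problem and the penalty schedule are illustrative, not the structural problems studied in the report.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f(x) = x1^2 + x2^2 subject to g(x) = 1 - x1 - x2 <= 0,
# via a sequence of unconstrained exterior-penalty problems.
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: 1.0 - x[0] - x[1]

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:      # increasing penalty weight
    phi = lambda x, r=r: f(x) + r * max(g(x), 0.0)**2
    x = minimize(phi, x, method="BFGS").x
    # As r grows, the penalty function's Hessian becomes increasingly
    # ill-conditioned -- the difficulty the singular-perturbation
    # reformulation is designed to avoid.
    print(f"r={r:7.1f}  x={np.round(x, 4)}  constraint={g(x):+.4f}")
# Analytic optimum is x = (0.5, 0.5).
```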

  19. GA(M)E-QSAR: a novel, fully automatic genetic-algorithm-(meta)-ensembles approach for binary classification in ligand-based drug design.

    PubMed

    Pérez-Castillo, Yunierkis; Lazar, Cosmin; Taminau, Jonatan; Froeyen, Mathy; Cabrera-Pérez, Miguel Ángel; Nowé, Ann

    2012-09-24

    Computer-aided drug design has become an important component of the drug discovery process. Despite the advances in this field, there is not a unique modeling approach that can be successfully applied to solve the whole range of problems faced during QSAR modeling. Feature selection and ensemble modeling are active areas of research in ligand-based drug design. Here we introduce the GA(M)E-QSAR algorithm that combines the search and optimization capabilities of Genetic Algorithms with the simplicity of the Adaboost ensemble-based classification algorithm to solve binary classification problems. We also explore the usefulness of Meta-Ensembles trained with Adaboost and Voting schemes to further improve the accuracy, generalization, and robustness of the optimal Adaboost Single Ensemble derived from the Genetic Algorithm optimization. We evaluated the performance of our algorithm using five data sets from the literature and found that it is capable of yielding classification results similar to or better than those reported for these data sets, with a higher enrichment of active compounds relative to the whole actives subset when only the most active chemicals are considered. More importantly, we compared our methodology with state of the art feature selection and classification approaches and found that it can provide highly accurate, robust, and generalizable models. In the case of the Adaboost Ensembles derived from the Genetic Algorithm search, the final models are quite simple since they consist of a weighted sum of the output of single feature classifiers. Furthermore, the Adaboost scores can be used as a ranking criterion to prioritize chemicals for synthesis and biological evaluation after virtual screening experiments.
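
    The final model form described above, a weighted sum of single-feature classifiers, can be imitated with off-the-shelf tools: scikit-learn's AdaBoost with its default depth-1 trees yields exactly such stump ensembles, and the crude random feature-mask search below is only a stand-in for the paper's Genetic Algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30,
                           n_informative=6, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    # AdaBoost over depth-1 trees (the sklearn default): each weak
    # learner splits on a single feature, so the ensemble is a
    # weighted sum of single-feature outputs, as in GA(M)E-QSAR.
    clf = AdaBoostClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

best_mask, best_fit = None, -np.inf
for _ in range(20):                 # crude stand-in for the GA search
    mask = rng.random(X.shape[1]) < 0.3
    if mask.sum() and (fit := fitness(mask)) > best_fit:
        best_mask, best_fit = mask, fit
print(f"{best_mask.sum()} features selected, CV accuracy {best_fit:.3f}")
```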

  20. Wideband dichroic-filter design for LED-phosphor beam-combining

    DOEpatents

    Falicoff, Waqidi

    2010-12-28

    A general method is disclosed of designing two-component dichroic short-pass filters operable for incidence-angle distributions over the 0-30° range, and specific preferred embodiments are listed. The method is based on computer optimization algorithms for an N-layer design, specifically the N-dimensional conjugate-gradient minimization of a merit function based on the difference from a target transmission spectrum, as well as subsequent cycles of needle synthesis for increasing N. A key feature of the method is the initial filter design, upon which the algorithm proceeds to iterate successive design candidates with smaller merit functions. This initial design, with high-index material H and low-index L, is (0.75 H, 0.5 L, 0.75 H)^m, denoting m (20-30) repetitions of a three-layer motif, giving rise to a filter with N = 2m + 1.
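
    A sketch of the optimization core, a merit function measuring the difference from a target transmission spectrum minimized by conjugate gradients over layer thicknesses, is given below; the refractive indices, the normal-incidence restriction, and the reading of (0.75 H, 0.5 L, 0.75 H)^m as fractions of quarter-wave thicknesses at 550 nm are assumptions for illustration, and the needle-synthesis cycles are omitted.

```python
import numpy as np
from scipy.optimize import minimize

n_H, n_L, n_inc, n_sub = 2.35, 1.46, 1.0, 1.52   # assumed indices
wavelengths = np.linspace(400, 700, 61)           # nm

def transmission(thicknesses, indices, lam):
    """Normal-incidence transmittance of a layer stack via the
    standard characteristic-matrix method."""
    M = np.eye(2, dtype=complex)
    for d, n in zip(thicknesses, indices):
        delta = 2 * np.pi * n * d / lam
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    t = 2 * n_inc / (n_inc * B + C)
    return (n_sub / n_inc) * abs(t) ** 2

def merit(thicknesses):
    # Target: pass short wavelengths (<550 nm), block long ones.
    target = np.where(wavelengths < 550, 1.0, 0.0)
    T = np.array([transmission(thicknesses, indices, lam)
                  for lam in wavelengths])
    return np.sum((T - target) ** 2)

m = 8                                   # kept small for a quick demo
indices = np.tile([n_H, n_L, n_H], m)   # (H, L, H)^m initial design
qwot = 550.0 / (4 * indices)            # quarter-wave physical thickness
d0 = qwot * np.tile([0.75, 0.5, 0.75], m)
res = minimize(merit, d0, method="CG", options={"maxiter": 30})
print(f"merit: {merit(d0):.2f} -> {res.fun:.2f} after CG refinement")
```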

  1. Pick a Color MARIA: Adaptive Sampling Enables the Rapid Identification of Complex Perovskite Nanocrystal Compositions with Defined Emission Characteristics.

    PubMed

    Bezinge, Leonard; Maceiczyk, Richard M; Lignos, Ioannis; Kovalenko, Maksym V; deMello, Andrew J

    2018-06-06

    Recent advances in the development of hybrid organic-inorganic lead halide perovskite (LHP) nanocrystals (NCs) have demonstrated their versatility and potential application in photovoltaics and as light sources through compositional tuning of optical properties. That said, due to their compositional complexity, the targeted synthesis of mixed-cation and/or mixed-halide LHP NCs still represents an immense challenge for traditional batch-scale chemistry. To address this limitation, we herein report the integration of a high-throughput segmented-flow microfluidic reactor and a self-optimizing algorithm for the synthesis of NCs with defined emission properties. The algorithm, named Multiparametric Automated Regression Kriging Interpolation and Adaptive Sampling (MARIA), iteratively computes optimal sampling points at each stage of an experimental sequence to reach a target emission peak wavelength based on spectroscopic measurements. We demonstrate the efficacy of the method through the synthesis of multinary LHP NCs, (Cs/FA)Pb(I/Br)3 (FA = formamidinium) and (Rb/Cs/FA)Pb(I/Br)3 NCs, using MARIA to rapidly identify reagent concentrations that yield user-defined photoluminescence peak wavelengths in the green-red spectral region. The procedure returns a robust model around a target output in far fewer measurements than systematic screening of parametric space and additionally enables the prediction of other spectral properties, such as full-width at half-maximum and intensity, for conditions yielding NCs with a similar emission peak wavelength.
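
    A one-dimensional caricature of the kriging-plus-adaptive-sampling loop is sketched below: a Gaussian process stands in for MARIA's multiparametric model, a synthetic peak-versus-composition curve stands in for the microfluidic reactor, and the acquisition rule (sample where the predicted peak is closest to the target) is a deliberate simplification.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hidden "reactor": emission peak (nm) vs. a halide ratio in [0, 1].
true_peak = lambda r: 510.0 + 170.0 * r**1.3     # assumed response
target = 620.0                                   # desired peak (nm)

X = np.array([[0.0], [0.5], [1.0]])              # initial experiments
y = true_peak(X.ravel())
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(8):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2).fit(X, y)
    mu = gp.predict(grid)
    # Adaptive sampling: run the next "experiment" where the kriging
    # model predicts a peak closest to the target wavelength.
    x_next = grid[np.argmin(np.abs(mu - target))]
    X = np.vstack([X, x_next])
    y = np.append(y, true_peak(x_next[0]))

best = np.argmin(np.abs(y - target))
print(f"best ratio {X[best][0]:.3f}, peak {y[best]:.1f} nm")
```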

  2. Hybridization of Strength Pareto Multiobjective Optimization with Modified Cuckoo Search Algorithm for Rectangular Array

    NASA Astrophysics Data System (ADS)

    Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A. Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah

    2017-04-01

    This research proposes various versions of the modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Precisely, the MCS algorithm is proposed by incorporating the Roulette wheel selection operator to choose the initial host nests (individuals) that give better results, adaptive inertia weight to control the positions exploration of the potential best host nests (solutions), and dynamic discovery rate to manage the fraction probability of finding the best host nests in 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard strength Pareto evolutionary algorithm (SPEA), forming MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined to perform MO optimization on Zitzler-Deb-Thiele’s (ZDT’s) test functions. Pareto optimum trade-offs are made to generate a set of three non-dominated solutions, which are the locations, excitation amplitudes, and excitation phases of the array elements, respectively. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms other compatible competitors, simultaneously gaining high antenna directivity, a small half-power beamwidth (HPBW), a low average side lobe level (SLL), and/or significant mitigation of predefined nulls.
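
    Of the operators listed, Roulette wheel selection is the simplest to make concrete; a generic fitness-proportionate selection sketch (maximization assumed) follows.

```python
import numpy as np

def roulette_wheel(fitness, n_select, rng):
    """Fitness-proportionate selection: an individual's chance of
    being chosen equals its share of the total fitness."""
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=n_select, replace=True, p=p)

rng = np.random.default_rng(0)
fitness = np.array([1.0, 2.0, 4.0, 8.0])   # illustrative nest fitnesses
picks = roulette_wheel(fitness, 10000, rng)
print(np.bincount(picks) / 10000)          # ~[0.067, 0.133, 0.267, 0.533]
```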

  3. Guided design of copper oxysulfide superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Chuck-Hou; Birol, Turan; Kotliar, Gabriel

    2015-07-01

    We describe a framework for designing novel materials, combining modern first-principles electronic-structure tools, materials databases, and evolutionary algorithms capable of exploring large configurational spaces. Guided by the chemical principles introduced by Antipov et al. for the design and synthesis of the Hg-based high-temperature superconductors, we apply our framework to screen 333 proposed compositions to design a new layered copper oxysulfide, Hg(CaS)2CuO2. We evaluate the prospects of superconductivity in this oxysulfide using theories based on charge-transfer energies, orbital distillation and uniaxial strain.

  4. Bibliography of Soviet Laser Developments, Number 86, November - December 1986.

    DTIC Science & Technology

    1987-12-01

    …Korenev, M.S. Synthesis of the sensitive element for a fiberoptic level transducer based on irregular light guide structures. TsNIITEIpriboro. Deposit no. 3412-prD86, 9 p. (PRSUB, no. 12, 1986, 43). 687. Korenev, M.S. Analysis of the characteristics of bispiral conical sensing elements… TsNIITEIpriboro. Deposit no. 3414-prD86, 8 p. (PRSUB, no. 12, 1986, 43). 688. Korenev, M.S. Discrete extrapolation algorithm to process measuring…

  5. EM Propagation & Atmospheric Effects Assessment

    DTIC Science & Technology

    2008-09-30

    The split-step Fourier parabolic equation (SSPE) algorithm provides the complex amplitude and phase (group delay) of the continuous wave (CW) signal…the APM is based on the SSPE, we are implementing the more efficient Fourier synthesis technique to determine the transfer function. To this end a…needed in order to sample H(f) via the SSPE, and indeed with the proper parameters chosen, the two pulses can be resolved in the time window shown in…

  6. On orbital allotments for geostationary satellites

    NASA Technical Reports Server (NTRS)

    Gonsalvez, David J. A.; Reilly, Charles H.; Mount-Campbell, Clark A.

    1986-01-01

    The following satellite synthesis problem is addressed: communication satellites are to be allotted positions on the geostationary arc so that interference does not exceed a given acceptable level, by enforcing conservative pairwise satellite separation. A desired location is specified for each satellite, and the objective is to minimize the sum of the deviations between the satellites' prescribed and desired locations. Two mixed integer programming models for the satellite synthesis problem are presented. Four solution strategies, branch-and-bound, Benders' decomposition, linear programming with restricted basis entry, and a switching heuristic, are used to find solutions to example synthesis problems. Computational results indicate the switching algorithm yields solutions of good quality in reasonable execution times when compared to the other solution methods. It is demonstrated that the switching algorithm can be applied to synthesis problems with the objective of minimizing the largest deviation between a prescribed location and the corresponding desired location. Furthermore, it is shown that the switching heuristic can use nonconservative, location-dependent satellite separations in order to satisfy interference criteria.
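
    If the arc ordering of the satellites is fixed in advance (here, by their desired locations), the minimum-total-deviation allotment with pairwise separation constraints reduces to a small linear program; the sketch below makes that simplifying assumption, whereas the paper's mixed integer models also decide the ordering.

```python
import numpy as np
from scipy.optimize import linprog

d = np.array([10.0, 11.0, 15.0, 22.0])   # desired longitudes (deg)
s = 3.0                                   # required pairwise separation
n = len(d)

# Variables: positions x_0..x_{n-1}, deviations u_0..u_{n-1}.
# Minimize sum(u) s.t. u_i >= |x_i - d_i| and x_{i+1} - x_i >= s.
c = np.concatenate([np.zeros(n), np.ones(n)])
A, b = [], []
for i in range(n):
    row = np.zeros(2 * n); row[i], row[n + i] = 1, -1   #  x_i - u_i <= d_i
    A.append(row); b.append(d[i])
    row = np.zeros(2 * n); row[i], row[n + i] = -1, -1  # -x_i - u_i <= -d_i
    A.append(row); b.append(-d[i])
for i in range(n - 1):
    row = np.zeros(2 * n); row[i], row[i + 1] = 1, -1   # x_i - x_{i+1} <= -s
    A.append(row); b.append(-s)

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(None, None)] * n + [(0, None)] * n)
print("positions:", np.round(res.x[:n], 2),
      "total deviation:", round(res.fun, 2))
```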

  7. Design of a tight frame of 2D shearlets based on a fast non-iterative analysis and synthesis algorithm

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Aelterman, Jan; Luong, Hiêp; Pižurica, Aleksandra; Philips, Wilfried

    2011-09-01

    The shearlet transform is a recent sibling in the family of geometric image representations that provides a traditional multiresolution analysis combined with a multidirectional analysis. In this paper, we present a fast DFT-based analysis and synthesis scheme for the 2D discrete shearlet transform. Our scheme conforms to the continuous shearlet theory to a high extent, provides perfect numerical reconstruction (up to floating point rounding errors) in a non-iterative scheme, and is highly suitable for parallel implementation (e.g. FPGA, GPU). We show that our discrete shearlet representation is also a tight frame and the redundancy factor of the transform is around 2.6, independent of the number of analysis directions. Experimental denoising results indicate that the transform performs the same or even better than several related multiresolution transforms, while having a significantly lower redundancy factor.

  8. Neural-genetic synthesis for state-space controllers based on linear quadratic regulator design for eigenstructure assignment.

    PubMed

    da Fonseca Neto, João Viana; Abreu, Ivanildo Silva; da Silva, Fábio Nogueira

    2010-04-01

    Toward the synthesis of state-space controllers, a neural-genetic model based on the linear quadratic regulator design for the eigenstructure assignment of multivariable dynamic systems is presented. The neural-genetic model represents a fusion of a genetic algorithm and a recurrent neural network (RNN) to perform the selection of the weighting matrices and the algebraic Riccati equation solution, respectively. A fourth-order electric circuit model is used to evaluate the convergence of the computational intelligence paradigms and the control design method performance. The genetic search convergence evaluation is performed in terms of the fitness function statistics and the RNN convergence, which is evaluated by landscapes of the energy and norm, as a function of the parameter deviations. The control problem solution is evaluated in the time and frequency domains by the impulse response, singular values, and modal analysis.
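
    The division of labor described above, an outer evolutionary search over the LQR weighting matrices and an inner Riccati solve, can be sketched compactly; here SciPy's algebraic Riccati solver stands in for the paper's recurrent neural network, a mutation-only random search stands in for the genetic algorithm, and the second-order plant and target eigenvalues are invented for the example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy plant
B = np.array([[0.0], [1.0]])
target = np.array([-3.0 + 2j, -3.0 - 2j])  # desired closed-loop eigenvalues

def closed_loop_eigs(q):
    """Inner step: solve the ARE for diagonal Q = diag(q), R = 1."""
    P = solve_continuous_are(A, B, np.diag(q), np.array([[1.0]]))
    K = B.T @ P                             # optimal gain (R = I)
    return np.linalg.eigvals(A - B @ K)

def fitness(q):
    eigs = np.sort_complex(closed_loop_eigs(q))
    return -np.sum(np.abs(eigs - np.sort_complex(target)))

rng = np.random.default_rng(0)
q = np.array([1.0, 1.0])
for _ in range(300):                        # mutation-only "evolution"
    cand = q * np.exp(rng.normal(0, 0.3, 2))  # multiplicative, keeps Q > 0
    if fitness(cand) > fitness(q):
        q = cand
print("Q diag:", np.round(q, 3), "eigs:", np.round(closed_loop_eigs(q), 3))
```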

  9. Modelling and Simulation on Multibody Dynamics for Vehicular Cold Launch Systems Based on Subsystem Synthesis Method

    NASA Astrophysics Data System (ADS)

    Panyun, YAN; Guozhu, LIANG; Yongzhi, LU; Zhihui, QI; Xingdou, GAO

    2017-12-01

    The fast simulation of the vehicular cold launch system (VCLS) in the launch process is an essential requirement for practical engineering applications. In particular, a general and fast simulation model of the VCLS will help the designer to obtain the optimum scheme in the initial design phase. For these purposes, a system-level fast simulation model was established for the VCLS based on the subsystem synthesis method. Moreover, a comparison of the load of a seven-axis VCLS on rigid ground through both theoretical calculations and experiments was carried out. It was found that the error in the load of the rear left outrigger is less than 7.1%, and the error in the total load of all the outriggers is less than 2.8%. Furthermore, the time taken to complete the simulation is only 9.5 min, which is 5% of the time taken by conventional algorithms.

  10. A Computationally Efficient Visual Saliency Algorithm Suitable for an Analog CMOS Implementation.

    PubMed

    D'Angelo, Robert; Wood, Richard; Lowry, Nathan; Freifeld, Geremy; Huang, Haiyao; Salthouse, Christopher D; Hollosi, Brent; Muresan, Matthew; Uy, Wes; Tran, Nhut; Chery, Armand; Poppe, Dorothy C; Sonkusale, Sameer

    2018-06-27

    Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms. This compatibility allows for direct integration with the analog-to-digital conversion circuitry present in CMOS image sensors. This integration leads to power savings in the converter by quantizing only the salient pixels. Further system-level power savings are gained by reducing the amount of data that must be transmitted and processed in the digital domain. The analog CMOS compatible formulation relies on a pulse width (i.e., time mode) encoding of the pixel data that is compatible with pulse-mode imagers and slope based converters often used in imager designs. This letter begins by discussing this time-mode encoding for implementing neuromorphic architectures. Next, the proposed algorithm is derived. Hardware-oriented optimizations and modifications to this algorithm are proposed and discussed. Next, a metric for quantifying saliency accuracy is proposed, and simulation results of this metric are presented. Finally, an analog synthesis approach for a time-mode architecture is outlined, and postsynthesis transistor-level simulations that demonstrate functionality of an implementation in a modern CMOS process are discussed.

  11. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate—SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
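
    The Jacobian information that SURE-type selection needs is usually obtained stochastically; below is a generic Monte Carlo SURE sketch for a black-box denoiser under i.i.d. Gaussian noise, with simple scaling as the "denoiser". It mirrors the paper's weighted Predicted-/Projected-SURE measures only in spirit.

```python
import numpy as np

def sure(y, denoise, sigma, eps=1e-4, rng=None):
    """Monte Carlo SURE: unbiased estimate of the MSE of denoise(y)
    under y = x + N(0, sigma^2 I), without access to the truth x."""
    rng = rng or np.random.default_rng()
    n = y.size
    fy = denoise(y)
    b = rng.standard_normal(n)
    # Stochastic estimate of the divergence (Jacobian trace):
    # div f(y) ~ b . (f(y + eps*b) - f(y)) / eps.
    div = b @ (denoise(y + eps * b) - fy) / eps
    return np.sum((fy - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

# Tune the weight of a Tikhonov denoiser f(y) = y / (1 + lam) by SURE.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 2000))
sigma = 0.5
y = x + sigma * rng.standard_normal(x.size)
for lam in [0.1, 0.5, 1.0, 2.0]:
    f = lambda v: v / (1 + lam)
    true_mse = np.mean((f(y) - x) ** 2)
    print(f"lam={lam:3.1f}  SURE={sure(y, f, sigma, rng=rng):.4f}  "
          f"true MSE={true_mse:.4f}")
```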

  12. A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lim, Chieng-Fai

    1991-01-01

    The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools such as the SYLON synthesis system (X90), (CM89), (LM90) have been developed based on this method. A parallel implementation is presented of SYLON-XTRANS (XM89) on an eight processor Encore Multimax shared memory multiprocessor. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break large circuits up and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.

  13. Development of a gene synthesis platform for the efficient large scale production of small genes encoding animal toxins.

    PubMed

    Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A

    2016-12-01

    Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing the classical cloning and mutagenesis procedures and allows generating nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiple genes simultaneously. This technology incorporates an accurate, automated and cost effective ligation independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology to generate large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large scale recombinant expression of synthetic genes encoding eukaryotic toxins will allow exploring the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but unexplored source of lead molecules for drug discovery.

  14. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.

  15. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
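
    The outer loop of the method, simulated annealing over parameter space, is sketched generically below; a toy quadratic objective stands in for the statistical-model-checking score (the estimated probability that the stochastic model satisfies the observed properties).

```python
import numpy as np

def simulated_annealing(score, x0, rng, steps=5000, t0=1.0):
    """Maximize `score` by annealing; in the paper's setting, `score`
    would internally run stochastic simulations plus sequential
    hypothesis tests rather than a closed-form function."""
    x, s = np.asarray(x0, float), score(x0)
    best_x, best_s = x.copy(), s
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9     # linear cooling
        cand = x + rng.normal(0, 0.1, x.size)    # local proposal
        s_cand = score(cand)
        # Accept uphill moves always, downhill moves with a
        # temperature-dependent probability.
        if s_cand > s or rng.random() < np.exp((s_cand - s) / temp):
            x, s = cand, s_cand
            if s > best_s:
                best_x, best_s = x.copy(), s
    return best_x, best_s

# Toy stand-in for "how well the parameters explain the observations".
true_theta = np.array([0.7, -1.2])
score = lambda th: -np.sum((np.asarray(th) - true_theta) ** 2)
x, _ = simulated_annealing(score, [0.0, 0.0], np.random.default_rng(0))
print("recovered parameters:", np.round(x, 3))
```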

  16. Detection and inpainting of facial wrinkles using texture orientation fields and Markov random field modeling.

    PubMed

    Batool, Nazre; Chellappa, Rama

    2014-09-01

    Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with a texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents the distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint the irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
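
    The detection front end (Gabor features plus a bimodal GMM) can be sketched with scikit-image and scikit-learn; the synthetic "skin" image, filter frequency, and orientations below are assumptions, and the MRF smoothing, EM refinement, and inpainting stages are omitted.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.02, (64, 64))           # smooth "skin"
img[30:34, 5:60] += 0.2 * np.sin(np.linspace(0, 20, 55))  # a "wrinkle"

# Gabor magnitude responses at several orientations as pixel features.
feats = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    real, imag = gabor(img, frequency=0.25, theta=theta)
    feats.append(np.hypot(real, imag).ravel())
feats = np.stack(feats, axis=1)

# Bimodal GMM: one mode for normal skin, one for wrinkles/imperfections.
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
labels = gmm.predict(feats).reshape(img.shape)
# Call the smaller cluster the "wrinkle" class.
wrinkle = labels == np.argmin(np.bincount(labels.ravel()))
print("wrinkle pixels:", int(wrinkle.sum()))
```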

  17. Multishaker modal testing

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.

    1985-01-01

    A component mode synthesis method for damped structures was developed and modal test methods were explored which could be employed to determine the relevant parameters required by the component mode synthesis method. Research was conducted on the following topics: (1) Development of a generalized time-domain component mode synthesis technique for damped systems; (2) Development of a frequency-domain component mode synthesis method for damped systems; and (3) Development of a system identification algorithm applicable to general damped systems. Abstracts are presented of the major publications which have been previously issued on these topics.

  18. Computer Aided Synthesis or Measurement Schemes for Telemetry applications

    DTIC Science & Technology

    1997-09-02

    5.2.5. Frame structure generation The algorithm generating the frame structure should take as inputs the sampling frequency requirements of the channels...these channels into the frame structure. Generally there can be a lot of ways to divide channels among groups. The algorithm implemented in...groups) first. The algorithm uses the function "try_permutation" recursively to distribute channels among the groups, and the function "try_subtable

  19. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design; a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2, and Cray-YMP. The objective of this research project is, therefore, to incorporate the latest developments in parallel-vector equation solvers, such as PVSOLVE, into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.

  20. Optimizing doped libraries by using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Tomandl, Dirk; Schober, Andreas; Schwienhorst, Andreas

    1997-01-01

    The insertion of random sequences into protein-encoding genes in combination with biological selection techniques has become a valuable tool in the design of molecules that have useful and possibly novel properties. By employing highly effective screening protocols, a functional and unique structure that had not been anticipated can be distinguished among a huge collection of inactive molecules that together represent all possible amino acid combinations. This technique is severely limited by its restriction to a library of manageable size. One approach for limiting the size of a mutant library relies on `doping schemes', where subsets of amino acids are generated that reveal only certain combinations of amino acids in a protein sequence. Three mononucleotide mixtures for each codon concerned must be designed, such that the resulting codons that are assembled during chemical gene synthesis represent the desired amino acid mixture on the level of the translated protein. In this paper we present a doping algorithm that `reverse translates' a desired mixture of certain amino acids into three mixtures of mononucleotides. The algorithm is designed to optimally bias these mixtures towards the codons of choice. This approach combines a genetic algorithm with local optimization strategies based on the downhill simplex method. Disparate relative representations of all amino acids (and stop codons) within a target set can be generated. Optional weighting factors are employed to emphasize the frequencies of certain amino acids and their codon usage, and to compensate for the reaction rates of different mononucleotide building blocks (synthons) during chemical DNA synthesis. The effect of statistical errors that accompany an experimental realization of calculated nucleotide mixtures on the generated mixtures of amino acids is simulated. These simulations show that the robustness of different optima with respect to small deviations from calculated values depends on their concomitant fitness. Furthermore, the calculations probe the fitness landscape locally and allow a preliminary assessment of its structure.
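
    A minimal sketch of the hybrid search strategy (genetic algorithm plus downhill simplex refinement) follows. The fitness function and the 12-component mixture vector are stand-ins invented for illustration; the actual algorithm scores nucleotide mixtures through the genetic code and codon-usage weights.

      # GA + downhill simplex hybrid on a toy mixture-matching fitness.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      target = rng.dirichlet(np.ones(12))            # hypothetical target profile

      def fitness(x):
          p = np.abs(x) / np.abs(x).sum()            # normalize to a mixture
          return np.sum((p - target) ** 2)

      pop = rng.random((40, 12))
      for gen in range(100):
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[:20]]     # truncation selection
          kids = parents[rng.integers(0, 20, 20)] + 0.05 * rng.standard_normal((20, 12))
          pop = np.vstack([parents, kids])           # next generation

      best = pop[np.argmin([fitness(ind) for ind in pop])]
      best = minimize(fitness, best, method="Nelder-Mead").x   # simplex refinement
      print("residual after local refinement:", fitness(best))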

  1. Automatic Debugging Support for UML Designs

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Swanson, Keith (Technical Monitor)

    2001-01-01

    Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After statecharts are synthesized from sequence diagrams, they are usually subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. The conflicts detected by our algorithm, fed back to the user, are the basis for deduction-based debugging of requirements and domain theory in very early development stages. Our approach makes it possible to generate explanations of why there is a conflict and which parts of the specifications are affected.

  2. SVM classification of microaneurysms with imbalanced dataset based on borderline-SMOTE and data cleaning techniques

    NASA Astrophysics Data System (ADS)

    Wang, Qingjie; Xin, Jingmin; Wu, Jiayi; Zheng, Nanning

    2017-03-01

    Microaneurysms are the earliest clinical signs of diabetic retinopathy, and many algorithms have been developed for the automatic classification of this specific pathology. However, the imbalanced class distribution of the dataset usually causes the classification accuracy for true microaneurysms to be low. Therefore, by combining the borderline synthetic minority over-sampling technique (BSMOTE) with data cleaning techniques such as Tomek links and Wilson's edited nearest neighbor rule (ENN) to resample the imbalanced dataset, we propose two new support vector machine (SVM) classification algorithms for microaneurysms. The proposed BSMOTE-Tomek and BSMOTE-ENN algorithms consist of: 1) adaptive synthesis of minority samples in the neighborhood of the borderline, and 2) removal of redundant training samples to improve the efficiency of data utilization. Moreover, a modified SVM classifier with probabilistic outputs is used to divide the microaneurysm candidates into two groups: true microaneurysms and false microaneurysms. Experiments with a public microaneurysms database show that the proposed algorithms have better classification performance, including the receiver operating characteristic (ROC) curve and the free-response receiver operating characteristic (FROC) curve.
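
    A rough sketch of the resampling-plus-SVM pipeline, assuming the imbalanced-learn and scikit-learn libraries and synthetic data in place of real microaneurysm features:

      # BSMOTE oversampling followed by Tomek-link cleaning, then an SVM
      # with probabilistic outputs, on a deliberately imbalanced toy set.
      from sklearn.datasets import make_classification
      from sklearn.svm import SVC
      from imblearn.over_sampling import BorderlineSMOTE
      from imblearn.under_sampling import TomekLinks, EditedNearestNeighbours

      X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

      X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)  # oversample minority
      X_res, y_res = TomekLinks().fit_resample(X_res, y_res)             # or EditedNearestNeighbours()

      clf = SVC(probability=True).fit(X_res, y_res)   # probabilistic outputs, as in the paper
      print("P(true microaneurysm) for first sample:", clf.predict_proba(X[:1])[0, 1])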

  3. Evolutionary combinatorial chemistry, a novel tool for SAR studies on peptide transport across the blood-brain barrier. Part 2. Design, synthesis and evaluation of a first generation of peptides.

    PubMed

    Teixidó, Meritxell; Belda, Ignasi; Zurita, Esther; Llorà, Xavier; Fabre, Myriam; Vilaró, Senén; Albericio, Fernando; Giralt, Ernest

    2005-12-01

    The use of high-throughput methods in drug discovery allows the generation and testing of a large number of compounds, but at the price of providing redundant information. Evolutionary combinatorial chemistry combines the selection and synthesis of biologically active compounds with artificial intelligence optimization methods, such as genetic algorithms (GA). Drug candidates for the treatment of central nervous system (CNS) disorders must overcome the blood-brain barrier (BBB). This paper reports a new genetic algorithm that searches for the optimal physicochemical properties for peptide transport across the blood-brain barrier. A first generation of peptides has been generated and synthesized. Because of the high content of N-methyl amino acids in most of these peptides, their syntheses were especially challenging, with over-incorporations, deletions, and DKP formation. Distinct fragmentation patterns during peptide cleavage have been identified. The first generation of peptides has been studied by evaluation techniques such as immobilized artificial membrane chromatography (IAMC), a cell-based assay, log Poctanol/water calculations, etc. Finally, a second generation has been proposed. (c) 2005 European Peptide Society and John Wiley & Sons, Ltd.

  4. Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.

    2017-01-01

    This paper presents a methodology for automated model order reduction (MOR) of flexible aircraft to construct linear parameter-varying (LPV) reduced-order models (ROMs) for aeroservoelasticity (ASE) analysis and control synthesis in a broad flight parameter space. The novelty includes the utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and the heuristics required to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the X-56A ROM, with less than one-seventh the number of states of the original model, is able to accurately predict the system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of grid points in the regions of parameter space where the flight dynamics vary dramatically, enhancing interpolation accuracy without overburdening downstream controller synthesis or onboard memory. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
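
    The balanced-truncation step can be sketched for the stable case with the textbook square-root algorithm, as below. This is a generic illustration on a random stable model, not the paper's unstable-system variant or its GA-driven state selection.

      # Square-root balanced truncation of a random stable LTI model.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

      rng = np.random.default_rng(1)
      n, r = 20, 6
      A = rng.standard_normal((n, n))
      A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift to make A stable
      B, C = rng.standard_normal((n, 2)), rng.standard_normal((2, n))

      Wc = solve_continuous_lyapunov(A, -B @ B.T)        # controllability gramian
      Wo = solve_continuous_lyapunov(A.T, -C.T @ C)      # observability gramian
      L = cholesky(Wc, lower=True)
      U, s, Vt = svd(L.T @ Wo @ L)                       # s = squared Hankel singular values
      T = (L @ U) / s**0.25                              # balancing transformation
      Ti = np.linalg.inv(T)

      Ar, Br, Cr = (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r]  # keep r states
      print("reduced A shape:", Ar.shape, "; Hankel SVs:", np.round(np.sqrt(s[:8]), 3))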

  5. Toward Generalization of Iterative Small Molecule Synthesis

    PubMed Central

    Lehmann, Jonathan W.; Blair, Daniel J.; Burke, Martin D.

    2018-01-01

    Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the “building block approach”, i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach. PMID:29696152

  6. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    PubMed

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  7. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous techniques, this method obtains an incomplete 3D structure from neighboring video sources instead of recovering complete 3D information from all video sources, so the computation is reduced greatly; we therefore demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build correspondences between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method can be used when the angle between neighboring cameras is 25-30 degrees, which is much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasting, video conferencing, etc.

  8. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.

  9. Robust Hinfinity position control synthesis of an electro-hydraulic servo system.

    PubMed

    Milić, Vladimir; Situm, Zeljko; Essert, Mario

    2010-10-01

    This paper focuses on the use of the techniques based on linear matrix inequalities for robust H(infinity) position control synthesis of an electro-hydraulic servo system. A nonlinear dynamic model of the hydraulic cylindrical actuator with a proportional valve has been developed. For the purpose of the feedback control an uncertain linearized mathematical model of the system has been derived. The structured (parametric) perturbations in the electro-hydraulic coefficients are taken into account. H(infinity) controller extended with an integral action is proposed. To estimate internal states of the electro-hydraulic servo system an observer is designed. Developed control algorithms have been tested experimentally in the laboratory model of an electro-hydraulic servo system. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Simulations of the synthesis of boron-nitride nanostructures in a hot, high pressure gas volume

    PubMed Central

    Han, Longtao; Irle, Stephan; Nakai, Hiromi

    2018-01-01

    We performed nanosecond timescale computer simulations of clusterization and agglomeration processes of boron nitride (BN) nanostructures in hot, high pressure gas, starting from eleven different atomic and molecular precursor systems containing boron, nitrogen and hydrogen at various temperatures from 1500 to 6000 K. The synthesized BN nanostructures self-assemble in the form of cages, flakes, and tubes as well as amorphous structures. The simulations facilitate the analysis of chemical dynamics and we are able to predict the optimal conditions concerning temperature and chemical precursor composition for controlling the synthesis process in a high temperature gas volume, at high pressure. We identify the optimal precursor/temperature choices that lead to the nanostructures of highest quality with the highest rate of synthesis, using a novel parameter of the quality of the synthesis (PQS). Two distinct mechanisms of BN nanotube growth were found, neither of them based on the root-growth process. The simulations were performed using quantum-classical molecular dynamics (QCMD) based on the density-functional tight-binding (DFTB) quantum mechanics in conjunction with a divide-and-conquer (DC) linear scaling algorithm, as implemented in the DC-DFTB-K code, enabling the study of systems as large as 1300 atoms in canonical NVT ensembles for 1 ns time. PMID:29780513

  11. Reinforcement learning techniques for controlling resources in power networks

    NASA Astrophysics Data System (ADS)

    Kowli, Anupama Sunil

    As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.
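
    A minimal illustration of the model-free flavor of these methods is tabular Q-learning on a toy storage-dispatch problem. Everything in the sketch (state discretization, price model, reward) is an assumption made for illustration, not the dissertation's formulation.

      # Tabular Q-learning for a toy storage resource facing random prices.
      import numpy as np

      rng = np.random.default_rng(0)
      n_soc, actions = 11, (-1, 0, 1)          # discretized state of charge; discharge/idle/charge
      Q = np.zeros((n_soc, len(actions)))
      alpha, gamma, eps = 0.1, 0.95, 0.1

      soc = 5
      for step in range(50_000):
          price = rng.uniform(10, 50)          # price signal with unknown distribution
          a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[soc]))
          nxt = int(np.clip(soc + actions[a], 0, n_soc - 1))
          reward = -price * actions[a]         # pay to charge, earn to discharge
          Q[soc, a] += alpha * (reward + gamma * Q[nxt].max() - Q[soc, a])
          soc = nxt

      print("greedy action at half charge:", actions[int(np.argmax(Q[5]))])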

  12. A Model-Based Diagnosis Framework for Distributed Systems

    DTIC Science & Technology

    2002-05-04

    ... of centralized compilation techniques as applied to several areas, of which diagnosis is one ... our diagnosis synthesis algorithm ... diagnoses using a likelihood weight ri assigned to each assumable Ai, i = 1, ..., m, using the likelihood algebra ...

  13. Structured codebook design in CELP

    NASA Technical Reports Server (NTRS)

    Leblanc, W. P.; Mahmoud, S. A.

    1990-01-01

    Code Excited Linear Prediction (CELP) is a popular analysis-by-synthesis technique for quantizing speech at bit rates from 4 to 6 kbps. Codebook design techniques to date have been largely based either on random (often Gaussian) codebooks or on known binary or ternary codes which efficiently map the space of (assumed white) excitation codevectors. It has been shown that by introducing symmetries into the codebook, good complexity reduction can be realized with only a marginal decrease in performance. Codebook design algorithms are considered for a wide range of structured codebooks.

  14. A grouping method based on grid density and relationship for crowd evacuation simulation

    NASA Astrophysics Data System (ADS)

    Li, Yan; Liu, Hong; Liu, Guang-peng; Li, Liang; Moore, Philip; Hu, Bin

    2017-05-01

    Psychological factors affect the movement of people in the competitive or panic mode of evacuation, in which the density of pedestrians is relatively large and the distance among them is small. In this paper, a crowd is divided into groups according to their social relations, to simulate the actual movement of crowd evacuation more realistically, and a group attraction force is added to the social force model. The force of group attraction is the synthesis of two forces: one is the attraction among individuals generated by their social relations, which makes them gather, and the other is the attraction of the group leader on the individuals within the group, which ensures that the individuals follow the leader. The synthesized force determines the trajectory of individuals. The evacuation process is demonstrated using the improved social force model, in which individuals with close social relations gradually exhibit closer and more coordinated motion while following the leader. In this paper, a grouping algorithm based on grid density and relationship is proposed, and computer simulations illustrate the features of the improved social force model. The definitions of the parameters involved in the algorithm are given, the effect of the relationship value on grouping is tested, and reasonable numbers of grids and weights are selected. The effectiveness of the algorithm is shown through simulation experiments, and a simulation platform is established using the proposed grouping algorithm and the improved social force model for crowd evacuation simulation.

  15. Plan of Action for Inherited Cardiovascular Diseases: Synthesis of Recommendations and Action Algorithms.

    PubMed

    Barriales-Villa, Roberto; Gimeno-Blanes, Juan Ramón; Zorio-Grima, Esther; Ripoll-Vera, Tomás; Evangelista-Masip, Artur; Moya-Mitjans, Angel; Serratosa-Fernández, Luis; Albert-Brotons, Dimpna C; García-Pinilla, José Manuel; García-Pavía, Pablo

    2016-03-01

    The term inherited cardiovascular disease encompasses a group of cardiovascular diseases (cardiomyopathies, channelopathies, certain aortic diseases, and other syndromes) with a number of common characteristics: they have a genetic basis, a familial presentation, a heterogeneous clinical course, and, finally, can all be associated with sudden cardiac death. The present document summarizes some important concepts related to recent advances in sequencing techniques and understanding of the genetic bases of these diseases. We propose diagnostic algorithms and clinical practice recommendations and discuss controversial aspects of current clinical interest. We highlight the role of multidisciplinary referral units in the diagnosis and treatment of these conditions. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  16. Fractal based curves in musical creativity: A critical annotation

    NASA Astrophysics Data System (ADS)

    Georgaki, Anastasia; Tsolakis, Christos

    In this article we examine fractal curves and synthesis algorithms in musical composition and research. First we trace the evolution of different approaches to the use of fractals in music since the 1980s through a literature review. Furthermore, we review representative fractal algorithms and the platforms that implement them. Properties such as self-similarity (pink noise), correlation, memory (related to the notion of Brownian motion), or non-correlation at multiple levels (white noise) can be used to develop a hierarchy of criteria for analyzing different layers of musical structure. L-systems can be applied to the modelling of melody in different musical cultures as well as to the investigation of musical perception principles. Finally, we propose a critical investigation approach for the use of artificial or natural fractal curves in systematic musicology.
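
    For instance, the 1/f ("pink noise") property mentioned above can be realized by spectral shaping and mapped onto pitches. The sketch below is one common construction; the pitch set and mapping are chosen arbitrarily for illustration.

      # Spectrally shaped 1/f noise mapped onto a small pitch set.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 256
      spectrum = rng.standard_normal(n // 2 + 1) + 1j * rng.standard_normal(n // 2 + 1)
      freqs = np.arange(n // 2 + 1)
      freqs[0] = 1                             # avoid division by zero at DC
      spectrum = spectrum / np.sqrt(freqs)     # amplitude ~ 1/sqrt(f) gives 1/f power
      series = np.fft.irfft(spectrum, n=n)

      scale = np.array([60, 62, 64, 65, 67, 69, 71, 72])   # C major MIDI pitches
      idx = np.interp(series, (series.min(), series.max()), (0, len(scale) - 1))
      melody = scale[idx.round().astype(int)]
      print(melody[:16])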

  17. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  18. Human versus Robots in the Discovery and Crystallization of Gigantic Polyoxometalates.

    PubMed

    Duros, Vasilios; Grizou, Jonathan; Xuan, Weimin; Hosni, Zied; Long, De-Liang; Miras, Haralampos N; Cronin, Leroy

    2017-08-28

    The discovery of new gigantic molecules formed by self-assembly and crystal growth is challenging as it combines two contingent events: first, the formation of a new molecule, and second, its crystallization. Herein, we construct a workflow that can be followed manually or by a robot to probe the envelope of both events and employ it for a new polyoxometalate cluster, Na6[Mo120Ce6O366H12(H2O)78]·200H2O (1), which has a trigonal-ring type architecture (yield 4.3% based on Mo). Its synthesis and crystallization were probed using an active machine-learning algorithm developed by us to explore the crystallization space, and the algorithm's results were compared with those obtained by human experimenters. The algorithm-based search is able to cover ca. 9 times more crystallization space than a random search and ca. 6 times more than humans, and increases the crystallization prediction accuracy to 82.4±0.7% over the 77.1±0.9% achieved by human experimenters. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  19. Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081

  20. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  1. High-frequency analysis of Earth gravity field models based on terrestrial gravity and GPS/levelling data: a case study in Greece

    NASA Astrophysics Data System (ADS)

    Papanikolaou, T. D.; Papadopoulos, N.

    2015-06-01

    The present study aims at the validation of global gravity field models through numerical investigation of gravity field functionals based on spherical harmonic synthesis of the geopotential models and the analysis of terrestrial data. We examine gravity models produced according to the latest approaches for gravity field recovery based on the principles of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) and Gravity Recovery And Climate Experiment (GRACE) satellite missions. Furthermore, we evaluate the overall spectrum of the ultra-high degree combined gravity models EGM2008 and EIGEN-6C3stat. The terrestrial data consist of gravity and collocated GPS/levelling data in the wider Hellenic region. The software presented here implements the algorithm of spherical harmonic synthesis in a degree-wise cumulative sense. This approach can quantify the band-limited performance of the individual models by monitoring the degree-wise computed functionals against the terrestrial data. The degree-wise analysis yields insight into the short wavelengths of the Earth's gravity field as these are expressed by the high-degree harmonics.
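
    A degree-wise cumulative synthesis can be sketched as a running sum of spherical harmonic contributions, as below. The random, decaying coefficients stand in for an actual geopotential model; the real software evaluates geoid or gravity functionals with the proper normalizations and coefficient sets.

      # Degree-wise cumulative spherical harmonic synthesis at one point.
      import numpy as np
      from scipy.special import sph_harm   # (m, n, azimuth, colatitude) convention

      rng = np.random.default_rng(0)
      nmax = 30
      theta, phi = np.deg2rad(23.0), np.deg2rad(63.0)   # longitude, colatitude of a point

      value = 0.0
      cumulative = []
      for n in range(nmax + 1):
          for m in range(-n, n + 1):
              coeff = rng.standard_normal() / (n + 1) ** 2   # made-up decaying spectrum
              value += coeff * np.real(sph_harm(m, n, theta, phi))
          cumulative.append(value)                           # functional up to degree n

      print("degree-wise cumulative values (last 3):", np.round(cumulative[-3:], 4))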

  2. The Separatrix Algorithm for Synthesis and Analysis of Stochastic Simulations with Applications in Disease Modeling

    PubMed Central

    Klein, Daniel J.; Baym, Michael; Eckhoff, Philip

    2014-01-01

    Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision-making process in the context of a goal-oriented objective (e.g., eradicating polio by a given date), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g., 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria. PMID:25078087
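
    The core estimation idea, regression on Bernoulli outcomes, can be illustrated with plain Nadaraya-Watson kernel smoothing on a one-dimensional toy model; the algorithm's density correction and adaptive experiment design are not reproduced in this sketch.

      # Kernel regression on binary outcomes to estimate a success surface.
      import numpy as np

      rng = np.random.default_rng(0)
      params = rng.uniform(0, 1, 400)                               # sampled parameter values
      success = (rng.uniform(0, 1, 400) < params**2).astype(float)  # toy stochastic model

      def p_success(x, h=0.05):
          w = np.exp(-0.5 * ((params - x) / h) ** 2)   # Gaussian kernel weights
          return np.sum(w * success) / np.sum(w)

      grid = np.linspace(0, 1, 101)
      probs = np.array([p_success(x) for x in grid])
      print("estimated 95%-success separatrix near:", grid[np.argmin(np.abs(probs - 0.95))])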

  3. Quantum-behaved particle swarm optimization for the synthesis of fibre Bragg gratings filter

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Sun, Yunxu; Yao, Yong; Tian, Jiajun; Cong, Shan

    2011-12-01

    A method based on the quantum-behaved particle swarm optimization (QPSO) algorithm is presented to design a bandpass filter from fibre Bragg gratings. In contrast to other optimization algorithms such as the genetic algorithm and the particle swarm optimization algorithm, this method is simpler and easier to implement. To demonstrate the effectiveness of the QPSO algorithm, we consider a bandpass filter with a half-bandwidth of 0.05 nm and a Bragg wavelength of 1550 nm; the 2 cm grating length is divided into 40 uniform sections, whose index modulation is the quantity to be optimized, and the whole feasible solution space is searched for the index modulation. After the index modulation profile is known for all the sections, the transfer matrix method is used to verify the final optimal index modulation by calculating the reflection spectrum. The results show that the group delay is less than 12 ps in band and the calculated dispersion is relatively flat inside the passband. It is further found that the reflection spectrum has sidelobes around -30 dB and the worst in-band dispersion value is less than 200 ps/nm. In addition, for this design, it takes approximately several minutes to find acceptable index modulation values with a notebook computer.
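
    The QPSO update rule itself is compact. The sketch below applies the standard contraction-expansion formulation to a placeholder objective; in the application above, the objective would instead evaluate the grating's reflection spectrum via the transfer matrix method.

      # Quantum-behaved PSO (Sun et al. formulation) on a toy objective.
      import numpy as np

      rng = np.random.default_rng(0)
      f = lambda x: np.sum(x**2)                   # placeholder objective
      dim, n_part, iters = 40, 30, 200

      X = rng.uniform(-1, 1, (n_part, dim))
      pbest = X.copy()
      pbest_f = np.array([f(x) for x in X])

      for t in range(iters):
          gbest = pbest[np.argmin(pbest_f)]
          mbest = pbest.mean(axis=0)               # mean of personal bests
          beta = 1.0 - 0.5 * t / iters             # contraction-expansion coefficient
          phi = rng.uniform(size=(n_part, dim))
          p = phi * pbest + (1 - phi) * gbest      # local attractor per particle
          u = rng.uniform(1e-12, 1, (n_part, dim))
          sign = np.where(rng.random((n_part, dim)) < 0.5, 1, -1)
          X = p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
          fx = np.array([f(x) for x in X])
          better = fx < pbest_f
          pbest[better], pbest_f[better] = X[better], fx[better]

      print("best objective:", pbest_f.min())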

  4. Guidelines and algorithms for managing the difficult airway.

    PubMed

    Gómez-Ríos, M A; Gaitini, L; Matter, I; Somri, M

    2018-01-01

    The difficult airway constitutes a continuous challenge for anesthesiologists. Guidelines and algorithms are key to preserving patient safety, by recommending specific plans and strategies that address predicted or unexpected difficult airway. However, there are currently no "gold standard" algorithms or universally accepted standards. The aim of this article is to present a synthesis of the recommendations of the main guidelines and difficult airway algorithms. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.

  5. Engineering calculations for solving the orbital allotment problem

    NASA Technical Reports Server (NTRS)

    Reilly, C.; Walton, E. K.; Mount-Campbell, C.; Caldecott, R.; Aebker, E.; Mata, F.

    1988-01-01

    Four approaches for calculating downlink interferences for shaped-beam antennas are described. An investigation of alternative mixed-integer programming models for satellite synthesis is summarized. Plans for coordinating the various programs developed under this grant are outlined. Two procedures for ordering satellites to initialize the k-permutation algorithm are proposed. Results are presented for the k-permutation algorithms. Feasible solutions are found for 5 of the 6 problems considered. Finally, it is demonstrated that the k-permutation algorithm can be used to solve arc allotment problems.

  6. Music 4C, a multi-voiced synthesis program with instruments defined in C

    NASA Astrophysics Data System (ADS)

    Beauchamp, James W.

    2003-04-01

    Music 4C is a program which runs under Unix (including Linux) and provides a means for the synthesis of arbitrary signals as defined by the C code. The program is actually a loose translation of an earlier program, Music 4BF [H. S. Howe, Jr., Electronic Music Synthesis (Norton, 1975)]. A set of instrument definitions are driven by a numerical score which consists of a series of ``events.'' Each event gives an instrument name, start time and duration, and a number of parameters (e.g., pitch) which describe the event. Each instrument definition consists of event parameters, performance variables, initializations, and a synthesis algorithmic code. Thus, the synthetic signal, no matter how complex, is precisely defined. Moreover, the resulting sounds can be overlaid in any arbitrary pattern. The program serves as a mixer of algorithmically produced sounds or recorded sounds taken from sample files or synthesized from spectrum files. A score file can be entered by hand, generated from a program, translated from a MIDI file, or generated from an alpha-numeric score using an auxiliary program, Notepro. Output sample files are in wav, snd, or aiff format. The program is provided in the C source code for download.

  7. A Design Study of Onboard Navigation and Guidance During Aerocapture at Mars. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Fuhry, Douglas Paul

    1988-01-01

    The navigation and guidance of a high lift-to-drag ratio sample return vehicle during aerocapture at Mars are investigated. Emphasis is placed on integrated systems design, with guidance algorithm synthesis and analysis based on vehicle state and atmospheric density uncertainty estimates provided by the navigation system. The latter utilizes a Kalman filter for state vector estimation, with useful update information obtained through radar altimeter measurements and density altitude measurements based on IMU-measured drag acceleration. A three-phase guidance algorithm, featuring constant bank numeric predictor/corrector atmospheric capture and exit phases and an extended constant altitude cruise phase, is developed to provide controlled capture and depletion of orbital energy, orbital plane control, and exit apoapsis control. Integrated navigation and guidance systems performance are analyzed using a four degree-of-freedom computer simulation. The simulation environment includes an atmospheric density model with spatially correlated perturbations to provide realistic variations over the vehicle trajectory. Navigation filter initial conditions for the analysis are based on planetary approach optical navigation results. Results from a selection of test cases are presented to give insight into systems performance.

  8. Robust 3D face landmark localization based on local coordinate coding.

    PubMed

    Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J

    2014-12-01

    In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of the variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under the variations in the pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state of the art methods in terms of its robustness, flexibility, and accuracy.
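
    The alignment step mentioned above can be illustrated with a bare-bones iterative-closest-point loop (nearest-neighbor matching plus an SVD-based rigid update). The point clouds here are synthetic, and the nose detection and LCC stages are omitted.

      # Minimal ICP: nearest neighbors + Kabsch rotation, iterated.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      ref = rng.standard_normal((500, 3))                 # reference face point cloud
      angle = np.deg2rad(10)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0, 0.0, 1.0]])
      src = ref @ R_true.T + np.array([0.2, -0.1, 0.05])  # misaligned input cloud

      tree = cKDTree(ref)
      for _ in range(20):
          _, idx = tree.query(src)                        # closest reference points
          matched = ref[idx]
          mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
          U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
          R = Vt.T @ U.T                                  # Kabsch rotation estimate
          if np.linalg.det(R) < 0:                        # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          src = (src - mu_s) @ R.T + mu_m                 # apply rigid update

      _, idx = tree.query(src)
      print("residual RMS:", np.sqrt(np.mean(np.sum((src - ref[idx]) ** 2, axis=1))))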

  9. Computer Assisted Design, Prediction, and Execution of Economical Organic Syntheses

    NASA Astrophysics Data System (ADS)

    Gothard, Nosheen Akber

    The synthesis of useful organic molecules via simple and cost-effective routes is a core challenge in organic chemistry. In industry and academia, organic chemists use their chemical intuition, technical expertise, and published procedures to determine an optimal pathway. This approach not only takes time and effort but is also cost prohibitive; many potentially optimal routes sketched on paper never get experimentally tested, and new methods discovered daily are often overlooked in favor of established techniques. This thesis reports a computational technique that assists the discovery of economical synthetic routes to useful organic targets. Organic chemistry exists as a network in which chemicals are connected by reactions, analogous to cities connected by roads on a geographic map. This network topology of organic reactions, the network of organic chemistry (NOC), allows the application of graph theory to devise algorithms for the synthetic optimization of organic targets. A computational approach comprising customizable algorithms, pre-screening filters, and existing chemoinformatic techniques is capable of answering complex questions and performing mechanistic tasks desired by chemists, such as the optimization of organic syntheses. One-pot reactions are central to modern synthesis, since they save resources and time by avoiding isolation, purification, characterization, and production of chemical waste after each synthetic step. Sometimes such reactions are identified by chance or, more often, by careful inspection of the individual steps that are to be wired together. Here, algorithms are used to discover one-pot reactions, which are then validated experimentally, demonstrating that the computationally predicted sequences can indeed be carried out experimentally in good overall yields. The experimental examples are chosen from small networks of reactions around useful chemicals such as quinoline scaffolds, quinoline-based inhibitors of the phosphoinositide 3-kinase delta (PI3Kdelta) enzyme, and thiophene derivatives. In this way, we replace individual synthetic connections with two-, three-, or even four-step one-pot sequences. Lastly, the computational method is used to devise hypothetical synthetic routes to popular pharmaceutical compounds such as Naproxen and Taxol. The algorithmically generated optimal pathways are evaluated with chemical logic. Applying a labor/cost factor revealed that not all shorter synthesis routes are economical; sometimes "the longest way round is the shortest way home", and lengthier routes are found to be more economical and environmentally friendly.

  10. AutoBayes Program Synthesis System Users Manual

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd

    2008-01-01

    Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.

  11. Performance comparison of attitude determination, attitude estimation, and nonlinear observers algorithms

    NASA Astrophysics Data System (ADS)

    MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.

    2017-01-01

    This paper presents a brief synthesis and a practical performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite, in terms of accuracy, convergence time, amount of memory, and computation time. The latter is calculated in two ways: using a personal computer and using the On-Board Computer 750 (OBC 750), which is used in many SSTL Earth observation missions. This comparative study can serve as a design aid when choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the most logical choice.

  12. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1987-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  13. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary schemes. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  14. EAGLE Monitors by Collecting Facts and Generating Obligations

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing a range of finite-trace monitoring logics, including future- and past-time temporal logic, extended regular expressions, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. A monitor for an EAGLE formula checks whether a finite trace of states satisfies the given formula. We present, in detail, an algorithm for the synthesis of monitors for EAGLE. The algorithm is implemented as a Java application and involves novel techniques for rule definition, manipulation, and execution. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace of states. Our initial experiments have been successful: EAGLE detected a previously unknown bug while testing a planetary rover controller.
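
    The flavor of state-by-state monitoring can be conveyed with a tiny past-time check; the property and trace encoding below are invented for illustration and are far simpler than EAGLE's rule language.

      # State-by-state monitor for "always (req implies once granted)":
      # only a constant summary of the trace is kept, never the trace itself.
      def monitor(trace):
          granted_seen = False
          for i, state in enumerate(trace):        # one pass, state by state
              granted_seen = granted_seen or state.get("granted", False)
              if state.get("req", False) and not granted_seen:
                  return f"violation at state {i}"
          return "trace accepted"

      print(monitor([{"granted": True}, {"req": True}]))   # trace accepted
      print(monitor([{"req": True}, {"granted": True}]))   # violation at state 0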

  15. Systems Engineering Design Via Experimental Operation Research: Complex Organizational Metric for Programmatic Risk Environments (COMPRE)

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1999-01-01

    Unique and innovative graph theory, neural network, organizational modeling, and genetic algorithms are applied to the design and evolution of programmatic and organizational architectures. Graph theory representations of programs and organizations increase modeling capabilities and flexibility, while illuminating preferable programmatic/organizational design features. Treating programs and organizations as neural networks results in better system synthesis, and more robust data modeling. Organizational modeling using covariance structures enhances the determination of organizational risk factors. Genetic algorithms improve programmatic evolution characteristics, while shedding light on rulebase requirements for achieving specified technological readiness levels, given budget and schedule resources. This program of research improves the robustness and verifiability of systems synthesis tools, including the Complex Organizational Metric for Programmatic Risk Environments (COMPRE).

  16. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Analysis of energy-based algorithms for RNA secondary structure prediction

    PubMed Central

    2012-01-01

    Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Conclusions Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets. PMID:22296803
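
    The bootstrap percentile method used above is straightforward to sketch; the per-RNA F-measures below are simulated stand-ins for the benchmark datasets discussed in the abstract.

      # Bootstrap percentile confidence interval for a mean F-measure.
      import numpy as np

      rng = np.random.default_rng(0)
      f_measures = rng.beta(8, 4, size=2000)       # stand-in per-RNA F-measures

      boot_means = np.array([
          rng.choice(f_measures, size=f_measures.size, replace=True).mean()
          for _ in range(10_000)
      ])
      lo, hi = np.percentile(boot_means, [2.5, 97.5])
      print(f"mean F = {f_measures.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")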

  18. Analysis of energy-based algorithms for RNA secondary structure prediction.

    PubMed

    Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H

    2012-02-01

    RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.

  19. Camera-enabled techniques for organic synthesis

    PubMed Central

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour-intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820

  20. On ℓ1-optimal decentralized performance

    NASA Technical Reports Server (NTRS)

    Sourlas, Dennis; Manousiouthakis, Vasilios

    1993-01-01

    In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the ℓ1-optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computation. It is shown that finite dimensional optimization problems with values arbitrarily close to that of the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the ℓ1 decentralized performance problem is presented. A global optimization approach to the solution of the finite dimensional approximating problems is also discussed.

  1. Spectrum orbit utilization program technical manual SOUP5 Version 3.8

    NASA Technical Reports Server (NTRS)

    Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.

    1984-01-01

    The underlying engineering and mathematical models, as well as the computational methods used by the SOUP5 analysis programs, which are part of the R2BCSAT-83 Broadcast Satellite Computational System, are described. Included are the algorithms used to calculate the technical parameters and references to the relevant technical literature. The system provides the following capabilities: requirements file maintenance, data base maintenance, elliptical satellite beam fitting to service areas, plan synthesis from specified requirements, plan analysis, and report generation/query. Each of these functions is briefly described.

  2. The Role of Ontologies in Schema-based Program Synthesis

    NASA Technical Reports Server (NTRS)

    Bures, Tomas; Denney, Ewen; Fischer, Bernd; Nistor, Eugen C.

    2004-01-01

    Program synthesis is the process of automatically deriving executable code from (non-executable) high-level specifications. It is more flexible and powerful than conventional code generation techniques that simply translate algorithmic specifications into lower-level code or only create code skeletons from structural specifications (such as UML class diagrams). Key to building a successful synthesis system is specializing to an appropriate application domain. The AUTOBAYES and AUTOFILTER systems, under development at NASA Ames, operate in the two domains of data analysis and state estimation, respectively. The central concept of both systems is the schema, a representation of reusable computational knowledge. This can take various forms, including high-level algorithm templates, code optimizations, datatype refinements, or architectural information. A schema also contains applicability conditions that are used to determine when it can be applied safely. These conditions can refer to the initial specification, to intermediate results, or to elements of the partially-instantiated code. Schema-based synthesis uses AI technology to recursively apply schemas to gradually refine a specification into executable code. This process proceeds in two main phases. A front-end gradually transforms the problem specification into a program represented in an abstract intermediate code. A back-end then compiles this further down into a concrete target programming language of choice. A core engine applies schemas to the initial problem specification, then uses the output of those schemas as the input for other schemas, until the full implementation is generated. Since there might be different schemas that implement different solutions to the same problem, this process can generate an entire solution tree. AUTOBAYES and AUTOFILTER have reached the level of maturity where they enable users to solve interesting application problems, e.g., the analysis of Hubble Space Telescope images. They are large (in total around 100kLoC Prolog), knowledge-intensive systems that employ complex symbolic reasoning to generate a wide range of non-trivial programs for complex application domains. Their schemas can have complex interactions, which make it hard to change them in isolation or even understand what an existing schema actually does. Adding more capabilities by increasing the number of schemas will only worsen this situation, ultimately leading to the entropy death of the synthesis system. The root cause of this problem is that the domain knowledge is scattered throughout the entire system and only represented implicitly in the schema implementations. In our current work, we are addressing this problem by making explicit the knowledge from different parts of the synthesis system. Here, we discuss how Gruber's definition of an ontology as an explicit specification of a conceptualization matches our efforts in identifying and explicating the domain-specific concepts. We outline the roles ontologies play in schema-based synthesis and argue that they address different audiences and serve different purposes. Their first role is descriptive: they serve as explicit documentation, and help to understand the internal structure of the system. Their second role is prescriptive: they provide the formal basis against which the other parts of the system (e.g., schemas) can be checked. Their final role is referential: ontologies also provide semantically meaningful "hooks" which allow schemas and tools to access the internal state of the program derivation process (e.g., fragments of the generated code) in domain-specific rather than language-specific terms, and thus to modify it in a controlled fashion. For discussion purposes we use AUTOLINEAR, a small synthesis system we are currently experimenting with, which can generate code for solving a system of linear equations, Az = b.
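
    To make the schema concept concrete, the following toy sketch (plain Python; the class layout and the single rule are illustrative assumptions, not AutoBayes's actual Prolog schema representation) shows a schema as a guarded refinement that is applied recursively until only executable code remains:

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Schema:
            # A reusable piece of computational knowledge: an applicability
            # condition (guard) plus a refinement that rewrites a spec fragment.
            name: str
            guard: Callable[[dict], bool]
            refine: Callable[[dict], list]

        def synthesize(spec, schemas):
            # Recursively apply the first applicable schema until only code remains.
            if isinstance(spec, str):          # already concrete code
                return [spec]
            for s in schemas:
                if s.guard(spec):
                    out = []
                    for part in s.refine(spec):
                        out.extend(synthesize(part, schemas))
                    return out
            raise ValueError(f"no applicable schema for {spec}")

        # Toy rule: emit a Gaussian-elimination solver for A z = b when A is dense
        dense_solver = Schema(
            name="dense-linear-solve",
            guard=lambda sp: sp.get("task") == "solve_linear" and sp.get("A") == "dense",
            refine=lambda sp: ["z = gaussian_elimination(A, b)"],
        )
        print(synthesize({"task": "solve_linear", "A": "dense"}, [dense_solver]))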

  3. Evaluation of various LandFlux evapotranspiration algorithms using the LandFlux-EVAL synthesis benchmark products and observational data

    NASA Astrophysics Data System (ADS)

    Michel, Dominik; Hirschi, Martin; Jimenez, Carlos; McCabe, Mathew; Miralles, Diego; Wood, Eric; Seneviratne, Sonia

    2014-05-01

    Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, in order to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle, which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble-evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al. 2013). Currently, a multi-decadal global reference heat flux data set for ET at the land surface is being developed within the LandFlux initiative of the Global Energy and Water Cycle Experiment (GEWEX). This LandFlux v0 ET data set comprises four ET algorithms forced with a common radiation and surface meteorology. In order to estimate the agreement of the LandFlux v0 ET data with existing data sets, it is compared to the recently available LandFlux-EVAL synthesis benchmark product. Additional evaluation of the LandFlux v0 ET data set is based on a comparison to in situ observations of a weighing lysimeter from the hydrological research site Rietholzbach in Switzerland. These analyses serve as a test bed for similar evaluation procedures that are envisaged for ESA's WACMOS-ET initiative (http://wacmoset.estellus.eu). Reference: Mueller, B., Hirschi, M., Jimenez, C., Ciais, P., Dirmeyer, P. A., Dolman, A. J., Fisher, J. B., Jung, M., Ludwig, F., Maignan, F., Miralles, D. G., McCabe, M. F., Reichstein, M., Sheffield, J., Wang, K., Wood, E. F., Zhang, Y., and Seneviratne, S. I. (2013). Benchmark products for land evapotranspiration: LandFlux-EVAL multi-data set synthesis. Hydrology and Earth System Sciences, 17(10): 3707-3720.

  4. Emotion Recognition of Weblog Sentences Based on an Ensemble Algorithm of Multi-label Classification and Word Emotions

    NASA Astrophysics Data System (ADS)

    Li, Ji; Ren, Fuji

    Weblogs have greatly changed the communication ways of mankind. Affective analysis of blog posts is valuable for many applications such as text-to-speech synthesis or computer-assisted recommendation. Traditional emotion recognition in text, based on single-label classification, cannot satisfy the higher requirements of affective computing. In this paper, the automatic identification of sentence emotion in weblogs is modeled as a multi-label text categorization task. Experiments are carried out on 12273 blog sentences from the Chinese emotion corpus Ren_CECps with 8-dimension emotion annotation. An ensemble algorithm, RAKEL, is used to recognize dominant emotions from the writer's perspective. Our emotion feature, using a detailed intensity representation for word emotions, outperforms other main features such as the word frequency feature and the traditional lexicon-based feature. In order to deal with relatively complex sentences, we integrate grammatical characteristics of punctuation, disjunctive connectives, modification relations and negation into the features. This achieves 13.51% and 12.49% increases in Micro-averaged F1 and Macro-averaged F1, respectively, compared to the traditional lexicon-based feature. The results show that a multiple-dimension emotion representation with grammatical features can efficiently classify sentence emotion in a multi-label problem.

  5. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
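
    A minimal sketch of the graph-synthesis step described above, using networkx; the job-record fields and the shared-node/time-proximity edge weighting are assumed for illustration, not taken from the report:

        import itertools
        import networkx as nx

        # Hypothetical job records: id, user, compute nodes used, start hour
        jobs = [
            {"id": "j1", "user": "alice", "nodes": {"n1", "n2"}, "start": 0},
            {"id": "j2", "user": "bob",   "nodes": {"n2", "n3"}, "start": 1},
            {"id": "j3", "user": "alice", "nodes": {"n4"},       "start": 1},
        ]

        G = nx.Graph()
        for j in jobs:
            G.add_node(j["id"], user=j["user"])

        # Weighted edges express relationships between jobs: shared compute
        # nodes and proximity in time, as in the report's description
        for a, b in itertools.combinations(jobs, 2):
            w = len(a["nodes"] & b["nodes"]) + (1 if abs(a["start"] - b["start"]) <= 1 else 0)
            if w > 0:
                G.add_edge(a["id"], b["id"], weight=w)

        print(G.edges(data=True))  # downstream: clustering, failure analysis, etc.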

  6. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In recent decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely known tools; they show possible weather conditions under the CO2 emission scenarios defined by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To overcome this gap, statistical downscaling techniques can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators and weather typing. The first two categories describe the relationships between the weather factors and precipitation based on deterministic algorithms, such as linear or nonlinear regression and ANNs, and on stochastic approaches, such as Markov chain theory and statistical distributions, respectively. Weather typing clusters weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, and a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step. Three sub-steps are used in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitations observed at rainfall stations, are collected for downscaling. Second, K-means clustering groups the weather factors into four weather types. Third, the Markov chain transition matrices and the conditional probability density function (PDF) of precipitation, approximated by kernel density estimation, are calculated for each weather type. In the synthesis step, 100 patterns of synthetic data are generated. First, the weather type of the n-th day is determined by the results of K-means clustering; the associated transition matrix and PDF of that weather type are then used in the following sub-steps of the synthesis process. Second, the precipitation condition, dry or wet, is synthesized based on the transition matrix. If the synthesized condition is dry, the quantity of precipitation is zero; otherwise, the quantity is determined in the third sub-step. Third, the quantity of the synthesized precipitation is drawn as a random variable from the PDF defined above. Synthesis performance is evaluated by comparing the monthly mean curves and monthly standard deviation curves of the historical precipitation data with those of the 100 patterns of synthetic data.
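
    The calibration and synthesis steps can be sketched compactly with scikit-learn and SciPy. The toy data, the four weather types, and the reduction of the Markov chain to a per-type wet-day probability are simplifying assumptions for illustration:

        import numpy as np
        from sklearn.cluster import KMeans
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        factors = rng.normal(size=(3650, 5))                  # daily weather factors (toy)
        precip = np.maximum(rng.normal(1.0, 3.0, 3650), 0.0)  # daily precipitation, mm

        # Calibration: cluster days into four weather types; per type, fit a
        # wet-day precipitation KDE and a wet-day probability (a stand-in for
        # the full Markov transition matrix of the paper)
        types = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)
        kdes, p_wet = {}, {}
        for t in range(4):
            wet = precip[(types == t) & (precip > 0)]
            kdes[t] = gaussian_kde(wet)
            p_wet[t] = (precip[types == t] > 0).mean()

        # Synthesis: draw wet/dry for the day's type, then an amount from its KDE
        def synthesize_day(t):
            if rng.random() >= p_wet[t]:
                return 0.0
            return max(kdes[t].resample(1)[0, 0], 0.0)

        sample = [synthesize_day(t) for t in types[:365]]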

  7. Wide-Field Imaging Interferometry Spatial-Spectral Image Synthesis Algorithms

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.; Leisawitz, David T.; Rinehart, Stephen A.; Memarsadeghi, Nargess; Sinukoff, Evan J.

    2012-01-01

    Developed is an algorithmic approach for wide field-of-view interferometric spatial-spectral image synthesis. The data collected from the interferometer consist of a set of double-Fourier image data cubes, one cube per baseline. Each cube is three-dimensional, consisting of two-dimensional arrays of detector counts versus delay line position. For each baseline, a moving delay line allows collection of a large set of interferograms over the 2D wide-field detector grid: one sampled interferogram per detector pixel per baseline. This aggregate set of interferograms is algorithmically processed to construct a single spatial-spectral cube with angular resolution approaching the ratio of the wavelength to the longest baseline. Wide-field imaging is accomplished by ensuring that the range of motion of the delay line encompasses the zero optical path difference fringe for each detector pixel in the desired field of view. Each baseline cube is incoherent relative to all other baseline cubes and thus has only phase information relative to itself. This lost phase information is recovered by having point, or otherwise known, sources within the field of view. The reference source phase is known and utilized as a constraint to recover the coherent phase relation between the baseline cubes, which is key to the image synthesis. Described will be the mathematical formalism with phase referencing, and results will be shown using data collected from the NASA/GSFC Wide-Field Imaging Interferometry Testbed (WIIT).

  8. Design synthesis and optimization of joined-wing transports

    NASA Technical Reports Server (NTRS)

    Gallman, John W.; Smith, Stephen C.; Kroo, Ilan M.

    1990-01-01

    A computer program for aircraft synthesis using a numerical optimizer was developed to study the application of the joined-wing configuration to transport aircraft. The structural design algorithm included the effects of secondary bending moments to investigate the possibility of tail buckling and to design joined wings resistant to buckling. The structural weight computed using this method was combined with a statistically-based method to obtain realistic estimates of total lifting surface weight and aircraft empty weight. A variety of 'optimum' joined-wing and conventional aircraft designs were compared on the basis of direct operating cost, gross weight, and cruise drag. The most promising joined-wing designs were found to have a joint location at about 70 percent of the wing semispan. The optimum joined-wing transport is shown to save 1.7 percent in direct operating cost and 11 percent in drag for a 2000 nautical mile transport mission.

  9. Medical image enhancement using resolution synthesis

    NASA Astrophysics Data System (ADS)

    Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.

    2011-03-01

    We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution-synthesis (RS) interpolation algorithm. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high quality image scanned at a high dose level. Image enhancement is achieved by predicting the high quality image by classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors to use in the scheme, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image details. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
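
    A compact sketch of classification-based linear regression in the spirit of this scheme, using scikit-learn; the patch size, class count, and training pairs are illustrative assumptions:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        clean = rng.normal(size=(5000, 25))               # 5x5 patches, flattened (toy)
        noisy = clean + rng.normal(0.0, 0.3, clean.shape)
        target = clean[:, 12]                             # high-quality center pixel

        # Classify each noisy patch, then fit one ridge-regularized
        # linear predictor per class
        k = 8
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(noisy)
        predictors = [Ridge(alpha=1.0).fit(noisy[km.labels_ == c],
                                           target[km.labels_ == c])
                      for c in range(k)]

        def enhance(patch):
            # Predict an enhanced center pixel for one noisy patch
            c = km.predict(patch.reshape(1, -1))[0]
            return predictors[c].predict(patch.reshape(1, -1))[0]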

  10. Optical chirp z-transform processor with a simplified architecture.

    PubMed

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings, which makes fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can potentially be used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
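
    For reference, the discrete-time-convolution view of the CZT underlying such processors can be written in a few lines. This is the standard Bluestein fast-convolution formulation in NumPy, not the paper's optical design:

        import numpy as np

        def czt(x, M, W, A):
            # Chirp z-transform X_k = sum_n x_n A^(-n) W^(nk), computed via fast
            # convolution using the identity nk = (n^2 + k^2 - (k - n)^2) / 2
            N, n, k = len(x), np.arange(len(x)), np.arange(M)
            a = x * A ** (-n) * W ** (n ** 2 / 2.0)       # pre-chirped input
            L = int(2 ** np.ceil(np.log2(N + M - 1)))     # FFT length for linear conv
            v = np.zeros(L, dtype=complex)                # chirp filter, lags -(N-1)..M-1
            v[:M] = W ** (-(k ** 2) / 2.0)
            v[L - N + 1:] = W ** (-(np.arange(N - 1, 0, -1) ** 2) / 2.0)
            g = np.fft.ifft(np.fft.fft(a, L) * np.fft.fft(v))
            return g[:M] * W ** (k ** 2 / 2.0)            # post-chirp

        # Special case: A = 1, W = exp(-2j*pi/N), M = N reduces to the DFT
        x = np.random.default_rng(0).normal(size=32)
        assert np.allclose(czt(x, 32, np.exp(-2j * np.pi / 32), 1.0), np.fft.fft(x))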

  11. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are then compared in the context of the two design problems.

  12. Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach

    NASA Astrophysics Data System (ADS)

    Demirer, Nazli

    The control of systems with autonomous mobile agents has been a point of interest recently, with many applications such as surveillance, coverage, searching over an area with probabilistic target locations, or exploring an area. In all of these applications, the main goal of the swarm is to distribute itself over an operational space to achieve mission objectives specified by the density of the swarm. This research focuses on the problem of controlling the distribution of multi-agent systems considering a hierarchical control structure, where coordination of the whole swarm is achieved at the high level and individual vehicle/agent control is managed at the low level. High-level coordination algorithms use macroscopic models that describe the collective behavior of the whole swarm and specify the agent motion commands whose execution will lead to the desired swarm behavior. The low-level control laws execute the motion to follow these commands at the agent level. The main objective of this research is to develop high-level decision control policies and algorithms to achieve physically realizable commanding of the agents by imposing mission constraints on the distribution. We also make some connections with decentralized low-level motion control. This dissertation proposes a Markov chain based method to control the density distribution of the whole system, where the implementation can be achieved in a decentralized manner with no communication between agents, since establishing communication with a large number of agents is highly challenging. The ultimate goal is to guide the overall density distribution of the system to a prescribed steady-state desired distribution while satisfying desired transition and safety constraints. Here, the desired distribution is determined based on the mission requirements; for example, in area search applications, the desired distribution should match closely with the probabilistic target locations. The proposed method is applicable both to systems with a single agent and to systems with a large number of agents due to its probabilistic nature, where the probability distribution of each agent's state evolves according to a finite-state, discrete-time Markov chain (MC). Hence, designing proper decision control policies requires numerically tractable solution methods for the synthesis of Markov chains. The synthesis problem has the form of a linear matrix inequality (LMI) problem, with the constraints formulated as LMIs. To this end, we propose convex necessary and sufficient conditions for safety constraints in Markov chains, which is a novel result in the Markov chain literature. In addition to the LMI-based, offline Markov matrix synthesis method, we also propose a QP-based, online method to compute a time-varying Markov matrix based on real-time density feedback. Both are convex optimization problems that can be solved in a reliable and tractable way, utilizing existing tools in the literature. Low Earth Orbit (LEO) swarm simulations are presented to validate the effectiveness of the proposed algorithms. Another problem tackled as part of this research is the generalization of the density control problem to autonomous mobile agents with two control modes: ON and OFF. Here, each mode consists of a (possibly overlapping) finite set of actions; that is, there exists a set of actions for the ON mode and another set for the OFF mode. We formulate a new Markov chain synthesis problem, with additional measurements for the state transitions, in which a policy is designed to ensure the desired safety and convergence properties for the underlying Markov chain.
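
    A minimal sketch of an offline synthesis step in this spirit, using cvxpy; the ring topology, uniform target density, and Frobenius-norm objective are illustrative assumptions, and the transient safety LMIs are omitted:

        import cvxpy as cp
        import numpy as np

        n = 6                                                  # number of spatial bins
        Adj = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)     # allowed moves (ring)
        Adj[0, -1] = Adj[-1, 0] = 1
        v = np.ones(n) / n                                     # desired steady-state density

        M = cp.Variable((n, n), nonneg=True)                   # column-stochastic matrix
        constraints = [
            cp.sum(M, axis=0) == 1,                            # columns sum to one
            M @ v == v,                                        # v is invariant under M
            cp.multiply(M, 1 - Adj) == 0,                      # forbid impossible moves
        ]
        # Heuristic objective: pull M toward the rank-one matrix v 1^T (fast mixing)
        prob = cp.Problem(cp.Minimize(cp.norm(M - np.outer(v, np.ones(n)), "fro")),
                          constraints)
        prob.solve()
        print(np.round(M.value, 3))   # each agent transitions per its current bin's column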

  13. Asymptotic Eigenstructures

    NASA Technical Reports Server (NTRS)

    Thompson, P. M.; Stein, G.

    1980-01-01

    The behavior of the closed loop eigenstructure of a linear system with output feedback is analyzed as a single parameter multiplying the feedback gain is varied. An algorithm is presented that computes the asymptotically infinite eigenstructure, and it is shown how a system with high-gain feedback decouples into single-input, single-output systems. Then a synthesis algorithm is presented which uses full state feedback to achieve a desired asymptotic eigenstructure.

  14. Design of a broadband active silencer using μ-synthesis

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Zeung, Pingshun

    2004-01-01

    A robust spatially feedforward controller is developed for broadband attenuation of noise in ducts. To meet the requirements of robust performance and robust stability in the presence of plant uncertainties, a μ-synthesis procedure via D-K iteration is exploited to obtain the optimal controller. This approach treats uncertainties as modelling errors of the nominal plant at high frequencies and is implemented using a floating-point digital signal processor (DSP). An experimental investigation was undertaken on a finite-length duct to justify the proposed controller. The μ-controller is compared to other control algorithms such as the H2 method, the H∞ method and the filtered-U least mean square (FULMS) algorithm. Experimental results indicate that the proposed system attained 25.8 dB maximal attenuation in the band 250-650 Hz.

  15. Assessing the quality of restored images in optical long-baseline interferometry

    NASA Astrophysics Data System (ADS)

    Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric

    2017-03-01

    Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for combinations of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to the test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to the naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics because, being linear, it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
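
    A sketch of such a metric, following the paper's prescription that the ground truth be convolved with an effective point spread function before comparison (NumPy/SciPy; the unit-flux normalization is an assumption):

        import numpy as np
        from scipy.signal import fftconvolve

        def l1_score(reconstruction, truth, psf):
            # l1 distance between a reconstructed map and the ground-truth image
            # convolved with an effective PSF, both normalized to unit total flux
            blurred = fftconvolve(truth, psf, mode='same')
            r = reconstruction / reconstruction.sum()
            b = blurred / blurred.sum()
            return np.abs(r - b).sum()   # lower is better; 0 means a perfect match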

  16. Design of a composite filter realizable on practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Ramakrishnan, Ramachandran

    1994-01-01

    Hybrid optical correlator systems use two spatial light modulators (SLM's), one at the input plane and the other at the filter plane. Currently available SLM's, such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLM's, exhibit arbitrarily constrained operating characteristics. Pattern recognition filters designed under the assumption that the SLM's have ideal operating characteristics may not behave as expected when implemented on DMD or LCTV SLM's. It is therefore necessary to incorporate the SLM constraints into the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then, using this algorithm, a new approach for the design of an SLM-constrained distortion-invariant filter in the presence of an input SLM is developed. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other on the Hooke and Jeeves method. Also, the simulated-annealing-based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations, and the results are compared with a simulated-annealing-optimization-based MACE filter. The filters are designed for different LCTV SLM operating characteristics and the correlation responses are compared. The distortion tolerance and false-class image discrimination qualities of the filter are comparable to those of the simulated-annealing-based filter, but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.

  17. Quantitative assessment of 12-lead ECG synthesis using CAVIAR.

    PubMed

    Scherer, J A; Rubel, P; Fayn, J; Willems, J L

    1992-01-01

    The objective of this study is to assess the performance of patient-specific, segment-specific (PSSS) synthesis of QRST complexes using CAVIAR, a new method for the serial comparison of electrocardiograms and vectorcardiograms. A collection of 250 multi-lead recordings from the Common Standards for Quantitative Electrocardiography (CSE) diagnostic pilot study is employed. QRS and ST-T segments are independently synthesized using the PSSS algorithm so that the mean-squared error between the original and estimated waveforms is minimized. CAVIAR compares the recorded and synthesized QRS and ST-T segments and calculates the mean-quadratic deviation as a measure of error. The results of this study indicate that the estimated QRS complexes are good representatives of their recorded counterparts, and that the integrity of the spatial information is maintained by the PSSS synthesis process. Analysis of the ST-T segments suggests that the deviations between recorded and synthesized waveforms are considerably greater than those associated with the QRS complexes. The poorer performance on the ST-T segments is attributed to magnitude normalization of the spatial loops, low-voltage passages, and noise interference. Using the mean-quadratic deviation and CAVIAR as methods of performance assessment, this study indicates that the PSSS-synthesis algorithm accurately maintains the signal information within the 12-lead electrocardiogram.

  18. Library-based illumination synthesis for critical CMOS patterning.

    PubMed

    Yu, Jue-Chin; Yu, Peichen; Chao, Hsueh-Yung

    2013-07-01

    In optical microlithography, the illumination source for critical complementary metal-oxide-semiconductor layers needs to be determined in the early stage of a technology node with very limited design information, leading to simple binary shapes. Recently, the availability of freeform sources has permitted increased pattern fidelity and relaxed mask complexities with minimal insertion risks to the current manufacturing flow. However, source optimization across many patterns is often treated as a design-of-experiments problem, which may not fully exploit the benefits of a freeform source. In this paper, a rigorous source-optimization algorithm is presented via linear superposition of optimal sources for pre-selected patterns. We show that analytical solutions are made possible by using the Hopkins formulation and quadratic programming. The algorithm allows synthesized illumination to be linked with assorted pattern libraries, which has a direct impact on design rule studies for early planning and design automation for full-wafer optimization.

  19. "Genetically Engineered" Nanoelectronics

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Salazar-Lazaro, Carlos H.; Stoica, Adrian; Cwik, Thomas

    2000-01-01

    The quantum mechanical functionality of nanoelectronic devices such as resonant tunneling diodes (RTDs), quantum well infrared photodetectors (QWIPs), quantum well lasers, and heterostructure field effect transistors (HFETs) is enabled by material variations on an atomic scale. The design and optimization of such devices require a fundamental understanding of electron transport in such dimensions. The Nanoelectronic Modeling Tool (NEMO) is a general-purpose quantum device design and analysis tool based on a fundamental non-equilibrium electron transport theory. NEMO was combined with a parallelized genetic algorithm package (PGAPACK) to evolve structural and material parameters to match a desired set of experimental data. A numerical experiment that evolves structural variations such as layer widths and doping concentrations is performed to analyze an experimental current-voltage characteristic. The genetic algorithm is found to drive the NEMO simulation parameters close to the experimentally prescribed layer thicknesses and doping profiles. With such quantitative agreement between theory and experiment, design synthesis can be performed.

  20. A sound quality model for objective synthesis evaluation of vehicle interior noise based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Wang, Y. S.; Shen, G. Q.; Xing, Y. F.

    2014-03-01

    Based on the artificial neural network (ANN) technique, an objective sound quality evaluation (SQE) model for the synthetical annoyance of vehicle interior noise is presented in this paper. Following the GB/T 18697 standard, the interior noises of a sample vehicle under different working conditions are first measured and saved in a noise database. Mathematical models for the loudness, sharpness and roughness of the measured vehicle noises are established and implemented in Matlab. Sound qualities of the vehicle interior noises are also estimated by jury tests following the anchored semantic differential (ASD) procedure. Using the objective and subjective evaluation results, an ANN-based model for the synthetical annoyance evaluation of vehicle noises, called ANN-SAE, is then developed. Finally, the ANN-SAE model is validated by verification tests using the leave-one-out algorithm. The results suggest that the proposed ANN-SAE model is accurate and effective and can be directly used to estimate the sound quality of vehicle interior noises, which is very helpful for vehicle acoustical design and improvement. The ANN-SAE approach may be extended to other sound-related fields for product quality evaluation in SQE engineering.

  1. Efficient lossy compression implementations of hyperspectral images: tools, hardware platforms, and comparisons

    NASA Astrophysics Data System (ADS)

    García, Aday; Santos, Lucana; López, Sebastián.; Callicó, Gustavo M.; Lopez, Jose F.; Sarmiento, Roberto

    2014-05-01

    Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. It is therefore mandatory to provide hardware implementations for this type of algorithm in order to achieve the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on the one hand, introducing the whole C-language description into CatapultC and, on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC, connecting and controlling them by an RTL description code without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for a SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented on an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All these results serve to demonstrate that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between both implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.

  2. Multi-variants synthesis of Petri nets for FPGA devices

    NASA Astrophysics Data System (ADS)

    Bukowiec, Arkadiusz; Doligalski, Michał

    2015-09-01

    A new method for the synthesis of application-specific logic controllers for FPGA devices is presented. The control algorithm is specified with a control-interpreted Petri net (PT type), which allows parallel processes to be specified in an easy way. The Petri net is decomposed into state-machine-type subnets, where each subnet represents one parallel process. For this purpose, Petri net coloring algorithms are applied. Two approaches to such decomposition are presented: with doublers of macroplaces, or with one global wait place. Next, the subnets are implemented into a two-level logic circuit of the controller. The levels of the logic circuit are obtained as a result of its architectural decomposition. The first-level combinational circuit is responsible for generating next places, and the second-level decoder is responsible for generating output symbols. Two variants of such circuits are worked out: with one shared operational memory, or with many flexible distributed memories as a decoder. Variants of Petri net decomposition and structures of logic circuits can be combined together without any restrictions, which leads to four variants of multi-variant synthesis.

  3. Discrete-continuous variable structural synthesis using dual methods

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.; Fleury, C.

    1980-01-01

    Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.

  4. Urinary metabolic profiling of asymptomatic acute intermittent porphyria using a rule-mining-based algorithm.

    PubMed

    Luck, Margaux; Schmitt, Caroline; Talbi, Neila; Gouya, Laurent; Caradeuc, Cédric; Puy, Hervé; Bertho, Gildas; Pallet, Nicolas

    2018-01-01

    Metabolomic profiling combines nuclear magnetic resonance (NMR) spectroscopy with supervised statistical analysis and may allow a better understanding of the mechanisms of a disease. In this study, urinary metabolic profiling of individuals with porphyrias was performed to predict different types of the disease and to propose new pathophysiological hypotheses. Urine 1H-NMR spectra of 73 patients with asymptomatic acute intermittent porphyria (aAIP) and familial or sporadic porphyria cutanea tarda (f/sPCT) were compared using a supervised rule-mining algorithm. NMR spectrum buckets (bins), corresponding to rules, were extracted and a logistic regression was trained. The results generated by our rule-mining algorithm were consistent with those obtained using partial least squares discriminant analysis (PLS-DA), and the predictive performance of the model was significant. Buckets identified by the algorithm corresponded to metabolites involved in glycolysis and energy-conversion pathways, notably acetate, citrate and pyruvate, which were found at higher concentrations in the urine of aAIP patients compared with PCT patients. Metabolic profiling did not discriminate sPCT from fPCT patients. These results suggest that metabolic reprogramming occurs in aAIP individuals, even in the absence of overt symptoms, and support the relationship between heme synthesis and mitochondrial energetic metabolism.

  5. Tower-scale performance of four observation-based evapotranspiration algorithms within the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Michel, Dominik; Miralles, Diego; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have recently aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, in order to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle, which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble-evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al. 2013). The WACMOS-ET project (http://wacmoset.estellus.eu) started in the year 2012 and constitutes an ESA contribution to the GEWEX initiative LandFlux. It focuses on advancing the development of ET estimates at global, regional and tower scales. WACMOS-ET aims at developing a Reference Input Data Set exploiting European Earth Observation assets and deriving ET estimates produced by a set of four ET algorithms covering the period 2005-2007. The algorithms used are SEBS (Su et al., 2002), Penman-Monteith from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008) and GLEAM (Miralles et al., 2011). The algorithms are run with Fluxnet tower observations, reanalysis data (ERA-Interim), and satellite forcings. They are cross-compared and validated against in-situ data. In this presentation the performance of the different ET algorithms with respect to different temporal resolutions, hydrological regimes and land cover types (including grassland, cropland, shrubland, vegetation mosaic, savanna, woody savanna, needleleaf forest, deciduous forest and mixed forest) is evaluated at the tower scale in 24 pre-selected study regions on three continents (Europe, North America, and Australia). References:
    Fisher, J. B., Tu, K. P., and Baldocchi, D. D.: Global estimates of the land-atmosphere water flux based on monthly AVHRR and ISLSCP-II data, validated at 16 FLUXNET sites, Remote Sens. Environ., 112, 901-919, 2008.
    Jiménez, C. et al.: Global intercomparison of 12 land surface heat flux estimates, J. Geophys. Res., 116, D02102, 2011.
    Miralles, D. G. et al.: Global land-surface evaporation estimated from satellite-based observations, Hydrol. Earth Syst. Sci., 15, 453-469, 2011.
    Mu, Q., Zhao, M., and Running, S. W.: Improvements to a MODIS global terrestrial evapotranspiration algorithm, Remote Sens. Environ., 115, 1781-1800, 2011.
    Mueller, B. et al.: Benchmark products for land evapotranspiration: LandFlux-EVAL multi-dataset synthesis, Hydrol. Earth Syst. Sci., 17, 3707-3720, 2013.
    Su, Z.: The Surface Energy Balance System (SEBS) for estimation of turbulent heat fluxes, Hydrol. Earth Syst. Sci., 6, 85-99, 2002.

  6. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    NASA Astrophysics Data System (ADS)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point geostatistics is a well-known general statistical framework with which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring hard data points, minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. It therefore preserves pattern continuity in both continuous and categorical variables very well. Each of its realizations also shows a fuzzy result similar to the expected result of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel main core of our algorithm enables it to use a GPU efficiently to reduce CPU time. A new validation method for MPS is also proposed in this paper.

  7. Exploring three faint source detection methods for aperture synthesis radio images

    NASA Astrophysics Data System (ADS)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity-to-noise ratio, these objects can easily be missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique combines wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources in radio interferometric images.

  8. The concept of template-based de novo design from drug-derived molecular fragments and its application to TAR RNA.

    PubMed

    Schüller, Andreas; Suhartono, Marcel; Fechner, Uli; Tanrikulu, Yusuf; Breitung, Sven; Scheffer, Ute; Göbel, Michael W; Schneider, Gisbert

    2008-02-01

    Principles of fragment-based molecular design are presented and discussed in the context of de novo drug design. The underlying idea is to dissect known drug molecules into fragments by straightforward pseudo-retro-synthesis. The resulting building blocks are then used for automated assembly of new molecules. A particular question has been whether this approach is actually able to perform scaffold-hopping. A prospective case study illustrates the usefulness of fragment-based de novo design for finding new scaffolds. We were able to identify a novel ligand disrupting the interaction between the Tat peptide and TAR RNA, which is part of the human immunodeficiency virus (HIV-1) mRNA. Using a single template structure (acetylpromazine) as the reference molecule and a topological pharmacophore descriptor (CATS), new chemotypes were automatically generated by our de novo design software Flux. Flux features an evolutionary algorithm for fragment-based compound assembly and optimization. Pharmacophore superimposition and docking into the target RNA suggest perfect matching between the template molecule and the designed compound. Chemical synthesis was straightforward, and the bioactivity of the designed molecule was confirmed in a FRET assay. This study demonstrates the practicability of de novo design for generating RNA ligands containing novel molecular scaffolds.

  9. A Practical Millimeter-Wave Holographic Imaging System with Tunable IF Attenuator

    NASA Astrophysics Data System (ADS)

    Zhu, Yu-Kun; Yang, Ming-Hui; Wu, Liang; Sun, Yun; Sun, Xiao-Wei

    2017-10-01

    A practical millimeter-wave (mmw) holographic imaging system with a tunable intermediate frequency (IF) attenuator has been developed. It can be used for the detection of concealed weapons at security checkpoints, especially at airports. The system scans passengers and detects weapons hidden under clothing. To reconstruct the three-dimensional (3-D) image, a holographic mmw imaging algorithm based on aperture synthesis and back scattering is presented. The system is active and works at 28-33 GHz. A tunable IF attenuator is applied to compensate for the intensity and phase differences across channels and frequencies.

  10. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Component Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that, compared to Principal Component Analysis, ICA provides superior performance for the modeling of natural and synthetic textures.
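
    A sketch of the ICA step on filter outputs, using scikit-learn's FastICA; the random filter bank and patch data are stand-ins for the steerable filter space used in the paper:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        patches = rng.normal(size=(4000, 64))   # 8x8 texture patches, flattened (toy)
        bank = rng.normal(size=(64, 16))        # 16 linear filters (stand-in bank)

        responses = patches @ bank              # filter outputs, one row per patch

        # ICA finds the mixture of filter outputs whose components are maximally
        # independent, so the product of marginals best approximates the joint
        ica = FastICA(n_components=16, random_state=0, max_iter=1000)
        sources = ica.fit_transform(responses)

        # Rows of this matrix are the effective (most informative) filters
        effective_filters = ica.components_ @ bank.T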

  11. Training-based descreening.

    PubMed

    Siddiqui, Hasib; Bouman, Charles A

    2007-03-01

    Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques: resolution-synthesis-based denoising (RSD) and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate of the descreened image. The modified SUSAN filter uses the output of RSD to perform edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse-halftone scanned images containing stochastic error-diffusion halftone noise.

  12. Anytime synthetic projection: Maximizing the probability of goal satisfaction

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.

    1990-01-01

    A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.

  13. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation and results of the created tools.

  14. Adaptive Electronic Camouflage Using Texture Synthesis

    DTIC Science & Technology

    2012-04-01

    algorithm begins by computing the GLCMs, G_IN and G_OUT, of the input image (e.g., image of local environment) and output image (randomly generated) ... respectively. The algorithm randomly selects a pixel from the output image and cycles its gray-level through all values. For each value, G_OUT is updated ... The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between G_IN and G_OUT. Without selecting a
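
    The excerpt above is truncated, but the gray-level co-occurrence matrix (GLCM) matching loop it describes is standard; below is a minimal Python sketch under that reading. Recomputing the GLCM for every candidate value is done here for clarity; an incremental update would be used in practice.

        import numpy as np

        def glcm(img, levels=8):
            """GLCM for the horizontal (dx=1, dy=0) offset, normalized."""
            g = np.zeros((levels, levels))
            np.add.at(g, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1.0)
            return g / g.sum()

        def synthesize(input_img, shape, levels=8, iters=20000, seed=0):
            """Sketch of the GLCM-matching loop: pick a random output pixel,
            cycle it through every gray level, and keep the level that
            minimizes the error between input and output GLCMs."""
            rng = np.random.default_rng(seed)
            g_in = glcm(input_img, levels)
            out = rng.integers(0, levels, size=shape)
            for _ in range(iters):
                y, x = rng.integers(0, shape[0]), rng.integers(0, shape[1])
                errs = []
                for v in range(levels):
                    out[y, x] = v
                    errs.append(np.sum((glcm(out, levels) - g_in) ** 2))
                out[y, x] = int(np.argmin(errs))  # permanent change
            return out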

  15. Longitudinal flight control of a civil aircraft with handling qualities satisfaction

    NASA Astrophysics Data System (ADS)

    Saussie, David Alexandre

    2010-03-01

    Fulfilling handling qualities remains a challenging problem in flight control design. These criteria of different natures are derived from a wide body of experience based on flight tests and data analysis, and they have to be considered if one expects good behaviour of the aircraft. The goal of this thesis is to develop synthesis methods able to satisfy these criteria with fixed classical architectures imposed by the manufacturer, or with a new flight control architecture. This is applied to the longitudinal flight model of a Bombardier Inc. business jet, namely the Challenger 604. A first step of our work consists in compiling the most commonly used handling qualities criteria in order to compare them. Special attention is devoted to the dropback criterion, for which theoretical analysis leads us to establish a practical formulation for synthesis purposes. Moreover, the comparison of the criteria through a reference model highlights dominant criteria that, once satisfied, ensure that the others are satisfied as well. Consequently, we are able to consider the fulfillment of these criteria in the fixed control architecture framework. Guardian maps (Saydy et al., 1990) are then used to handle the problem. Originally intended for robustness analysis, they are integrated here into various algorithms for controller synthesis. Incidentally, this fixed-architecture problem is similar to the static output feedback stabilization problem and to reduced-order controller synthesis. Algorithms performing stabilization and pole assignment in a specific region of the complex plane are then proposed. Afterwards, they are extended to handle the gain-scheduling problem, and the controller is scheduled through the entire flight envelope with respect to scheduling parameters. Thereafter, the fixed architecture is put aside, conserving only the same output signals. The main idea is to use H-infinity synthesis to obtain an initial controller that satisfies the handling qualities thanks to reference-model matching and that is robust against mass and center-of-gravity variations. Using robust modal control (Magni, 2002), we are able to substantially reduce the controller order and to structure it so as to come close to a classical architecture. An auto-scheduling method finally allows us to schedule the controller with respect to the scheduling parameters. Two different paths are thus used to solve the same problem; each one exhibits its own advantages and disadvantages.

  16. Time-frequency filtering and synthesis from convex projections

    NASA Astrophysics Data System (ADS)

    White, Langford B.

    1990-11-01

    This paper describes the application of the theory of projections onto convex sets to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville distributions (WVDs) of L2 signals forms the boundary of a closed convex subset of L2(R2). This result is obtained by considering the convex set of states on the Heisenberg group, of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and nonlinear filtering operations are incorporated by formulating them as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
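
    The core iteration behind such methods is easy to state; the sketch below is a generic projections-onto-convex-sets loop in Python with toy sets (a nonnegativity constraint and a hyperplane), not the Wigner-Ville projection derived in the paper.

        import numpy as np

        def pocs(x0, projections, iters=100):
            """Projection-onto-convex-sets iteration: apply each projector in
            turn; for closed convex sets with nonempty intersection, the
            iterates converge to a point in the intersection."""
            x = x0
            for _ in range(iters):
                for proj in projections:
                    x = proj(x)
            return x

        # Toy example: find a point that is nonnegative (set C1) and lies on
        # the hyperplane sum(x) = 1 (set C2).
        proj_nonneg = lambda x: np.maximum(x, 0.0)
        proj_plane = lambda x: x + (1.0 - x.sum()) / x.size
        x = pocs(np.array([3.0, -2.0, 0.5]), [proj_nonneg, proj_plane])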

  17. Loss of the integral nuclear envelope protein SUN1 induces alteration of nucleoli

    PubMed Central

    Matsumoto, Ayaka; Sakamoto, Chiyomi; Matsumori, Haruka; Katahira, Jun; Yasuda, Yoko; Yoshidome, Katsuhide; Tsujimoto, Masahiko; Goldberg, Ilya G; Matsuura, Nariaki; Nakao, Mitsuyoshi; Saitoh, Noriko; Hieda, Miki

    2016-01-01

    A supervised machine learning algorithm suited to image classification and similarity analysis is based on multiple discriminative morphological features that are automatically assembled during the learning process. Such an algorithm is suitable for population-based analysis of images of biological materials, which are generally complex and heterogeneous. Here we used the algorithm wndchrm to quantify the effects of the loss of nuclear envelope components on nucleolar morphology in a human mammary epithelial cell line. The linker of nucleoskeleton and cytoskeleton (LINC) complex, an assembly of nuclear envelope proteins comprising mainly members of the SUN and nesprin families, connects the nuclear lamina and cytoskeletal filaments. The components of the LINC complex are markedly deficient in breast cancer tissues. We found that a reduction in the levels of SUN1, SUN2, and lamin A/C led to significant changes in morphologies that were computationally classified using wndchrm with approximately 100% accuracy. In particular, depletion of SUN1 caused nucleolar hypertrophy and reduced rRNA synthesis. Further, wndchrm revealed a consistent negative correlation between SUN1 expression and the size of nucleoli in human breast cancer tissues. Our unbiased morphological quantitation strategies using wndchrm revealed an unexpected link between the components of the LINC complex and the morphologies of nucleoli that serves as an indicator of the malignant phenotype of breast cancer cells. PMID:26962703

  18. A decentralized linear quadratic control design method for flexible structures

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1990-01-01

    A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or the mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis (SCS) is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the SCS method is that it relieves the computational burden associated with dimensionality. In addition, the SCS design scheme is a highly adaptable controller synthesis method for structures with varying configuration, or varying mass and stiffness properties.
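
    The per-substructure design step reduces to a standard LQ computation once each reduced substructure model is Riccati-solvable; a minimal Python sketch follows (illustrative matrices, scipy's Riccati solver), not the paper's full procedure.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lq_gain(A, B, Q, R):
            """LQ state-feedback gain for one reduced substructure model:
            solves the continuous-time algebraic Riccati equation and returns
            K such that u = -K x minimizes the quadratic cost."""
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)

        # Hypothetical single vibration mode of one substructure; each
        # substructure (A_i, B_i) gets its own design, and the resulting
        # subcontrollers are later assembled into a global controller.
        A = np.array([[0.0, 1.0], [-4.0, -0.2]])
        B = np.array([[0.0], [1.0]])
        K = lq_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))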

  19. Synthesis of Feedback Controller for Chaotic Systems by Means of Evolutionary Techniques

    NASA Astrophysics Data System (ADS)

    Senkerik, Roman; Oplatkova, Zuzana; Zelinka, Ivan; Davendra, Donald; Jasek, Roman

    2011-06-01

    This research deals with the synthesis of a control law for three selected discrete chaotic systems by means of analytic programming. The novelty of the approach is that a tool for symbolic regression (analytic programming) is used for this kind of difficult problem. The paper describes analytic programming, the chaotic systems, and the cost function used. For the experiments, the Self-Organizing Migrating Algorithm (SOMA) was used together with analytic programming.

  1. Digital Waveguide Architectures for Virtual Musical Instruments

    NASA Astrophysics Data System (ADS)

    Smith, Julius O.

    Digital sound synthesis has become a staple of modern music studios, video games, personal computers, and hand-held devices. As processing power has increased over the years, sound synthesis implementations have evolved from dedicated chip sets, to single-chip solutions, and ultimately to software implementations within processors used primarily for other tasks (such as for graphics or general purpose computing). With the cost of implementation dropping closer and closer to zero, there is increasing room for higher quality algorithms.

  2. The MATCHIT Automaton: Exploiting Compartmentalization for the Synthesis of Branched Polymers

    PubMed Central

    Weyland, Mathias S.; Fellermann, Harold; Hadorn, Maik; Sorek, Daniel; Lancet, Doron; Rasmussen, Steen; Füchslin, Rudolf M.

    2013-01-01

    We propose an automaton, a theoretical framework that demonstrates how to improve the yield of branched chemical polymer synthesis. This is achieved by separating substeps of the path of synthesis into compartments. We use chemical containers (chemtainers) to carry the substances through a sequence of fixed successive compartments. We describe the automaton in mathematical terms and show how it can be configured automatically in order to synthesize a given branched polymer target. The algorithm we present finds an optimal path of synthesis in linear time. We discuss how the automaton models compartmentalized structures found in cells, such as the endoplasmic reticulum and the Golgi apparatus, and we show how this compartmentalization can be exploited for the synthesis of branched polymers such as oligosaccharides. Lastly, we show examples of artificial branched polymers and discuss how the automaton can be configured to synthesize them with maximal yield. PMID:24489601

  3. Wave field synthesis, adaptive wave field synthesis and ambisonics using decentralized transformed control: Potential applications to sound field reproduction and active noise control

    NASA Astrophysics Data System (ADS)

    Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw

    2005-09-01

    Sound field reproduction finds applications in listening to prerecorded music or in synthesizing virtual acoustics. The objective is to recreate a sound field in a listening environment. Wave field synthesis (WFS) is a known open-loop technology which assumes that the reproduction environment is anechoic. Classical WFS therefore does not perform well in a real reproduction space such as a room. Previous work has suggested that it is physically possible to reproduce a progressive wave field in an in-room situation using active control approaches. In this paper, a formulation of adaptive wave field synthesis (AWFS) introduces practical possibilities for adaptive sound field reproduction combining WFS and active control (with penalization of departure from WFS) using a limited number of error sensors. AWFS includes WFS and closed-loop "Ambisonics" as limiting cases. This leads to the modification of the multichannel filtered-reference least-mean-square (FXLMS) and filtered-error LMS (FELMS) adaptive algorithms for AWFS. Decentralization of AWFS for sound field reproduction is introduced on the basis of the sources' and sensors' radiation modes. Such decoupling may lead to decentralized control of source strength distributions and may reduce the computational burden of the FXLMS and FELMS algorithms used for AWFS. [Work funded by NSERC, NATEQ, Université de Sherbrooke and VRQ.]

  4. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
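
    For readers unfamiliar with proximal operators, the canonical closed-form example evaluated by solvers of this family is the prox of the l1 norm (soft thresholding); a minimal Python sketch follows. This is a generic illustration, not code from the GASPACHO framework.

        import numpy as np

        def prox_l1(v, lam):
            """Proximal operator of lam * ||x||_1 (soft thresholding):
            prox(v) = argmin_x lam*||x||_1 + 0.5*||x - v||^2, the kind of
            per-term operator that solvers such as SDMM and PPXA evaluate."""
            return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

        print(prox_l1(np.array([-3.0, 0.2, 1.5]), lam=1.0))  # [-2.  0.  0.5]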

  5. AMICAL: An aid for architectural synthesis and exploration of control circuits

    NASA Astrophysics Data System (ADS)

    Park, Inhag

    AMICAL is an architectural synthesis system for control-flow dominated circuits. It accepts a behavioral finite state machine specification, on which scheduling and register allocation are performed, and produces an abstract architecture specification that may feed existing silicon compilers acting at the logic and register-transfer levels. AMICAL consists of five main functions allowing automatic, interactive, and manual synthesis, as well as combinations of these methods. These functions are a synthesizer, a graphics editor, a verifier, an evaluator, and a documentor. Automatic synthesis is achieved by algorithms that allocate both functional units, stored in an expandable user-defined library, and connections. AMICAL also allows the designer to interrupt the synthesis process at any stage and make interactive modifications via a specially designed graphics editor. The user's modifications are verified and evaluated to ensure that no design rules are broken and that any imposed constraints are still met. A documentor provides the designer with status and feedback reports from the synthesis process.

  6. Parallel ICA and its hardware implementation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Peterson, Gregory D.

    2004-04-01

    Advances in hyperspectral imaging have dramatically boosted remote sensing applications by providing abundant information using hundreds of contiguous spectral bands. However, the high volume of information also results in an excessive computation burden. Since most materials have specific characteristics only at certain bands, much of this information is redundant. This property of hyperspectral images has motivated many researchers to study various dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), wavelet transform, and Independent Component Analysis (ICA), where ICA is one of the most popular techniques. It searches for a linear or nonlinear transformation which minimizes the statistical dependence between spectral bands. Through this process, ICA can eliminate superfluous information while retaining practical information, given only the observations of hyperspectral images. One hurdle of applying ICA in hyperspectral image (HSI) analysis, however, is its long computation time, especially for high-volume hyperspectral data sets. Even the most efficient method, FastICA, is a very time-consuming process. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of the weight matrix into sub-processes which can be conducted in parallel on multiple processors. The decorrelation process is decomposed into internal decorrelation and external decorrelation, which perform weight vector decorrelations within individual processors and between cooperative processors, respectively. In order to further improve the performance of pICA, we seek hardware solutions in the implementation of pICA. Until now, there have been very few hardware designs for ICA-related processes due to the complicated and iterative computation. This paper discusses capacity limitations of FPGA implementations for pICA in HSI analysis. An Application-Specific Integrated Circuit (ASIC) synthesis is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and targets the TSMC 0.18 micron process. During the synthesis procedure, three ICA-related reconfigurable components are developed for reuse and retargeting purposes. Preliminary results show that the standard-height-cell-based ASIC synthesis provides an effective solution for pICA and ICA-related processes in HSI analysis.
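
    As context for the decomposition described above, a minimal symmetric FastICA iteration on whitened data is sketched below in Python; the symmetric decorrelation line is the step that pICA splits into internal and external decorrelations. Names and defaults are illustrative.

        import numpy as np

        def fastica_symmetric(X, n_components, iters=200, seed=0):
            """Minimal symmetric FastICA sketch (tanh nonlinearity) for
            whitened data X of shape (channels, samples).  Each row update is
            the standard fixed-point iteration w <- E[x g(w'x)] - E[g'(w'x)] w,
            followed by symmetric decorrelation W <- (W W')^{-1/2} W."""
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((n_components, X.shape[0]))
            for _ in range(iters):
                Y = W @ X
                g, g_prime = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
                W = (g @ X.T) / X.shape[1] - g_prime.mean(axis=1)[:, None] * W
                # symmetric decorrelation via eigendecomposition of W W'
                vals, vecs = np.linalg.eigh(W @ W.T)
                W = vecs @ np.diag(vals ** -0.5) @ vecs.T @ W
            return W  # unmixing matrix: sources ~ W @ X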

  7. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, among them an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.

  8. Cavity parameters identification for TESLA control system development

    NASA Astrophysics Data System (ADS)

    Czarski, Tomasz; Pozniak, Krysztof T.; Romaniuk, Ryszard S.; Simrock, Stefan

    2005-08-01

    The aim of control system development for the TESLA cavity is more efficient stabilization of the pulsed, accelerating EM field inside the resonator. Cavity parameter identification is an essential task for a comprehensive control algorithm. A TESLA cavity simulator has been successfully implemented using high-speed FPGA technology. The electromechanical model of the cavity resonator includes Lorentz force detuning and beam loading. The parameter identification is based on the electrical model of the cavity. The model is represented by a state-space equation for the envelope of the cavity voltage driven by the generator current and beam loading. For a given model structure, an over-determined matrix equation is created covering a sufficiently long measurement range, with the solution obtained by the least-squares method. A low-degree polynomial approximation is applied to estimate the time-varying cavity detuning during the pulse. The distortion of the measurement channel is considered, leading to the external cavity model seen by the controller. The comprehensive algorithm for cavity parameter identification was implemented in the Matlab system with different modes of operation. Experimental results are presented for different cavity operational conditions. These considerations have led to the synthesis of an efficient algorithm for the cavity control system, intended for potential FPGA implementation.
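
    The identification step can be illustrated with a one-mode, constant-detuning simplification of the model described above; a minimal Python sketch of the over-determined least-squares fit follows (the names and the reduction to a single complex pole are assumptions for illustration).

        import numpy as np

        def identify_envelope_model(v, u):
            """Least-squares identification of a discretized first-order
            envelope model v[n+1] = a*v[n] + b*u[n] from measured complex
            cavity-voltage envelopes v and drive currents u.  Stacking many
            samples gives an over-determined system; the estimated pole `a`
            encodes the cavity half-bandwidth and detuning."""
            M = np.column_stack([v[:-1], u[:-1]])   # over-determined matrix
            theta, *_ = np.linalg.lstsq(M, v[1:], rcond=None)
            a, b = theta
            return a, b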

  9. Performance comparisons on spatial lattice algorithm and direct matrix inverse method with application to adaptive arrays processing

    NASA Technical Reports Server (NTRS)

    An, S. H.; Yao, K.

    1986-01-01

    The lattice algorithm has been employed in numerous adaptive filtering applications such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper its application to adaptive-array processing is discussed. The advantages are a fast convergence rate as well as computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.

  10. Algebraic Algorithm Design and Local Search

    DTIC Science & Technology

    1996-12-01

    method for performing algorithm design that is more purely algebraic than that of KIDS. This method is then applied to local search. Local search is a...synthesis. Our approach was to follow KIDS in spirit, but to adopt a pure algebraic formalism, supported by Kestrel’s SPECWARE environment (79), that...design was developed that is more purely algebraic than that of KIDS. This method was then applied to local search. A general theory of local search was

  11. Learning Grasp Context Distinctions that Generalize

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Grupen, Roderic A.; Fagg, Andrew H.

    2006-01-01

    Control-based approaches to grasp synthesis create grasping behavior by sequencing and combining control primitives. In the absence of any other structure, these approaches must evaluate a large number of feasible control sequences as a function of object shape, object pose, and task. This work explores a new approach to grasp synthesis that limits consideration to variations on a generalized localize-reach-grasp control policy. A new learning algorithm, known as schema-structured learning, is used to learn which instantiations of the generalized policy are most likely to lead to a successful grasp in different problem contexts. Two experiments are described in which Dexter, a bimanual upper-torso robot, learns to select an appropriate grasp strategy as a function of object eccentricity and orientation. In addition, it is shown that grasp skills learned in this way can generalize to new objects. Results are presented showing that after learning how to grasp a small, representative set of objects, the robot's performance quantitatively improves for similar objects that it has not experienced before.

  12. Approximation, abstraction and decomposition in search and optimization

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1992-01-01

    In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs, among them filtering functions, which remove portions of a search space from consideration. Another portion of my research has focused on automatic synthesis of hierarchic algorithms for solving CSPs; I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. A third portion of my research has focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research; decomposition is especially important in the design of complex physical shapes such as yacht hulls. The final portion of my research has focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.

  13. Five years of synthesis of solar spectral irradiance from SDID/SISA and SDO/AIA images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontenla, J. M.; Codrescu, M.; Fedrizzi, M.

    In this paper we describe the synthetic solar spectral irradiance (SSI) calculated from 2010 to 2015 using data from the Atmospheric Imaging Assembly (AIA) instrument on board the Solar Dynamics Observatory spacecraft. We used the algorithms for solar disk image decomposition (SDID) and the spectral irradiance synthesis algorithm (SISA) that we had developed over several years. The SDID algorithm decomposes the images of the solar disk into areas occupied by nine types of chromospheric and five types of coronal physical structures. With this decomposition and a set of pre-computed angle-dependent spectra for each of the features, the SISA algorithm is used to calculate the SSI. We discuss the application of the basic SDID/SISA algorithm to a subset of the AIA images and the observed variation, over the 2010-2015 period, of the relative areas of the solar disk covered by the various solar surface features. Our results consist of the SSI and total solar irradiance variations over the 2010-2015 period. The SSI results span the soft X-ray, ultraviolet, visible, infrared, and far-infrared ranges and can be used for studies of the solar radiative forcing of the Earth's atmosphere. These SSI estimates were used to drive a thermosphere-ionosphere physical simulation model. Predictions of neutral mass density at low-Earth-orbit altitudes in the thermosphere and of peak plasma densities at mid-latitudes are in reasonable agreement with the observations. The correlation between the simulation results and the observations was consistently better when fluxes computed by the SDID/SISA procedures were used.

  14. ACCESS 1: Approximation Concepts Code for Efficient Structural Synthesis program documentation and user's guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1976-01-01

    The program documentation and user's guide for the ACCESS-1 computer program are presented. ACCESS-1 is a research-oriented program which implements a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis, and general mathematical programming algorithms are applied in the design optimization procedure. Implementation of the computer program, preparation of input data, and the basic program structure are described, and three illustrative examples are given.

  15. Virtual screening of inorganic materials synthesis parameters with deep learning

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Huang, Kevin; Jegelka, Stefanie; Olivetti, Elsa

    2017-12-01

    Virtual materials screening approaches have proliferated in the past decade, driven by rapid advances in first-principles computational techniques, and machine-learning algorithms. By comparison, computationally driven materials synthesis screening is still in its infancy, and is mired by the challenges of data sparsity and data scarcity: Synthesis routes exist in a sparse, high-dimensional parameter space that is difficult to optimize over directly, and, for some materials of interest, only scarce volumes of literature-reported syntheses are available. In this article, we present a framework for suggesting quantitative synthesis parameters and potential driving factors for synthesis outcomes. We use a variational autoencoder to compress sparse synthesis representations into a lower dimensional space, which is found to improve the performance of machine-learning tasks. To realize this screening framework even in cases where there are few literature data, we devise a novel data augmentation methodology that incorporates literature synthesis data from related materials systems. We apply this variational autoencoder framework to generate potential SrTiO3 synthesis parameter sets, propose driving factors for brookite TiO2 formation, and identify correlations between alkali-ion intercalation and MnO2 polymorph selection.

  16. Volume illustration of muscle from diffusion tensor images.

    PubMed

    Chen, Wei; Yan, Zhicheng; Zhang, Song; Crow, John Allen; Ebert, David S; McLaughlin, Ronald M; Mullins, Katie B; Cooper, Robert; Ding, Zi'ang; Liao, Jun

    2009-01-01

    Medical illustration has demonstrated its effectiveness in depicting salient anatomical features while hiding irrelevant details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging (DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new solid texture synthesis algorithm based on two-dimensional examples that builds a solid texture constrained by the guidance vector field. Illustrating the constructed scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig leg), demonstrating plausible illustration and expressiveness.

  17. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when applied to control-law synthesis for systems with densely packed modes, where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
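
    In the same spirit (Padé-based matrix exponentials instead of diagonalization), the finite-time quadratic index can be evaluated exactly with Van Loan's block-matrix construction; a minimal Python sketch follows. This is a textbook identity shown for illustration, not the paper's full gradient computation.

        import numpy as np
        from scipy.linalg import expm

        def finite_time_cost_gramian(A, Q, T):
            """Exact finite-time Gramian W(T) = int_0^T expm(A'.t) Q expm(A.t) dt
            via Van Loan's block-matrix trick.  The closed-loop matrix A never
            has to be diagonalized; only a Pade-based matrix exponential
            (scipy's expm) is needed.  The quadratic index for initial state
            x0 is then J = x0' W(T) x0."""
            n = A.shape[0]
            M = np.block([[-A.T, Q], [np.zeros((n, n)), A]])
            E = expm(M * T)
            F2, G1 = E[n:, n:], E[:n, n:]   # F2 = expm(A*T), G1 holds the integral
            return F2.T @ G1

        A = np.array([[0.0, 1.0], [-2.0, -0.1]])   # lightly damped mode
        W = finite_time_cost_gramian(A, np.eye(2), T=5.0)
        x0 = np.array([1.0, 0.0])
        J = x0 @ W @ x0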

  18. Assessment of Cardiovascular Disease Risk in South Asian Populations

    PubMed Central

    Hussain, S. Monira; Oldenburg, Brian; Zoungas, Sophia; Tonkin, Andrew M.

    2013-01-01

    Although South Asian populations have one of the highest cardiovascular disease (CVD) burdens in the world, their patterns of individual CVD risk factors have not been fully studied. None of the available algorithms/scores to assess CVD risk have originated from these populations. To explore the relevance of CVD risk scores for these populations, a literature search and qualitative synthesis of available evidence were performed. South Asians usually have higher levels of both "classical" and nontraditional CVD risk factors and experience these at a younger age. There are marked variations in risk profiles between South Asian populations. More than 100 risk algorithms are currently available, with varying risk factors. However, no available algorithm has included all important risk factors that underlie CVD in these populations. The future challenge is either to appropriately calibrate current risk algorithms or, ideally, to develop new risk algorithms that include variables that provide an accurate estimate of CVD risk. PMID:24163770

  19. Advanced rotorcraft control using parameter optimization

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters is presented. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when applied to control-law synthesis for systems with densely packed modes, where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm has been included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  20. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.

  1. Implementation of 3-D isoparametric finite elements on supercomputer for the formulation of recursive dynamical equations of multi-body systems

    NASA Technical Reports Server (NTRS)

    Shareef, N. H.; Amirouche, F. M. L.

    1991-01-01

    A computational algorithmic procedure is developed and implemented for the dynamic analysis of a multibody system with rigid/flexible interconnected bodies. The algorithm takes into consideration the large rotations/translations associated with the rigid-body degrees of freedom and the small elastic deformations associated with the flexibility of the bodies in the system. Versatile three-dimensional isoparametric brick elements are employed for modeling the geometric configurations of the bodies. The formulation of the recursive dynamical equations of motion is based on the recursive Kane's equations, strain energy concepts, and the techniques of component mode synthesis. In order to minimize CPU-intensive matrix multiplication operations and speed up the execution process, the concept of indexed arrays is utilized in the formulation of the equations of motion. A spin-up maneuver of a space robot with three flexible links carrying a solar panel is used as an illustrative example.

  2. Tensor methodology and computational geometry in direct computational experiments in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia

    2017-07-01

    The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is naturally embedded in the finite-element operations used in the construction of numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is considered as the elementary computational object. The tensor representation of computational objects yields a straightforward, linear, and unique approximation of elementary volumes and of the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, which is an important condition for increasing the efficiency of the numerical procedures developed, with their natural parallelism. It is shown that the advantages of the proposed approach are achieved, among other means, by representing the motion of large particles of a continuous medium in dual coordinate systems and by carrying out computing operations in the projections of these two coordinate systems, with direct and inverse transformations. Thus, a new method for the mathematical representation and synthesis of computational experiments based on the large-particle method is proposed.

  3. Evaluation of Airborne L-Band Multi-Baseline Pol-InSAR for DEM Extraction Beneath Forest Canopy

    NASA Astrophysics Data System (ADS)

    Li, W. M.; Chen, E. X.; Li, Z. Y.; Jiang, C.; Jia, Y.

    2018-04-01

    A DEM beneath the forest canopy is difficult to extract with optical stereo pairs, InSAR, and Pol-InSAR techniques. Tomographic SAR (TomoSAR), based on different penetration depths and view angles, can resolve both the vertical structure and the ground structure. This paper aims at evaluating the possibility of using TomoSAR for underlying DEM extraction. Airborne L-band repeat-pass Pol-InSAR data collected in the BioSAR 2008 campaign were applied to reconstruct the 3D structure of the forest. A sum-of-Kronecker-products decomposition and an algebraic synthesis algorithm were used to extract the ground structure, and a phase-linking algorithm was applied to estimate the ground phase. The Goldstein branch-cut approach was then used to unwrap the phases and estimate the underlying DEM. The average difference between the extracted underlying DEM and the lidar DEM is about 3.39 m in our test site. The result indicates that underlying DEM estimation is possible with the airborne L-band repeat-pass TomoSAR technique.

  4. See-through Detection and 3D Reconstruction Using Terahertz Leaky-Wave Radar Based on Sparse Signal Processing

    NASA Astrophysics Data System (ADS)

    Murata, Koji; Murano, Kosuke; Watanabe, Issei; Kasamatsu, Akifumi; Tanaka, Toshiyuki; Monnai, Yasuaki

    2018-02-01

    We experimentally demonstrate see-through detection and 3D reconstruction using terahertz leaky-wave radar based on sparse signal processing. The application of terahertz waves to radar has received increasing attention in recent years for its potential for high-resolution, see-through detection. Among others, the implementation using a leaky-wave antenna is promising for compact system integration with beam-steering capability based on frequency sweep. However, the use of a leaky-wave antenna poses a challenge for signal processing. Since a leaky-wave antenna combines the entire signal captured by each part of the aperture into a single output, conventional array signal processing, which assumes access to each antenna element, is not applicable. In this paper, we apply an iterative recovery algorithm, CoSaMP, to signals acquired with a terahertz leaky-wave radar for clutter mitigation and aperture synthesis. We first demonstrate see-through detection of the target location even when the radar is covered with an opaque screen and the radar signal is therefore disturbed by clutter. Furthermore, leveraging the robustness of the algorithm against noise, we also demonstrate 3D reconstruction of distributed targets by synthesizing signals collected from different orientations. The proposed approach will contribute to the smart implementation of terahertz leaky-wave radar.
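
    For reference, a textbook CoSaMP iteration (Needell and Tropp) is sketched below in Python; it is the generic sparse-recovery step only, not the paper's radar-specific measurement model or processing chain.

        import numpy as np

        def cosamp(Phi, y, s, iters=30):
            """CoSaMP for y ~ Phi @ x with x approximately s-sparse.  Each
            iteration merges the 2s strongest proxy components with the
            current support, solves least squares on that support, and
            prunes back to the s largest coefficients."""
            m, n = Phi.shape
            x = np.zeros(n)
            residual = y.copy()
            for _ in range(iters):
                proxy = Phi.T @ residual
                omega = np.argsort(np.abs(proxy))[-2 * s:]       # 2s largest
                support = np.union1d(omega, np.flatnonzero(x))
                b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                x[:] = 0.0
                keep = np.argsort(np.abs(b))[-s:]                # prune to s
                x[support[keep]] = b[keep]
                residual = y - Phi @ x
                if np.linalg.norm(residual) < 1e-10:
                    break
            return x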

  5. Airways, vasculature, and interstitial tissue: anatomically informed computational modeling of human lungs for virtual clinical trials

    NASA Astrophysics Data System (ADS)

    Abadi, Ehsan; Sturgeon, Gregory M.; Agasthya, Greeshma; Harrawood, Brian; Hoeschen, Christoph; Kapadia, Anuj; Segars, W. P.; Samei, Ehsan

    2017-03-01

    This study aimed to model virtual human lung phantoms including both non-parenchymal and parenchymal structures. Initial branches of the non-parenchymal structures (airways, arteries, and veins) were segmented from anatomical data in each lobe separately. A volume-filling branching algorithm was utilized to grow the higher generations of the airways and vessels to the level of terminal branches. The diameters of the airways and vessels were estimated using established relationships between flow rates and diameters. The parenchyma was modeled based on secondary pulmonary lobule units. Polyhedral shapes with variable sizes were modeled, and the borders were assigned to interlobular septa. A heterogeneous background was added inside these units using a non-parametric texture synthesis algorithm which was informed by a high-resolution CT lung specimen dataset. A voxelized based CT simulator was developed to create synthetic helical CT images of the phantom with different pitch values. Results showed the progressive degradation in depiction of lung details with increased pitch. Overall, the enhanced lung models combined with the XCAT phantoms prove to provide a powerful toolset to perform virtual clinical trials in the context of thoracic imaging. Such trials, not practical using clinical datasets or simplistic phantoms, can quantitatively evaluate and optimize advanced imaging techniques towards patient-based care.

  6. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point-scatterer-based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include the use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point-scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.

  7. De novo design of molecular architectures by evolutionary assembly of drug-derived building blocks.

    PubMed

    Schneider, G; Lee, M L; Stahl, M; Schneider, P

    2000-07-01

    An evolutionary algorithm was developed for fragment-based de novo design of molecules (TOPAS, TOPology-Assigning System). This stochastic method aims at generating a novel molecular structure mimicking a template structure. A set of approximately 25,000 fragment structures serves as the building block supply, which were obtained by a straightforward fragmentation procedure applied to 36,000 known drugs. Eleven reaction schemes were implemented for both fragmentation and building block assembly. This combination of drug-derived building blocks and a restricted set of reaction schemes proved to be a key for the automatic development of novel, synthetically tractable structures. In a cyclic optimization process, molecular architectures were generated from a parent structure by virtual synthesis, and the best structure of a generation was selected as the parent for the subsequent TOPAS cycle. Similarity measures were used to define 'fitness', based on 2D-structural similarity or topological pharmacophore distance between the template molecule and the variants. The concept of varying library 'diversity' during a design process was consequently implemented by using adaptive variant distributions. The efficiency of the design algorithm was demonstrated for the de novo construction of potential thrombin inhibitors mimicking peptide and non-peptide template structures.

  8. Adaptive-gain fast super-twisting sliding mode fault tolerant control for a reusable launch vehicle in reentry phase.

    PubMed

    Zhang, Yao; Tang, Shengjing; Guo, Jie

    2017-11-01

    In this paper, a novel adaptive-gain fast super-twisting (AGFST) sliding mode attitude control synthesis is carried out for a reusable launch vehicle (RLV) subject to actuator faults and unknown disturbances. Based on a fast nonsingular terminal sliding mode surface (FNTSMS) and an adaptive-gain fast super-twisting algorithm, an adaptive fault-tolerant control law for attitude stabilization is derived to protect against actuator faults and unknown uncertainties. First, a second-order nonlinear control-oriented model for the RLV is established by the feedback linearization method, and on this basis a fast nonsingular terminal sliding mode (FNTSM) manifold is designed, which provides fast finite-time global convergence and avoids the singularity problem as well as the chattering phenomenon. Drawing on the merits of the standard super-twisting (ST) algorithm and a fast reaching law with adaptation, the novel AGFST algorithm is proposed for the finite-time fault-tolerant attitude control problem of the RLV without any knowledge of the bounds of the uncertainties and actuator faults. The important features of the AGFST algorithm include non-overestimation of the control gains and faster convergence than the standard ST algorithm. A formal proof of the finite-time stability of the closed-loop system is derived using the Lyapunov function technique. An estimate of the convergence time and an accurate expression for the convergence region are also provided. Finally, simulations are presented to illustrate the effectiveness and superiority of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
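
    For context, the standard fixed-gain super-twisting law that the AGFST scheme extends can be written, for sliding variable s, as

        u = -k_1 \, |s|^{1/2} \, \mathrm{sign}(s) + v, \qquad \dot{v} = -k_2 \, \mathrm{sign}(s).

    For sufficiently large constant gains k_1 and k_2, this law drives s and its derivative to zero in finite time under Lipschitz-bounded disturbances while keeping the control signal continuous; the algorithm above replaces the constant gains with adaptive ones and adds a faster reaching term.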

  9. Dynamic Task Assignment and Path Planning of Multi-AUV System Based on an Improved Self-Organizing Map and Velocity Synthesis Method in Three-Dimensional Underwater Workspace.

    PubMed

    Zhu, Daqi; Huang, Huan; Yang, S X

    2013-04-01

    For a 3-D underwater workspace with a variable ocean current, an integrated multiple autonomous underwater vehicle (AUV) dynamic task assignment and path planning algorithm is proposed by combining an improved self-organizing map (SOM) neural network and a novel velocity synthesis approach. The goal is to control a team of AUVs so that every appointed target location is reached exactly once, on the premise of workload balance and energy sufficiency, while guaranteeing the least total and individual consumption in the presence of the variable ocean current. First, the SOM neural network is developed to assign a team of AUVs to multiple target locations in a 3-D ocean environment. The working process involves the definition of the initial neural weights of the SOM network, the rule for selecting the winner, the computation of the neighborhood function, and the method for updating weights. Then, the velocity synthesis approach is applied to plan the shortest path for each AUV to visit its corresponding target in a dynamic environment in which the ocean current is variable and the targets are movable. Lastly, to demonstrate the effectiveness of the proposed approach, simulation results are given in this paper.
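
    The velocity-synthesis idea can be made concrete for a constant known current: pick the through-water velocity so that its resultant with the current points straight at the target. The Python sketch below is this standard geometric construction (it assumes the vehicle speed exceeds the current magnitude), not the paper's full planner.

        import numpy as np

        def velocity_synthesis(pos, target, current, speed):
            """Choose the AUV's through-water velocity u (|u| = speed) so
            that u + current points straight at the target.  With d the unit
            vector to the target, solving |lam*d - current| = speed for
            lam > 0 gives the required ground speed along d."""
            d = target - pos
            d = d / np.linalg.norm(d)
            a = d @ current
            lam = a + np.sqrt(a * a - current @ current + speed ** 2)
            u = lam * d - current          # commanded through-water velocity
            return u / np.linalg.norm(u)   # unit heading vector

        heading = velocity_synthesis(pos=np.array([0.0, 0.0, -10.0]),
                                     target=np.array([50.0, 20.0, -30.0]),
                                     current=np.array([0.3, -0.1, 0.0]),
                                     speed=2.0)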

  10. Depth assisted compression of full parallax light fields

    NASA Astrophysics Data System (ADS)

    Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.

    2015-03-01

    Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.

  11. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
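
    A widely used two-sample member of this family of corrections (the integrated first-order, "polyBLEP" correction) is sketched below in Python for a sawtooth; the third-order B-spline correction found superior in the paper has the same structure with a wider, higher-order polynomial.

        import numpy as np

        def polyblep(t, dt):
            """Two-sample polynomial correction around a phase discontinuity.
            `t` is the normalized phase in [0, 1), `dt` the phase increment."""
            if t < dt:                      # just after the discontinuity
                t /= dt
                return t + t - t * t - 1.0
            if t > 1.0 - dt:                # just before the discontinuity
                t = (t - 1.0) / dt
                return t * t + t + t + 1.0
            return 0.0

        def sawtooth(f0, fs, n):
            """Sawtooth with the correction subtracted at each phase wrap."""
            phase, dt, out = 0.0, f0 / fs, np.empty(n)
            for i in range(n):
                out[i] = 2.0 * phase - 1.0 - polyblep(phase, dt)
                phase += dt
                if phase >= 1.0:
                    phase -= 1.0
            return out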

  12. Closer insight into the structure of moderate to densely branched comb polymers by combining modelling and linear rheological measurements.

    PubMed

    Ahmadi, Mostafa; Pioge, Sandie; Fustin, Charles-Andre; Gohy, Jean-Francois; van Ruymbeke, Evelyne

    2017-02-07

    Synthesis of combs with well-entangled backbones and long branches with high densities has always been a challenge. Steric hindrance frequently leads to coupling of chains and structural imperfections that cannot be easily distinguished by traditional characterization methods. Research studies have therefore tried to use a combination of different methods to obtain more information on the actual microstructures. In this work, a grafting-from approach is used to synthesize poly(n-butyl acrylate) combs using atom transfer radical polymerization (ATRP) in three steps including the synthesis of a backbone, cleavage of protecting groups and growth of side branches. We have compared the linear viscoelastic properties theoretically predicted by a time marching algorithm (TMA) tube based model with the measured rheological behaviour to provide a better insight into the actual microstructure formed during synthesis. For combs with branches smaller than an entanglement, no discernible hierarchical relaxation can be distinguished, while for those with longer branches, a high frequency plateau made by entangled branches can be separated from backbone's relaxation. Dilution of the backbone, after relaxation of side branches, may accelerate the final relaxation, while extra friction can delay it especially for longer branches. Such a comparison provides a better assessment of the microstructure formed in combs.

  13. Adaptive sound field synthesis

    NASA Astrophysics Data System (ADS)

    Gauthier, Philippe-Aubert

    Sound field reproduction is a physical approach to the technological problem of sound spatialization. This thesis concerns the physical aspect of sound field reproduction. The main objective is the improvement of sound field reproduction by Wave Field Synthesis (WFS), a known approach based on free-field assumptions, through active control: the addition of reproduction-error sensors and a closed loop. A first technical chapter (Chapter 4) presents results of an objective assessment of WFS by simulations and experimental measurements. The undesirable effect of the reproduction room on the objective qualities of WFS is illustrated. A first research question is then addressed (Chapter 5), namely whether it is possible to reproduce progressive wave fields in a room within a physical active-control paradigm: this possibility is proven. The chosen technical approach, Adaptive Wave Field Synthesis (AWFS), is defined and then simulated (Chapter 6). This AWFS approach includes an original contribution to active control and to sound field reproduction: the quadratic cost function representing the minimization of the reproduction errors includes Tikhonov regularization with an a priori solution that comes from WFS. The study of AWFS by means of singular value decomposition (Chapter 7) provides an understanding of the mechanisms specific to AWFS; this is the second main original contribution of the thesis. The FXLMS (filtered-reference LMS) algorithm is modified for AWFS (Chapter 8). The decoupling of the system by singular value decomposition is illustrated in the signal-processing domain, and AWFS based on independent control of the radiation modes is simulated (Chapter 8); this constitutes the third main original contribution of the thesis. These signal-processing simulations show the effectiveness of the algorithms and the capacity of AWFS to attenuate errors caused by acoustic reflections. The ninth chapter presents experimental results for AWFS. The objective was to validate the method and to evaluate the performance of AWFS. Another promising algorithm is also tested. The results demonstrate the proper operation of AWFS and of the tested algorithms. For the reproduction of both harmonic fields and broadband fields, AWFS reduces the reproduction error of WFS and the undesirable effects caused by the reproduction environment.

  14. Self-tuning control of attitude and momentum management for the Space Station

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.; Sunkel, J. W.; Yuan, Z. Z.; Zhao, X. M.

    1992-01-01

    This paper presents a hybrid state-space self-tuning design methodology using dual-rate sampling for suboptimal digital adaptive control of attitude and momentum management for the Space Station. This new hybrid adaptive control scheme combines an on-line recursive estimation algorithm for indirectly identifying the parameters of a continuous-time system from the available fast-rate sampled data of the inputs and states and a controller synthesis algorithm for indirectly finding the slow-rate suboptimal digital controller from the designed optimal analog controller. The proposed method enables the development of digitally implementable control algorithms for the robust control of Space Station Freedom with unknown environmental disturbances and slowly time-varying dynamics.
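
    The on-line recursive estimation stage can be illustrated with a generic recursive least-squares (RLS) update. The regressor/measurement model below is a toy stand-in, not the paper's continuous-time identification scheme.

      import numpy as np

      # Minimal RLS sketch for on-line parameter estimation from sampled data:
      # theta holds the unknown parameters, P the (scaled) error covariance,
      # lam a forgetting factor.
      def rls_update(theta, P, phi, y, lam=0.99):
          # phi: regressor vector, y: new measurement
          k = P @ phi / (lam + phi @ P @ phi)      # gain vector
          theta = theta + k * (y - phi @ theta)    # innovation correction
          P = (P - np.outer(k, phi @ P)) / lam     # covariance update
          return theta, P

      # Identify y = 2*u1 - 0.5*u2 from noisy samples.
      rng = np.random.default_rng(1)
      theta, P = np.zeros(2), 1e3 * np.eye(2)
      for _ in range(200):
          phi = rng.standard_normal(2)
          y = phi @ np.array([2.0, -0.5]) + 0.01 * rng.standard_normal()
          theta, P = rls_update(theta, P, phi, y)
      print(theta)  # approaches [2.0, -0.5]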

  15. Onboard Data Processors for Planetary Ice-Penetrating Sounding Radars

    NASA Astrophysics Data System (ADS)

    Tan, I. L.; Friesenhahn, R.; Gim, Y.; Wu, X.; Jordan, R.; Wang, C.; Clark, D.; Le, M.; Hand, K. P.; Plaut, J. J.

    2011-12-01

    Among the many concerns faced by outer planetary missions, science data storage and transmission hold special significance. Such missions must contend with limited onboard storage, brief data downlink windows, and low downlink bandwidths. A potential solution to these issues lies in employing onboard data processors (OBPs) to convert raw data into products that are smaller and closely capture relevant scientific phenomena. In this paper, we present the implementation of two OBP architectures for ice-penetrating sounding radars tasked with exploring Europa and Ganymede. Our first architecture utilizes an unfocused processing algorithm extended from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS, Jordan et al. 2009). Compared to downlinking raw data, we are able to reduce data volume by approximately 100 times through OBP usage. To ensure the viability of our approach, we have implemented, simulated, and synthesized this architecture using both VHDL and Matlab models (with fixed-point and floating-point arithmetic) in conjunction with Modelsim. Creation of a VHDL model of our processor is the principal step in transitioning to actual digital hardware, whether in an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and successful simulation and synthesis strongly indicate feasibility. In addition, we examined the tradeoffs faced in the OBP between fixed-point accuracy, resource consumption, and data product fidelity. Our second architecture is based upon a focused fast back projection (FBP) algorithm that requires a modest amount of computing power and onboard memory while yielding high along-track resolution and improved slope-detection capability. We present an overview of the algorithm and details of our implementation, also in VHDL. With the appropriate tradeoffs, the use of OBPs can significantly reduce data downlink requirements without sacrificing data product fidelity. Through the development, simulation, and synthesis of two different OBP architectures, we have demonstrated the feasibility and efficacy of an OBP for planetary ice-penetrating radars.
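
    As a rough illustration of how an unfocused onboard processor achieves a reduction of the order quoted above, the sketch below coherently sums blocks of adjacent complex echoes (presumming). This is a simplified stand-in: the real MARSIS-derived algorithm also applies phase/Doppler compensation and was implemented in fixed-point VHDL.

      import numpy as np

      # Coherently sum blocks of adjacent radar echoes; a block length of 100
      # alone gives a ~100x data-volume reduction.
      def presum(echoes, block=100):
          n_pulses, n_samples = echoes.shape
          n_blocks = n_pulses // block
          trimmed = echoes[: n_blocks * block].reshape(n_blocks, block, n_samples)
          return trimmed.sum(axis=1)  # coherent sum also improves SNR for slow targets

      rng = np.random.default_rng(2)
      raw = rng.standard_normal((1000, 512)) + 1j * rng.standard_normal((1000, 512))
      reduced = presum(raw)
      print(raw.shape, "->", reduced.shape)  # (1000, 512) -> (10, 512)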

  16. [Study of algorithms to identify schizophrenia in the SNIIRAM database conducted by the REDSIAM network].

    PubMed

    Quantin, C; Collin, C; Frérot, M; Besson, J; Cottenet, J; Corneloup, M; Soudry-Faure, A; Mariet, A-S; Roussot, A

    2017-10-01

    The aim of the REDSIAM network is to foster communication between users of French medico-administrative databases and to validate and promote analysis methods suitable for the data. Within this network, the working group "Mental and behavioral disorders" took an interest in algorithms to identify adult schizophrenia in the SNIIRAM database and inventoried identification criteria for patients with schizophrenia in these databases. The methodology was based on interviews with nine experts in schizophrenia concerning the procedures they use to identify patients with schizophrenia disorders in databases. The interviews were based on a questionnaire and conducted by telephone. The synthesis of the interviews showed that the SNIIRAM contains various tables that allow coders to identify patients suffering from schizophrenia: chronic disease status, drugs, and hospitalizations. Taken separately, these criteria are not sufficient to recognize patients with schizophrenia; an algorithm should combine all of them. Apparently, only one-third of people living with schizophrenia benefit from the long-standing disease status. Not all patients are hospitalized, and the coding of diagnoses during hospitalization, notably for short stays in medicine, surgery, or obstetrics departments, is not exhaustive. As for treatment with antipsychotics, it is not specific enough, as such treatments are also prescribed to patients with bipolar disorders or even other disorders. It therefore seems appropriate to combine these complementary criteria, while keeping in mind outpatient care (every year 80,000 patients are seen exclusively in an outpatient setting), even if these data are difficult to link with other information. Finally, the experts proposed three selection algorithms for patients with schizophrenia. Patients with schizophrenia can be identified relatively accurately using SNIIRAM data. Different combinations of the selected criteria must be used depending on the objectives, and they must be applied over an appropriate period of time.
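
    A minimal sketch of the kind of multi-criteria flagging the experts recommend is shown below. Every field name and threshold is hypothetical, purely for illustration; the three algorithms actually proposed in the paper are not reproduced here.

      # Illustrative sketch only: combining the three complementary SNIIRAM
      # criteria discussed above. The field names (ald_schizophrenia,
      # icd10_codes, antipsychotic_deliveries) are hypothetical.
      def flag_schizophrenia(patient):
          chronic = patient.get("ald_schizophrenia", False)       # chronic disease status
          hosp = any(c.startswith("F20") for c in patient.get("icd10_codes", []))
          drugs = patient.get("antipsychotic_deliveries", 0) >= 3  # repeated dispensing
          # No single criterion suffices; require at least two of the three.
          return sum([chronic, hosp, drugs]) >= 2

      print(flag_schizophrenia({"icd10_codes": ["F20.0"], "antipsychotic_deliveries": 4}))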

  17. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes' schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.

  18. Materials Discovery | Photovoltaic Research | NREL

    Science.gov Websites

    Fragmentary website excerpt: the Center for Next Generation of Materials by Design (CNGMD) develops specialized analysis algorithms, incorporates metastable materials into predictive design, and develops theory to guide materials synthesis, addressing challenges of design accuracy and relevance, metastability, and synthesizability in computational materials design.

  19. Synthesis of coupled resonator optical waveguides by cavity aggregation.

    PubMed

    Muñoz, Pascual; Doménech, José David; Capmany, José

    2010-01-18

    In this paper, the layer aggregation method is applied to coupled resonator optical waveguides. Starting from the frequency transfer function, the method yields the coupling constants between the resonators. The convergence of the algorithm developed is examined and the related parameters discussed.

  20. Motion Planning and Synthesis of Human-Like Characters in Constrained Environments

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjun; Pan, Jia; Manocha, Dinesh

    We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner, and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.

  1. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.

    PubMed

    Jung, Sang-Kyu; McDonald, Karen

    2011-08-16

    Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.

  2. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization

    PubMed Central

    2011-01-01

    Background Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. Results The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Conclusion Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net. PMID:21846353

  3. MinGenome: An In Silico Top-Down Approach for the Synthesis of Minimized Genomes.

    PubMed

    Wang, Lin; Maranas, Costas D

    2018-02-16

    Genome-minimized strains offer advantages as production chassis by reducing transcriptional cost, eliminating competing functions, and limiting unwanted regulatory interactions. Existing approaches for identifying stretches of DNA to remove are largely ad hoc, based on information about presumably dispensable regions from experimentally determined nonessential genes and comparative genomics. Here we introduce MinGenome, a versatile genome reduction algorithm that implements a mixed-integer linear programming (MILP) formulation to identify, in descending order of size, all dispensable contiguous sequences whose removal does not affect the organism's growth or other desirable traits. Known essential genes, or genes whose loss causes significant fitness or performance penalties, can be flagged and their deletion prohibited. MinGenome also preserves needed transcription factors and promoter regions, ensuring that retained genes will be properly transcribed, while avoiding the simultaneous deletion of synthetic lethal pairs. The potential benefit of removing even larger contiguous stretches of DNA, if only one or two essential genes (to be reinserted elsewhere) lie within the deleted sequence, is also explored. We applied the algorithm to design a minimized E. coli strain and found that we were able to recapitulate the long deletions identified in previous experimental studies and to discover alternative combinations of deletions that have not yet been explored in vivo.
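
    As a simplified stand-in for the MILP (which additionally handles transcription factors, promoters, and synthetic lethal pairs), the sketch below greedily accepts dispensable intervals in descending size order while rejecting overlaps and intervals containing essential genes. The coordinates are toy data.

      # Greedy interval selection as a simplified stand-in for the MinGenome MILP.
      def pick_deletions(candidates, essentials):
          # candidates: list of (start, end) dispensable stretches
          chosen = []
          for s, e in sorted(candidates, key=lambda iv: iv[1] - iv[0], reverse=True):
              if any(s <= g <= e for g in essentials):
                  continue  # would delete an essential gene
              if any(not (e < cs or ce < s) for cs, ce in chosen):
                  continue  # overlaps an already-accepted deletion
              chosen.append((s, e))
          return chosen

      print(pick_deletions([(0, 50), (40, 200), (300, 400)], essentials=[350]))
      # -> [(40, 200)]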

  4. Wide field imaging problems in radio astronomy

    NASA Astrophysics Data System (ADS)

    Cornwell, T. J.; Golap, K.; Bhatnagar, S.

    2005-03-01

    The new generation of synthesis radio telescopes now being proposed, designed, and constructed face substantial problems in making images over wide fields of view. Such observations are required either to achieve the full sensitivity limit in crowded fields or for surveys. The Square Kilometre Array (SKA Consortium, Tech. Rep., 2004), now being developed by an international consortium of 15 countries, will require advances well beyond the current state of the art. We review the theory of synthesis radio telescopes for large fields of view. We describe a new algorithm, W projection, for correcting the non-coplanar baselines aberration. This algorithm has improved performance over those previously used (typically an order of magnitude in speed). Despite the advent of W projection, the computing hardware required for SKA wide field imaging is estimated to cost up to $500M (2015 dollars). This is about half the target cost of the SKA. Reconfigurable computing is one way in which the costs can be decreased dramatically.
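
    The non-coplanar baselines aberration enters the measurement equation as an extra phase term w(sqrt(1 - l^2 - m^2) - 1). The direct (slow) imaging sketch below makes that term explicit; W projection obtains the same correction far more efficiently by convolving it out in the visibility plane.

      import numpy as np

      # Direct (slow) imaging including the w-term: each visibility V_k at
      # (u_k, v_k, w_k) contributes exp(2*pi*i*(u l + v m + w (n - 1))) with
      # n = sqrt(1 - l^2 - m^2). Dropping the w term causes the aberration.
      def dirty_image(u, v, w, vis, l_grid, m_grid):
          img = np.zeros((len(m_grid), len(l_grid)))
          for i, m in enumerate(m_grid):
              for j, l in enumerate(l_grid):
                  n = np.sqrt(max(1.0 - l * l - m * m, 0.0))
                  phase = 2j * np.pi * (u * l + v * m + w * (n - 1.0))
                  img[i, j] = np.real(np.mean(vis * np.exp(phase)))
          return img

      rng = np.random.default_rng(4)
      u, v, w = (rng.uniform(-100, 100, 50) for _ in range(3))
      # Point source at (l0, m0) = (0.05, -0.02):
      n0 = np.sqrt(1 - 0.05 ** 2 - 0.02 ** 2)
      vis = np.exp(-2j * np.pi * (u * 0.05 + v * (-0.02) + w * (n0 - 1)))
      grid = np.linspace(-0.1, 0.1, 41)
      img = dirty_image(u, v, w, vis, grid, grid)
      print(grid[np.argmax(img.max(axis=0))],
            grid[np.argmax(img.max(axis=1))])  # recovers 0.05, -0.02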

  5. Swarm intelligence for multi-objective optimization of synthesis gas production

    NASA Astrophysics Data System (ADS)

    Ganesan, T.; Vasant, P.; Elamvazuthi, I.; Ku Shaari, Ku Zilati

    2012-11-01

    In the chemical industry, the production of methanol, ammonia, hydrogen and higher hydrocarbons requires synthesis gas (or syngas). The three main syngas production methods are carbon dioxide reforming (CRM), steam reforming (SRM) and partial oxidation of methane (POM). In this work, multi-objective (MO) optimization of the combined CRM and POM process was carried out. The empirical model and the MO problem formulation for this combined process were obtained from previous works. The central objectives considered in this problem are methane conversion, carbon monoxide selectivity and the hydrogen to carbon monoxide ratio. The MO nature of the problem was tackled using the Normal Boundary Intersection (NBI) method. Two techniques, the Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO), were then applied in conjunction with the NBI method. The performance of the two algorithms and the quality of the solutions were gauged using two performance metrics. Comparative studies and analysis of the optimization results were then carried out.
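
    A bare-bones PSO sketch is given below. For brevity, a fixed weighted sum of two toy objectives stands in for one scalarized subproblem; this is a simplification, not the paper's NBI formulation.

      import numpy as np

      # Minimal particle swarm optimization of a scalar objective f over a box.
      def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          rng = np.random.default_rng(5)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, len(lo)))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_f)]
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.array([f(p) for p in x])
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              g = pbest[np.argmin(pbest_f)]
          return g, f(g)

      # Toy stand-ins for a conversion/selectivity trade-off:
      f1 = lambda z: (z[0] - 1) ** 2 + z[1] ** 2
      f2 = lambda z: z[0] ** 2 + (z[1] - 1) ** 2
      best, val = pso(lambda z: 0.5 * f1(z) + 0.5 * f2(z),
                      (np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
      print(best)  # approaches [0.5, 0.5]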

  6. Quantum dot ternary-valued full-adder: Logic synthesis by a multiobjective design optimization based on a genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klymenko, M. V.; Remacle, F., E-mail: fremacle@ulg.ac.be

    2014-10-28

    A methodology is proposed for designing a low-energy-consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single-electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The search space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error of the actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry-out output and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex ternary logic operations are directly implemented on an extremely simple device, characterized by small size and low energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.

  7. Finding fixed satellite service orbital allotments with a k-permutation algorithm

    NASA Technical Reports Server (NTRS)

    Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.

    1990-01-01

    A satellite system synthesis problem, the satellite location problem (SLP), is addressed. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the fixed satellite service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: the problem of ordering the satellites and the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, has been developed to find solutions to SLPs. Solutions to small sample problems are presented and analyzed on the basis of calculated interferences.
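
    The SLP couples an ordering problem with a location problem. The sketch below is a generic pairwise-swap local search over satellite orderings, offered only as a toy stand-in (the moves of the actual k-permutation heuristic are not described here); the interference proxy and all values are assumptions.

      import itertools
      import numpy as np

      # Score an ordering: satellites are placed at fixed longitudes in the
      # order given, and pairwise interference is modeled as demand/distance.
      def interference(order, demand, slots):
          pos = {s: slots[i] for i, s in enumerate(order)}
          return sum(demand[a][b] / abs(pos[a] - pos[b])
                     for a, b in itertools.combinations(order, 2))

      def swap_search(order, demand, slots):
          best = interference(order, demand, slots)
          improved = True
          while improved:
              improved = False
              for i, j in itertools.combinations(range(len(order)), 2):
                  order[i], order[j] = order[j], order[i]
                  score = interference(order, demand, slots)
                  if score < best:
                      best, improved = score, True
                  else:
                      order[i], order[j] = order[j], order[i]  # undo swap
          return order, best

      rng = np.random.default_rng(8)
      demand = rng.uniform(0.1, 1.0, (4, 4))   # pairwise interference weights
      slots = [10.0, 12.0, 14.0, 16.0]         # allotted longitudes (deg)
      print(swap_search([0, 1, 2, 3], demand, slots))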

  8. Enhanced Self Tuning On-Board Real-Time Model (eSTORM) for Aircraft Engine Performance Health Tracking

    NASA Technical Reports Server (NTRS)

    Volponi, Al; Simon, Donald L. (Technical Monitor)

    2008-01-01

    A key technological concept for producing reliable engine diagnostics and prognostics exploits the benefits of fusing sensor data, information, and/or processing algorithms. This report describes the development of a hybrid engine model for a propulsion gas turbine engine, which is the result of fusing two diverse modeling methodologies: a physics-based model approach and an empirical model approach. The report describes the process and methods involved in deriving and implementing a hybrid model configuration for a commercial turbofan engine. Among the intended uses for such a model is to enable real-time, on-board tracking of engine module performance changes and engine parameter synthesis for fault detection and accommodation.

  9. Cybersemiotics: a transdisciplinary framework for information studies.

    PubMed

    Brier, S

    1998-04-01

    This paper summarizes recent attempts by this author to create a transdisciplinary, non-Cartesian and non-reductionistic framework for information studies in natural, social, and technological systems. To confront, in a scientific way, the problems of modern information technology, where phenomenological man is dealing with socially constructed texts in algorithmically based digital bit-machines, we need a theoretical framework spanning from physics over biology and technological design to the phenomenological and social production of signification and meaning. I am working with such pragmatic theories as second-order cybernetics (coupled with autopoiesis theory), Lakoff's biologically oriented cognitive semantics, Peirce's triadic semiotics, and Wittgenstein's pragmatic language game theory. A coherent synthesis of these theories is what the cybersemiotic framework attempts to accomplish.

  10. Image Reconstruction in Radio Astronomy with Non-Coplanar Synthesis Arrays

    NASA Astrophysics Data System (ADS)

    Goodrick, L.

    2015-03-01

    Traditional radio astronomy imaging techniques assume that the interferometric array is coplanar with a small field of view, so that the two-dimensional Fourier relationship between brightness and visibility remains valid and the Fast Fourier Transform can be used. In practice, to acquire more accurate data, non-coplanar baseline effects need to be incorporated, as small height variations in the array plane introduce the w spatial frequency component. This component adds an additional phase shift to the incoming signals. There are two approaches to account for non-coplanar baseline effects: either the full three-dimensional brightness and visibility model can be used to reconstruct an image, or the non-coplanar effects can be removed, reducing the three-dimensional relationship to the two-dimensional one. This thesis describes and implements the w-projection and w-stacking algorithms. The aim of these algorithms is to account for the phase error introduced by non-coplanar synthesis array configurations, making the recovered visibilities more faithful to the actual brightness distribution model. This is done by reducing the 3D visibilities to a 2D visibility model. The algorithms also have the added benefit of wide-field imaging, although w-stacking supports a wider field of view at the cost of more FFT bin support. For w-projection, the w-term is accounted for in the visibility domain by convolving it out of the problem with a convolution kernel, allowing the use of the two-dimensional Fast Fourier Transform. Similarly, the w-stacking algorithm applies a phase correction to image layers in the image domain to produce an intensity model that accounts for the non-coplanar baseline effects. This project considers the KAT7 array for simulation and analysis of the limitations and advantages of both algorithms. Additionally, a variant of the Högbom CLEAN algorithm was used which employs contour trimming for flagging extended source emission. The CLEAN algorithm is an iterative two-dimensional deconvolution method that can further improve image fidelity by removing the effects of the point spread function, which can obscure source data.
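
    The Högbom CLEAN loop referred to above is short enough to sketch in full: repeatedly locate the residual peak, record a fraction of it as a component, and subtract the correspondingly shifted and scaled PSF. The contour-trimming variant used in the thesis is not reproduced here; the Gaussian PSF and point-source sky are toy data.

      import numpy as np
      from scipy.signal import fftconvolve

      # Minimal Högbom CLEAN: peak-find, subtract shifted PSF, accumulate components.
      def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=500):
          residual = dirty.copy()
          components = np.zeros_like(dirty)
          pc = np.array(np.unravel_index(np.argmax(psf), psf.shape))  # PSF peak
          for _ in range(max_iter):
              idx = np.array(np.unravel_index(np.argmax(np.abs(residual)),
                                              residual.shape))
              peak = residual[tuple(idx)]
              if abs(peak) < threshold:
                  break
              components[tuple(idx)] += gain * peak
              shift = idx - pc
              # Subtract the shifted PSF over the overlapping region.
              for a in range(residual.shape[0]):
                  for b in range(residual.shape[1]):
                      pa, pb = a - shift[0], b - shift[1]
                      if 0 <= pa < psf.shape[0] and 0 <= pb < psf.shape[1]:
                          residual[a, b] -= gain * peak * psf[pa, pb]
          return components, residual

      r2 = ((np.arange(33) - 16) ** 2)
      psf = np.exp(-0.5 * r2[:, None] / 4 - 0.5 * r2[None, :] / 4)
      true_sky = np.zeros((33, 33)); true_sky[10, 20] = 1.0
      dirty = fftconvolve(true_sky, psf, mode="same")   # toy dirty image
      comps, res = hogbom_clean(dirty, psf)
      print(np.unravel_index(np.argmax(comps), comps.shape))  # -> (10, 20)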

  11. Thin-film designs by simulated annealing

    NASA Astrophysics Data System (ADS)

    Boudet, T.; Chaton, P.; Herault, L.; Gonon, G.; Jouanet, L.; Keller, P.

    1996-11-01

    With the increasing power of computers, new methods for the synthesis of optical multilayer systems have appeared. Among these, the simulated-annealing algorithm has proved its efficiency in several fields of physics. We demonstrate its performance in the field of optical multilayer systems through several filter designs.
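
    A generic simulated-annealing loop of the kind used for such designs can be sketched as below. The quarter-wave-stack merit function is a toy stand-in for a real spectral-target merit function, and all parameter values are assumptions.

      import math, random

      # Perturb layer thicknesses, accept worse designs with Boltzmann
      # probability, and cool the temperature geometrically.
      def anneal(x0, merit, step=5.0, t0=1.0, cooling=0.995, iters=5000):
          x, fx = list(x0), merit(x0)
          best, fbest, t = list(x), fx, t0
          for _ in range(iters):
              cand = [max(1.0, xi + random.uniform(-step, step)) for xi in x]
              fc = merit(cand)
              if fc < fx or random.random() < math.exp((fx - fc) / t):
                  x, fx = cand, fc
                  if fx < fbest:
                      best, fbest = list(x), fx
              t *= cooling
          return best, fbest

      # Toy merit: drive 4 layer thicknesses (nm) toward a quarter-wave target.
      target = [137.5, 96.3, 137.5, 96.3]
      merit = lambda th: sum((a - b) ** 2 for a, b in zip(th, target))
      random.seed(6)
      print(anneal([100.0] * 4, merit))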

  12. A DNA-based semantic fusion model for remote sensing data.

    PubMed

    Sun, Heng; Weng, Jian; Yu, Guangchuang; Massawe, Richard H

    2013-01-01

    Semantic technology plays a key role in various domains, from conversation understanding to algorithm analysis. As the most efficient semantic tool, ontology can represent, process and manage widespread knowledge. Nowadays, many researchers use ontology to collect and organize the semantic information of data in order to maximize research productivity. In this paper, we first describe our work on the development of a remote sensing data ontology, with a primary focus on semantic fusion-driven research for big data. Our ontology is made up of 1,264 concepts and 2,030 semantic relationships. However, the growth of big data is straining the capacities of current semantic fusion and reasoning practices. Considering the massive parallelism of DNA strands, we propose a novel DNA-based semantic fusion model. In this model, a parallel strategy is developed to encode the semantic information in DNA for a large volume of remote sensing data. The semantic information is read in a parallel and bit-wise manner, and each individual bit is converted to a base. By doing so, a considerable amount of conversion time can be saved: the cluster-based multi-process program reduces the conversion time from 81,536 seconds to 4,937 seconds for 4.34 GB of source data files. Moreover, the size of the result file recording DNA sequences is 54.51 GB for the parallel C program, compared with 57.89 GB for the sequential Perl program, showing that our parallel method can also reduce the DNA synthesis cost. In addition, data types are encoded in our model, which is a basis for building a type system in our future DNA computer. Finally, we describe theoretically an algorithm for DNA-based semantic fusion. This algorithm enables the integration of knowledge from disparate remote sensing data sources into a consistent, accurate, and complete representation. The process depends solely on ligation reactions and screening operations instead of the ontology.
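
    The bit-to-base conversion step can be sketched as follows. The actual coding table used by the authors is not given here, so the 0 -> A, 1 -> T mapping is purely an assumption for illustration.

      # Sketch of bit-wise encoding of data into DNA bases (mapping assumed).
      BIT2BASE = {0: "A", 1: "T"}

      def bytes_to_dna(data: bytes) -> str:
          bases = []
          for byte in data:
              for k in range(7, -1, -1):          # read bit-wise, MSB first
                  bases.append(BIT2BASE[(byte >> k) & 1])
          return "".join(bases)

      print(bytes_to_dna(b"\xA5"))  # 10100101 -> TATAATAT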

  13. A DNA-Based Semantic Fusion Model for Remote Sensing Data

    PubMed Central

    Sun, Heng; Weng, Jian; Yu, Guangchuang; Massawe, Richard H.

    2013-01-01

    Semantic technology plays a key role in various domains, from conversation understanding to algorithm analysis. As the most efficient semantic tool, ontology can represent, process and manage widespread knowledge. Nowadays, many researchers use ontology to collect and organize the semantic information of data in order to maximize research productivity. In this paper, we first describe our work on the development of a remote sensing data ontology, with a primary focus on semantic fusion-driven research for big data. Our ontology is made up of 1,264 concepts and 2,030 semantic relationships. However, the growth of big data is straining the capacities of current semantic fusion and reasoning practices. Considering the massive parallelism of DNA strands, we propose a novel DNA-based semantic fusion model. In this model, a parallel strategy is developed to encode the semantic information in DNA for a large volume of remote sensing data. The semantic information is read in a parallel and bit-wise manner, and each individual bit is converted to a base. By doing so, a considerable amount of conversion time can be saved: the cluster-based multi-process program reduces the conversion time from 81,536 seconds to 4,937 seconds for 4.34 GB of source data files. Moreover, the size of the result file recording DNA sequences is 54.51 GB for the parallel C program, compared with 57.89 GB for the sequential Perl program, showing that our parallel method can also reduce the DNA synthesis cost. In addition, data types are encoded in our model, which is a basis for building a type system in our future DNA computer. Finally, we describe theoretically an algorithm for DNA-based semantic fusion. This algorithm enables the integration of knowledge from disparate remote sensing data sources into a consistent, accurate, and complete representation. The process depends solely on ligation reactions and screening operations instead of the ontology. PMID:24116207

  14. Case-based synthesis in automatic advertising creation system

    NASA Astrophysics Data System (ADS)

    Zhuang, Yueting; Pan, Yunhe

    1995-08-01

    Advertising (ads) is an important design area. Though many interactive ad-design software packages have come into commercial use, none of them supports the intelligent part of the task: automatic ad creation. The potential for this is enormous. This paper describes our current work on an automatic advertising creation system (AACS). After careful analysis of the mental behavior of a human ad designer, we conclude that a case-based approach is appropriate for modeling it. A model for AACS is given in the paper. A case in advertising is described in two parts, the creation process and the configuration of the ad picture, with detailed data structures given in the paper. Along with the case representation, we put forward an algorithm. Issues such as similarity measure computation and case adaptation are also discussed.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rongle Zhang; Jie Chang; Yuanyuan Xu

    A new kinetic model of the Fischer-Tropsch synthesis (FTS) is proposed to describe the non-Anderson-Schulz-Flory (ASF) product distribution. The model is based on the double-polymerization-monomer hypothesis, in which the surface C2* species acts as the chain-growth monomer in the light-product range, while the C1* species acts as the chain-growth monomer in the heavy-product range. A detailed kinetic model of the Langmuir-Hinshelwood-Hougen-Watson type, based on the elementary reactions, is derived for FTS and the water-gas-shift reaction. Kinetic model candidates are evaluated by minimization of multiresponse objective functions with a genetic algorithm approach. The model of the hydrocarbon product distribution is consistent with the experimental data.

  16. Modeling and Composing Scenario-Based Requirements with Aspects

    NASA Technical Reports Server (NTRS)

    Araujo, Joao; Whittle, Jon; Ki, Dae-Kyoo

    2004-01-01

    There has been significant recent interest, within the Aspect-Oriented Software Development (AOSD) community, in representing crosscutting concerns at various stages of the software lifecycle. However, most of these efforts have concentrated on the design and implementation phases. We focus in this paper on representing aspects during use case modeling. In particular, we focus on scenario-based requirements and show how to compose aspectual and non-aspectual scenarios so that they can be simulated as a whole. Non-aspectual scenarios are modeled as UML sequence diagrams. Aspectual scenarios are modeled as Interaction Pattern Specifications (IPSs). In order to simulate them, the scenarios are transformed into a set of executable state machines using an existing state machine synthesis algorithm. Previous work composed aspectual and non-aspectual scenarios at the sequence diagram level. In this paper, the composition is done at the state machine level.

  17. Biomimetic molecular design tools that learn, evolve, and adapt.

    PubMed

    Winkler, David A

    2017-01-01

    A dominant hallmark of living systems is their ability to adapt to changes in the environment by learning and evolving. Nature does this so superbly that intensive research efforts are now attempting to mimic biological processes. Initially this biomimicry involved developing synthetic methods to generate complex bioactive natural products. Recent work is attempting to understand how molecular machines operate so their principles can be copied, and learning how to employ biomimetic evolution and learning methods to solve complex problems in science, medicine and engineering. Automation, robotics, artificial intelligence, and evolutionary algorithms are now converging to generate what might broadly be called in silico-based adaptive evolution of materials. These methods are being applied to organic chemistry to systematize reactions, create synthesis robots to carry out unit operations, and to devise closed loop flow self-optimizing chemical synthesis systems. Most scientific innovations and technologies pass through the well-known "S curve", with slow beginning, an almost exponential growth in capability, and a stable applications period. Adaptive, evolving, machine learning-based molecular design and optimization methods are approaching the period of very rapid growth and their impact is already being described as potentially disruptive. This paper describes new developments in biomimetic adaptive, evolving, learning computational molecular design methods and their potential impacts in chemistry, engineering, and medicine.

  18. Biomimetic molecular design tools that learn, evolve, and adapt

    PubMed Central

    2017-01-01

    A dominant hallmark of living systems is their ability to adapt to changes in the environment by learning and evolving. Nature does this so superbly that intensive research efforts are now attempting to mimic biological processes. Initially this biomimicry involved developing synthetic methods to generate complex bioactive natural products. Recent work is attempting to understand how molecular machines operate so their principles can be copied, and learning how to employ biomimetic evolution and learning methods to solve complex problems in science, medicine and engineering. Automation, robotics, artificial intelligence, and evolutionary algorithms are now converging to generate what might broadly be called in silico-based adaptive evolution of materials. These methods are being applied to organic chemistry to systematize reactions, create synthesis robots to carry out unit operations, and to devise closed loop flow self-optimizing chemical synthesis systems. Most scientific innovations and technologies pass through the well-known “S curve”, with slow beginning, an almost exponential growth in capability, and a stable applications period. Adaptive, evolving, machine learning-based molecular design and optimization methods are approaching the period of very rapid growth and their impact is already being described as potentially disruptive. This paper describes new developments in biomimetic adaptive, evolving, learning computational molecular design methods and their potential impacts in chemistry, engineering, and medicine. PMID:28694872

  19. From non-preemptive to preemptive scheduling using synchronization synthesis.

    PubMed

    Černý, Pavol; Clarke, Edmund M; Henzinger, Thomas A; Radhakrishna, Arjun; Ryzhyk, Leonid; Samanta, Roopsha; Tarrach, Thorsten

    2017-01-01

    We present a computer-aided programming approach to concurrency. The approach allows programmers to program assuming a friendly, non-preemptive scheduler, and our synthesis procedure inserts synchronization to ensure that the final program works even with a preemptive scheduler. The correctness specification is implicit, inferred from the non-preemptive behavior. Let us consider sequences of calls that the program makes to an external interface. The specification requires that any such sequence produced under a preemptive scheduler should be included in the set of sequences produced under a non-preemptive scheduler. We guarantee that our synthesis does not introduce deadlocks and that the synchronization inserted is optimal w.r.t. a given objective function. The solution is based on a finitary abstraction, an algorithm for bounded language inclusion modulo an independence relation, and generation of a set of global constraints over synchronization placements. Each model of the global constraints set corresponds to a correctness-ensuring synchronization placement. The placement that is optimal w.r.t. the given objective function is chosen as the synchronization solution. We apply the approach to device-driver programming, where the driver threads call the software interface of the device and the API provided by the operating system. Our experiments demonstrate that our synthesis method is precise and efficient. The implicit specification helped us find one concurrency bug previously missed when model-checking using an explicit, user-provided specification. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronization placements are produced for our experiments, favoring a minimal number of synchronization operations or maximum concurrency, respectively.

  20. Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, C.; Belyey, V.

    2012-12-01

    EISCAT_3D is a project to build the next generation of incoherent scatter radars endowed with multiple 3-dimensional capabilities that will replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise-ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data is fed into various inversion algorithms. This simulation package is a powerful tool to evaluate various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.

  1. Comparison of Analysis Results Between 2D/1D Synthesis and RAPTOR-M3G in the Korea Standard Nuclear Plant (KSNP)

    NASA Astrophysics Data System (ADS)

    Joung Lim, Mi; Maeng, Young Jae; Fero, Arnold H.; Anderson, Stanwood L.

    2016-02-01

    The 2D/1D synthesis methodology has been used to calculate the fast neutron (E > 1.0 MeV) exposure to the beltline region of the reactor pressure vessel. This method uses the DORT 3.1 discrete ordinates code and the BUGLE-96 cross-section library based on ENDF/B-VI. RAPTOR-M3G (RApid Parallel Transport Of Radiation-Multiple 3D Geometries), which performs full 3D calculations, was developed based on domain decomposition algorithms, in which the spatial and angular domains are allocated to and processed on a multi-processor computer architecture. Compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor. Both methods are applied to surveillance test results for the Korea Standard Nuclear Plant (KSNP)-OPR (Optimized Power Reactor) 1000 MW. The objective of this paper is to compare the results of the KSNP surveillance program between 2D/1D synthesis and RAPTOR-M3G. Each operating KSNP has a reactor vessel surveillance program consisting of six surveillance capsules located between the core and the reactor vessel in the downcomer region near the reactor vessel wall. In addition to the in-vessel surveillance program, an Ex-Vessel Neutron Dosimetry (EVND) program has been implemented. In order to estimate surveillance test results, cycle-specific forward transport calculations were performed by 2D/1D synthesis and by RAPTOR-M3G, and the ratios between measured and calculated (M/C) reaction rates are discussed. The current plan is to install an EVND system in all Korean PWRs, including the new reactor type, the APR (Advanced Power Reactor) 1400 MW. This work will play an important role in establishing a KSNP-specific database of surveillance test results and will employ RAPTOR-M3G for surveillance dosimetry locations as well as other positions in the KSNP reactor vessel.

  2. The adaptive observer. [liapunov synthesis, single-input single-output, and reduced observers]

    NASA Technical Reports Server (NTRS)

    Carroll, R. L.

    1973-01-01

    An adaptive means for determining the state of a linear time-invariant differential system having unknown parameters is described, for use in systems where the criteria defining acceptable state behavior mandate a control that depends on unavailable measurements, so that the state must be generated from the measurements that are available. A single-input single-output adaptive observer and the reduced adaptive observer are developed. The basic ideas behind both the adaptive observer and the nonadaptive observer are examined. The Liapunov synthesis technique is surveyed, and the technique is applied to the adaptation algorithm of the adaptive observer.
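
    The nonadaptive observer on which the adaptive observer builds can be sketched in a few lines. The discrete-time system, gain, and all values below are toy assumptions; the adaptive version additionally updates estimates of the unknown parameters.

      import numpy as np

      # Minimal discrete-time Luenberger observer: the observer state xh is
      # corrected by the output error y - C xh through the gain L.
      A = np.array([[1.0, 0.1], [0.0, 0.9]])   # known plant, toy values
      B = np.array([[0.0], [0.1]])
      C = np.array([[1.0, 0.0]])               # only x1 is measured
      L = np.array([[0.8], [0.5]])             # gain chosen so A - L C is stable

      x = np.array([[1.0], [-1.0]])            # true state (unknown to observer)
      xh = np.zeros((2, 1))                    # observer state
      for k in range(50):
          u = np.array([[np.sin(0.2 * k)]])
          y = C @ x                            # available measurement
          xh = A @ xh + B @ u + L @ (y - C @ xh)
          x = A @ x + B @ u
      print(np.abs(x - xh).max())              # estimation error becomes small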

  3. On Problem of Synthesis of Control System for Quadrocopter

    NASA Astrophysics Data System (ADS)

    Larin, V. B.; Tunik, A. A.

    2017-05-01

    An algorithm for designing a control for a quadrocopter is given. Two cases of control of the horizontal motion of the vehicle are considered: in one case the terminal location is given, and in the other the cruise speed. The results are compared with those obtained by other authors.

  4. Using the Pearson Distribution for Synthesis of the Suboptimal Algorithms for Filtering Multi-Dimensional Markov Processes

    NASA Astrophysics Data System (ADS)

    Mit'kin, A. S.; Pogorelov, V. A.; Chub, E. G.

    2015-08-01

    We consider the method of constructing the suboptimal filter on the basis of approximating the a posteriori probability density of the multidimensional Markov process by the Pearson distributions. The proposed method can efficiently be used for approximating asymmetric, excessive, and finite densities.

  5. Evolutionary Optimization of a Quadrifilar Helical Antenna

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Automated antenna synthesis via evolutionary design has recently garnered much attention in the research literature. Evolutionary algorithms show promise because, among search algorithms, they are able to effectively search large, unknown design spaces. NASA's Mars Odyssey spacecraft is due to reach final Martian orbit insertion in January 2002. Onboard the spacecraft is a quadrifilar helical antenna that provides telecommunications in the UHF band with landed assets, such as robotic rovers. Each helix is driven by the same signal, phase-delayed in 90 deg increments. A small ground plane is provided at the base. It is designed to operate in the frequency band of 400-438 MHz. Based on encouraging previous results in automated antenna design using evolutionary search, we wanted to see whether such techniques could improve upon the Mars Odyssey antenna design. Specifically, a co-evolutionary genetic algorithm is applied to optimize the gain and size of the quadrifilar helical antenna. The optimization was performed in situ in the presence of a neighboring spacecraft structure: on the spacecraft, a large aluminum fuel tank is adjacent to the antenna. Since this fuel tank can dramatically affect the antenna's performance, we leave it to the evolutionary process to see whether it can exploit the fuel tank's properties advantageously. Optimizing in the presence of surrounding structures would be quite difficult for human antenna designers, and thus the actual antenna was designed for free space (with a small ground plane). In fact, when flying on the spacecraft, surrounding structures that are movable (e.g., solar panels) may be moved during the mission in order to improve the antenna's performance.

  6. Deep learning for computational chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on "deep" neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis, and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of state-of-the-art non-neural-network models across disparate research topics, and deep neural network based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  7. On the utilization of engineering knowledge in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, P.

    1984-01-01

    Some current research work conducted at the University of Michigan is described to illustrate efforts for incorporating knowledge into optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information into a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user in creating configurations that are most desirable from the manufacturing assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis intended for use in the design concept phase, in a way similar to the so-called idea-triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.

  8. [GIS and scenario analysis aid to water pollution control planning of river basin].

    PubMed

    Wang, Shao-ping; Cheng, Sheng-tong; Jia, Hai-feng; Ou, Zhi-dan; Tan, Bin

    2004-07-01

    The forward and backward algorithms for watershed water pollution control planning are summarized in this paper, along with their advantages and shortcomings. Spatial databases of water environmental function regions, pollution sources, monitoring sections and sewer outlets were built with ARCGIS8.1 as the platform in a case study of the Ganjiang valley, Jiangxi province. Based on the principles of the forward algorithm, four scenarios were designed for watershed pollution control. Under these scenarios, ten sets of planning schemes were generated to implement cascaded pollution source control. The investment costs of sewage treatment for these schemes were estimated by means of a series of cost-effectiveness functions; with pollution source prediction, the water quality under each planning scheme was modeled with a CSTR model. The modeled results of the different planning schemes were visualized through GIS to aid decision-making. With investment cost and water quality attainment as decision criteria, and based on an analysis of the economically endurable capacity for water pollution control in the Ganjiang river basin, two optimized schemes were proposed. The research shows that GIS technology and scenario analysis can provide good guidance on the synthesis, integrity and sustainability aspects of river basin water quality planning.

  9. Comparison of algorithms for determination of rotation measure and Faraday structure. I. 1100–1400 MHz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, X. H.; Akahori, Takuya; Anderson, C. S.

    2015-02-01

    Faraday rotation measures (RMs) and more general Faraday structures are key parameters for studying cosmic magnetism and are also sensitive probes of faint ionized thermal gas. A definition of which derived quantities are required for various scientific studies is needed, as well as an assessment of the challenges in determining Faraday structures. A wide variety of algorithms has been proposed to reconstruct these structures. In preparation for the Polarization Sky Survey of the Universe's Magnetism (POSSUM) to be conducted with the Australian Square Kilometre Array Pathfinder and the ongoing Galactic Arecibo L-band Feeds Array Continuum Transit Survey (GALFACTS), we ran a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling, and QU-fitting. The input models include sources with one Faraday thin component, two Faraday thin components, and one Faraday thick component. The frequency set is similar to POSSUM/GALFACTS, with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: (1) an average RM weighted by polarized intensity, RM_wtd, (2) the separation Δϕ of two Faraday components, and (3) the reduced chi-squared χ_r². Based on the current test data with a signal-to-noise ratio of about 32, we find the following. (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found. (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RM_wtd but with significantly higher errors for Δϕ. All other methods, including standard Faraday synthesis, frequently identify only one component when Δϕ is below or near the width of the Faraday point-spread function. (3) No methods as currently implemented work well for Faraday thick components, due to the narrow bandwidth. (4) There exist combinations of two Faraday components that produce a large range of acceptable fits and hence large uncertainties in the derived single RMs; in these cases, different RMs lead to the same Q, U behavior, so no method can recover a unique input model. Further exploration of all these issues is required before upcoming surveys will be able to provide reliable results on Faraday structures.
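
    The Faraday synthesis operation being benchmarked can be sketched directly from its definition: approximate the Faraday dispersion function F(phi) by the discrete sum (1/N) * sum_k P(lam2_k) * exp(-2i * phi * (lam2_k - lam2_0)), with P = Q + iU and lam2_0 the mean lambda-squared. The band below mimics the POSSUM/GALFACTS setup; the single noiseless thin component is a toy input.

      import numpy as np

      # Discrete Faraday (RM) synthesis over a 1.1-1.4 GHz band.
      c = 2.998e8
      freqs = np.linspace(1.1e9, 1.4e9, 256)
      lam2 = (c / freqs) ** 2
      lam2_0 = lam2.mean()

      rm_true = 40.0                              # rad/m^2, one thin component
      P = np.exp(2j * rm_true * lam2)             # noiseless complex polarization Q + iU

      phi = np.linspace(-200, 200, 2001)          # trial Faraday depths
      F = np.array([np.mean(P * np.exp(-2j * p * (lam2 - lam2_0))) for p in phi])
      print(phi[np.argmax(np.abs(F))])            # peak at 40 rad/m^2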

  10. A k-permutation algorithm for Fixed Satellite Service orbital allotments

    NASA Technical Reports Server (NTRS)

    Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.

    1988-01-01

    A satellite system synthesis problem, the satellite location problem (SLP), is addressed in this paper. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the Fixed Satellite Service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: (1) the problem of ordering the satellites and (2) the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, that has been developed to find solutions to SLPs formulated in the manner suggested is described. Solutions to small example problems are presented and analyzed.

  11. Multisource passive acoustic tracking: an application of random finite set data fusion

    NASA Astrophysics Data System (ADS)

    Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung

    2010-04-01

    Multisource passive acoustic tracking is useful in animal bio-behavioral studies, replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes capable of producing multiple direction-of-arrival (DOA) estimates, such data need to be combined into meaningful track estimates. The random finite set formalism provides a suitable probabilistic model for the analysis and synthesis of an optimal estimation algorithm. The proposed algorithm has been verified using a simulation and a controlled test experiment.

  12. Synthesis of vibroarthrographic signals in knee osteoarthritis diagnosis training.

    PubMed

    Shieh, Chin-Shiuh; Tseng, Chin-Dar; Chang, Li-Yun; Lin, Wei-Chun; Wu, Li-Fu; Wang, Hung-Yu; Chao, Pei-Ju; Chiu, Chien-Liang; Lee, Tsair-Fwu

    2016-07-19

    Vibroarthrographic (VAG) signals are useful indicators of knee osteoarthritis (OA) status. The objective was to build a template database of knee crepitus sounds, with which trainees can practice to shorten the time needed to learn to diagnose OA. Knee sound signals were obtained using an innovative stethoscope device with a goniometer. Each knee sound signal was recorded together with a Kellgren-Lawrence (KL) grade. The sound signal was segmented according to the goniometer data, Fourier transformed on the correlated frequency segment, and inverse Fourier transformed to recover the time-domain signal. A Haar wavelet transform was then applied. The median and the mean of the wavelet coefficients were each used to inverse-transform a synthesized signal for each KL category. The quality of the synthesized signals was assessed by a clinician, who evaluated sample signals produced by the two algorithms (median and mean). The accuracy rate of the median-coefficient algorithm (93 %) was better than that of the mean-coefficient algorithm (88 %) in cross-validation by a clinician using the synthesized VAG signals. The artificial signals we synthesized have the potential to support a learning system for medical students, interns and paramedical personnel for the diagnosis of OA. Our method therefore provides a feasible way to evaluate crepitus sounds that may assist in the diagnosis of knee OA.
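
    The median-coefficient synthesis step can be sketched with a one-level Haar transform; the study used a full Haar decomposition, and the signals below are synthetic toys, so this is illustrative only.

      import numpy as np

      # One-level orthonormal Haar analysis/synthesis.
      def haar_forward(x):
          a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
          d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
          return a, d

      def haar_inverse(a, d):
          x = np.empty(2 * len(a))
          x[0::2] = (a + d) / np.sqrt(2)
          x[1::2] = (a - d) / np.sqrt(2)
          return x

      # Element-wise median of the coefficients of several same-length
      # segments, inverse-transformed into a template for one KL grade.
      rng = np.random.default_rng(7)
      t = np.linspace(0, 1, 256)
      signals = [np.sin(2 * np.pi * 8 * t) + 0.3 * rng.standard_normal(256)
                 for _ in range(9)]
      coeffs = [haar_forward(s) for s in signals]
      a_med = np.median([a for a, _ in coeffs], axis=0)
      d_med = np.median([d for _, d in coeffs], axis=0)
      template = haar_inverse(a_med, d_med)
      print(template.shape)  # (256,) synthesized KL-grade template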

  13. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement for global observational data of ET can be satisfied neither with our sparse global in-situ networks, nor with the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on combinations of ET drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g. machine learning). However, and despite the efforts of different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman-Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal scales for the 2005-2007 reference period will be presented. The skill of these algorithms in closing the water balance over the continents will be assessed by comparisons to runoff data. The consistency in forcing data will make it possible to (a) evaluate the skill of these five algorithms in producing ET over particular ecosystems, (b) facilitate the attribution of the observed differences to either the algorithms or the driving data, and (c) set up a solid scientific basis for the development of global long-term benchmark ET products. Project progress can be followed on our website http://wacmoset.estellus.eu. REFERENCES Fisher, J. B., Tu, K. P., and Baldocchi, D. D. Global estimates of the land-atmosphere water flux based on monthly AVHRR and ISLSCP-II data, validated at 16 FLUXNET sites. Remote Sens. Environ. 112, 901-919, 2008. Jiménez, C. et al. Global intercomparison of 12 land surface heat flux estimates. J. Geophys. Res. 116, D02102, 2011. Jung, M. et al. Recent decline in the global land evapotranspiration trend due to limited moisture supply. Nature 467, 951-954, 2010. Miralles, D. G. et al. Global land-surface evaporation estimated from satellite-based observations. Hydrol. Earth Syst. Sci. 15, 453-469, 2011. Mu, Q., Zhao, M. & Running, S. W. Improvements to a MODIS global terrestrial evapotranspiration algorithm. Remote Sens. Environ. 115, 1781-1800, 2011. Mueller, B. et al. Benchmark products for land evapotranspiration: LandFlux-EVAL multi-dataset synthesis. Hydrol. Earth Syst. Sci. 17, 3707-3720, 2013. Su, Z. The Surface Energy Balance System (SEBS) for estimation of turbulent heat fluxes. Hydrol. Earth Syst. Sci. 6, 85-99, 2002.

  14. Design and pilot evaluation of a system to develop computer-based site-specific practice guidelines from decision models.

    PubMed

    Sanders, G D; Nease, R F; Owens, D K

    2000-01-01

    Local tailoring of clinical practice guidelines (CPGs) requires experts in medicine and evidence synthesis unavailable in many practice settings. The authors' computer-based system enables developers and users to create, disseminate, and tailor CPGs, using normative decision models (DMs). ALCHEMIST, a web-based system, analyzes a DM, creates a CPG in the form of an annotated algorithm, and displays for the guideline user the optimal strategy. ALCHEMIST's interface enables remote users to tailor the guideline by changing underlying input variables and observing the new annotated algorithm that is developed automatically. In a pilot evaluation of the system, a DM was used to evaluate strategies for staging non-small-cell lung cancer. Subjects (n = 15) compared the automatically created CPG with published guidelines for this staging and critiqued both using a previously developed instrument to rate the CPGs' usability, accountability, and accuracy on a scale of 0 (worst) to 2 (best), with higher scores reflecting higher quality. The mean overall score for the ALCHEMIST CPG was 1.502, compared with the published-CPG score of 0.987 (p = 0.002). The ALCHEMIST CPG scores for usability, accountability, and accuracy were 1.683, 1.393, and 1.430, respectively; the published CPG scores were 1.192, 0.941, and 0.830 (each comparison p < 0.05). On a scale of 1 (worst) to 5 (best), users' mean ratings of ALCHEMIST's ease of use, usefulness of content, and presentation format were 4.76, 3.98, and 4.64, respectively. The results demonstrate the feasibility of a web-based system that automatically analyzes a DM and creates a CPG as an annotated algorithm, enabling remote users to develop site-specific CPGs. In the pilot evaluation, the ALCHEMIST guidelines met established criteria for quality and compared favorably with national CPGs. The high usability and usefulness ratings suggest that such systems can be a good tool for guideline development.

  15. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A Portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (the CAD algorithms are built on CARM, a general-purpose platform for portable parallel programming, together with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  16. ParticleCall: A particle filter for base calling in next-generation sequencing systems

    PubMed Central

    2012-01-01

    Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
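
    ParticleCall's model specifics are not given in the abstract, but the sequential Monte Carlo backbone it relies on is the standard bootstrap particle filter, sketched here generically (the HMM pieces — state, transition, and emission model — are abstracted into user-supplied functions):

    ```python
    import numpy as np

    def bootstrap_filter(obs, n_particles, init, transition, likelihood):
        """Generic bootstrap particle filter of the kind ParticleCall builds
        on. `init(n, rng)` draws initial particles, `transition(x, rng)`
        propagates them through the HMM, and `likelihood(y, x)` returns the
        emission probability of observation y for each particle."""
        rng = np.random.default_rng(0)
        x = init(n_particles, rng)                   # initial particle states
        estimates = []
        for y in obs:
            x = transition(x, rng)                   # propagate particles
            w = likelihood(y, x)                     # weight by emission prob.
            w = w / w.sum()
            idx = rng.choice(n_particles, n_particles, p=w)   # resample
            x = x[idx]
            estimates.append(x.mean(axis=0))         # posterior-mean estimate
        return np.array(estimates)
    ```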

  17. Optimized FPGA Implementation of the Thyroid Hormone Secretion Mechanism Using CAD Tools.

    PubMed

    Alghazo, Jaafar M

    2017-02-01

    The goal of this paper is to implement the secretion mechanism of the thyroid hormone (TH), described by bio-mathematical differential equations (DEs), on an FPGA chip. A Hardware Description Language (HDL) is used to develop a behavioral model of the mechanism derived from the DEs. The thyroid hormone secretion mechanism is simulated with the interaction of the related stimulating and inhibiting hormones. The simulation is synthesized with the aid of CAD tools and downloaded onto a Field Programmable Gate Array (FPGA) chip. The chip output shows behavior identical to that of the designed algorithm in simulation. It is concluded that the chip mimics the thyroid hormone secretion mechanism. The chip, operating in real time, is a computer-independent, stand-alone system.
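
    The paper's actual differential equations are not given in the abstract; purely as an illustration of the DE-to-behavioral-model step, here is a hypothetical negative-feedback secretion model integrated by forward Euler, the kind of fixed-step discretization an HDL behavioral model would implement (all parameters are invented):

    ```python
    import numpy as np

    def simulate(t_end=100.0, dt=0.01, k1=1.0, k2=0.8, d1=0.3, d2=0.2):
        """Illustrative (hypothetical) feedback model: TSH stimulates TH
        release, TH inhibits TSH. Parameters do not come from the paper."""
        n = int(t_end / dt)
        tsh, th = np.empty(n), np.empty(n)
        tsh[0], th[0] = 1.0, 0.5
        for i in range(n - 1):
            # forward-Euler update, as a fixed-step HDL model would compute
            dtsh = k1 / (1.0 + th[i] ** 2) - d1 * tsh[i]   # inhibited by TH
            dth = k2 * tsh[i] - d2 * th[i]                 # stimulated by TSH
            tsh[i + 1] = tsh[i] + dt * dtsh
            th[i + 1] = th[i] + dt * dth
        return tsh, th
    ```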

  18. Architecture-driven reuse of code in KASE

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    In order to support the synthesis of large, complex software systems, we need to focus on issues pertaining to the architectural design of a system in addition to algorithm and data structure design. An approach is presented that is based on abstracting the architectural design of a set of problems in the form of a generic architecture and providing tools that can be used to instantiate the generic architecture for specific problem instances. Such an approach also facilitates reuse of code between different systems belonging to the same problem class. An application of our approach to a realistic problem is described, the results of the exercise are presented, and how our approach compares to other work in this area is discussed.

  19. Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.; Wang, C. W.

    1986-01-01

    Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, we design an experiment and solve the same problem under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, we present results of practical, rather than statistical, significance. Our implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than our implementation of a gradient search procedure does with our objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.
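
    As an illustration of the cyclic coordinate search idea (our sketch, not the authors' code): one satellite longitude is adjusted at a time, moves that raise a caller-supplied objective such as the minimum C/I over service-area test points are kept, and the step shrinks when a full cycle brings no improvement:

    ```python
    def cyclic_coordinate_search(longitudes, objective, step=0.5, tol=1e-3):
        """Sketch of a cyclic coordinate search over satellite longitudes.
        `objective(x)` is a problem-specific stand-in (e.g., the minimum
        C/I ratio computed at service-area test points), to be maximized."""
        x = list(longitudes)
        best = objective(x)
        while step > tol:
            improved = False
            for i in range(len(x)):          # one coordinate (satellite) at a time
                for delta in (+step, -step):
                    x[i] += delta
                    val = objective(x)
                    if val > best:
                        best, improved = val, True
                    else:
                        x[i] -= delta        # undo the unhelpful move
            if not improved:
                step *= 0.5                  # refine the search
        return x, best
    ```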

  20. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  1. Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.

    1986-01-01

    Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, an experiment is designed and the same problem is solved under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, results of practical, rather than statistical, significance are presented. Implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than implementation of a gradient search procedure does with the objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.

  2. A synthesis of light absorption properties of the Arctic Ocean: application to semianalytical estimates of dissolved organic carbon concentrations from space

    NASA Astrophysics Data System (ADS)

    Matsuoka, A.; Babin, M.; Doxaran, D.; Hooker, S. B.; Mitchell, B. G.; Bélanger, S.; Bricaud, A.

    2014-06-01

    In addition to scattering coefficients, the light absorption coefficients of particulate and dissolved materials are the main factors determining the light propagation of the visible part of the spectrum and are, thus, important for developing ocean color algorithms. While these absorption properties have recently been documented by a few studies for the Arctic Ocean (e.g., Matsuoka et al., 2007, 2011; Ben Mustapha et al., 2012), the data sets used in the literature were sparse and individually insufficient to draw a general view of the basin-wide spatial and temporal variations in absorption. To achieve such a task, we built a large absorption database of the Arctic Ocean by pooling the majority of published data sets and merging new data sets. Our results show that the total nonwater absorption coefficients measured in the eastern Arctic Ocean (EAO; Siberian side) are significantly higher than in the western Arctic Ocean (WAO; North American side). This higher absorption is explained by higher concentration of colored dissolved organic matter (CDOM) in watersheds on the Siberian side, which contains a large amount of dissolved organic carbon (DOC) compared to waters off North America. In contrast, the relationship between the phytoplankton absorption (aϕ(λ)) and chlorophyll a (chl a) concentration in the EAO was not significantly different from that in the WAO. Because our semianalytical CDOM absorption algorithm is based on chl a-specific aϕ(λ) values (Matsuoka et al., 2013), this result indirectly suggests that CDOM absorption can be appropriately derived not only for the WAO but also for the EAO using ocean color data. Based on statistics, derived CDOM absorption values were reasonable compared to in situ measurements. By combining this algorithm with empirical DOC versus CDOM relationships, a semianalytical algorithm for estimating DOC concentrations for river-influenced coastal waters of the Arctic Ocean is presented and applied to satellite ocean color data.

  3. Attitude guidance and simulation with animation of a land-survey satellite motion

    NASA Astrophysics Data System (ADS)

    Somova, Tatyana

    2017-01-01

    We consider problems of synthesizing vector spline attitude guidance laws for a land-survey satellite and of in-flight support of the satellite attitude control system using computer animation of its motion. Results on the efficiency of the developed algorithms are presented.

  4. A CLEAN-based method for mosaic deconvolution

    NASA Astrophysics Data System (ADS)

    Gueth, F.; Guilloteau, S.; Viallefond, F.

    1995-03-01

    Mosaicing may be used in aperture synthesis to map large fields of view. So far, only MEM techniques have been used to deconvolve mosaic images (Cornwell 1988). A CLEAN-based method has been developed, in which the CLEAN components are located using a modified expression. This allows better utilization of the information and a consequent noise reduction in the overlapping regions. Simulations show that this method gives correct clean maps and recovers most of the flux of the sources. The introduction of short-spacing visibilities in the data set is strongly required: their absence introduces an artificial lack of structure on the corresponding scales in the mosaic images. The formation of "stripes" in clean maps may also occur, but this phenomenon can be significantly reduced by using the Steer-Dewdney-Ito algorithm (Steer, Dewdney & Ito 1984) to identify the CLEAN components. Typical IRAM interferometer pointing errors do not have a significant effect on the reconstructed images.
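
    For reference, the baseline that the mosaic method modifies is the standard Högbom CLEAN loop, sketched here in 1-D (the paper's modified component-search expression for mosaics is not reproduced):

    ```python
    import numpy as np

    def hogbom_clean(dirty, beam, gain=0.1, niter=500, threshold=0.0):
        """Standard 1-D Hoegbom CLEAN: repeatedly find the residual peak,
        record a scaled component there, and subtract the shifted, scaled
        dirty beam. `beam` is sampled on the same grid, odd length, peak
        at its center."""
        res = dirty.copy()
        comps = np.zeros_like(dirty)
        half = len(beam) // 2
        for _ in range(niter):
            i = np.argmax(np.abs(res))
            peak = res[i]
            if abs(peak) <= threshold:
                break
            comps[i] += gain * peak
            # subtract the dirty beam centered on the peak position
            lo, hi = max(0, i - half), min(len(res), i + half + 1)
            res[lo:hi] -= gain * peak * beam[half - (i - lo): half + (hi - i)]
        return comps, res
    ```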

  5. Image multiplexing and authentication based on double phase retrieval in fresnel transform domain

    NASA Astrophysics Data System (ADS)

    Chang, Hsuan-Ting; Lin, Che-Hsian; Chen, Chien-Yue

    2017-04-01

    An image multiplexing and authentication method based on the double-phase retrieval algorithm (DPRA) with the manipulations of wavelength and position in the Fresnel transform (FrT) domain is proposed in this study. The DPRA generates two matched phase-only functions (POFs) in the different planes so that the corresponding image can be reconstructed at the output plane. Given a number of target images, all the sets of matched POFs are used to generate the phase-locked system through the phase modulation and synthesis to achieve the multiplexing purpose. To reconstruct a target image, the corresponding phase key and all the correct parameters in the FrT are required. Therefore, the authentication system with high-level security can be achieved. The computer simulation verifies the validity of the proposed method and also shows good resistance to the crosstalk among the reconstructed images.

  6. Results of PRISMA/FFIORD extended mission and applicability to future formation flying and active debris removal missions

    NASA Astrophysics Data System (ADS)

    Delpech, Michel; Berges, Jean-Claude; Karlsson, Thomas; Malbet, Fabien

    2013-07-01

    CNES performed several experiments during the extended PRISMA mission, which started in August 2011. A first session in October 2011 addressed two objectives: 1) demonstrate angles-only navigation to rendezvous with a non-cooperative object; 2) exercise transitions between RF-based and vision-based control during final formation acquisition. A complementary experiment in September 2012 mimicked a future astrometry mission and implemented the manoeuvres required to point the axis joining the two satellites toward a celestial target and keep it fixed during an observation period. In the first sections, the paper presents the experiment motivations, describes its main design features including the evolution of the guidance and control algorithms, and provides a synthesis of the most significant results along with a discussion of the lessons learned. In the last part, the paper discusses the applicability of these experimental results to an active debris removal mission concept that is currently being studied.

  7. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    NASA Astrophysics Data System (ADS)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance, and so on. In order to achieve fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm with OpenMP parallel computing technology, which was further applied to the optimization process for the HyperSpectral Imager of the Chinese 'HJ-1' satellite. The results show that the method based on multi-core parallel computing technology can manage the multi-core CPU hardware resources competently and significantly improve the efficiency of spectrum reconstruction processing. If the technology is applied to parallel computing on workstations with more cores, it should become possible to complete real-time Fourier transform imaging spectrometer data processing with a single computer.

  8. Combinatorial microfluidic droplet engineering for biomimetic material synthesis

    PubMed Central

    Bawazer, Lukmaan A.; McNally, Ciara S.; Empson, Christopher J.; Marchant, William J.; Comyn, Tim P.; Niu, Xize; Cho, Soongwon; McPherson, Michael J.; Binks, Bernard P.; deMello, Andrew; Meldrum, Fiona C.

    2016-01-01

    Although droplet-based systems are used in a wide range of technologies, opportunities for systematically customizing their interface chemistries remain relatively unexplored. This article describes a new microfluidic strategy for rapidly tailoring emulsion droplet compositions and properties. The approach uses a simple platform for screening arrays of droplet-based microfluidic devices and couples this with combinatorial selection of the droplet compositions. Through the application of genetic algorithms over multiple screening rounds, droplets with target properties can be rapidly generated. The potential of this method is demonstrated by creating droplets with enhanced stability, where this is achieved by selecting carrier fluid chemistries that promote titanium dioxide formation at the droplet interfaces. The interface is a mixture of amorphous and crystalline phases, and the resulting composite droplets are biocompatible, supporting in vitro protein expression in their interiors. This general strategy will find widespread application in advancing emulsion properties for use in chemistry, biology, materials, and medicine. PMID:27730209
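
    The genetic-algorithm loop behind the screening rounds has a familiar shape; the sketch below is a generic illustration in which a genome is a vector of carrier-fluid component fractions and `fitness` stands in for the measured droplet property (in the actual work this came from physical screening rounds, not a function call):

    ```python
    import random

    def genetic_search(fitness, n_genes=6, pop_size=24, generations=10,
                       mut_rate=0.2):
        """Generic GA loop: truncation selection, one-point crossover, and
        random-reset mutation over genomes of component fractions."""
        pop = [[random.random() for _ in range(n_genes)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=fitness, reverse=True)
            parents = scored[:pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_genes)    # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mut_rate:        # mutation
                    child[random.randrange(n_genes)] = random.random()
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)
    ```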

  9. Intercomparison of mid latitude storm diagnostics (IMILAST) - synthesis of project results

    NASA Astrophysics Data System (ADS)

    Neu, Urs

    2017-04-01

    The analysis of the occurrence of mid-latitude storms is of great socioeconomic interest due to their vast and destructive impacts. However, a unique definition of cyclones is missing, and therefore both the definition of what a cyclone is and the quantification of its strength involve subjective choices. Existing automatic cyclone identification and tracking algorithms are based on different definitions and use diverse characteristics, e.g. data transformation, metrics used for cyclone identification, cyclone identification procedures, or tracking methods. The IMILAST project systematically compares different cyclone detection and tracking methods, with the aim of comprehensively assessing the influence of different algorithms on cyclone climatologies and on temporal trends in the frequency, strength, and other characteristics of cyclones, and thus of quantifying systematic uncertainties in mid-latitude storm identification and tracking. The three main intercomparison experiments used the ERA-Interim reanalysis as a common input data set and focused on differences between the methods with respect to number, track density, life cycle characteristics, and trend patterns on the one hand, and on potential differences between the methods in the long-term climate change signal of cyclonic activity on the other hand. For the third experiment, the intercomparison period was extended to the 30-year period from 1979 to 2009, with a focus on more specific aspects such as parameter sensitivities, the comparison of automated to manual tracking sets, regional analyses (regional trends, Arctic and Antarctic cyclones, cyclones in the Mediterranean), and specific phenomena like the splitting and merging of cyclones. In addition, the representation of storms and their characteristics in reanalysis data sets is examined to further enhance knowledge of uncertainties related to storm occurrence. This poster presents a synthesis of the main results from the intercomparison activities within IMILAST.

  10. New development of the image matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqiang; Feng, Zhao

    2018-04-01

    To study image matching algorithms, the four elements of such algorithms are described: similarity measurement, feature space, search space, and search strategy. Four common indexes for evaluating image matching algorithms are described: matching accuracy, matching efficiency, robustness, and universality. This paper then describes the principles of image matching algorithms based on gray values, on features, on frequency-domain analysis, on neural networks, and on semantic recognition, and analyzes their characteristics and latest research achievements. Finally, the development trend of image matching algorithms is discussed. This study is useful for improving algorithms, designing new algorithms, and selecting algorithms in practice.
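
    As a concrete instance of the gray-value category in this survey, a brute-force normalized cross-correlation matcher can be written in a few lines (an illustration of the principle, not code from the paper):

    ```python
    import numpy as np

    def ncc_match(image, template):
        """Gray-value image matching by normalized cross-correlation:
        slide the template over the image and return the top-left corner
        of the best match (brute force, written for clarity not speed)."""
        th, tw = template.shape
        t = template - template.mean()
        tnorm = np.sqrt((t ** 2).sum())
        best, best_pos = -np.inf, (0, 0)
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                w = image[y:y + th, x:x + tw]
                wz = w - w.mean()
                denom = np.sqrt((wz ** 2).sum()) * tnorm
                if denom > 0:
                    score = (wz * t).sum() / denom   # NCC in [-1, 1]
                    if score > best:
                        best, best_pos = score, (y, x)
        return best_pos, best
    ```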

  11. In Silico Synthesis of Synthetic Receptors: A Polymerization Algorithm.

    PubMed

    Cowen, Todd; Busato, Mirko; Karim, Kal; Piletsky, Sergey A

    2016-12-01

    Molecularly imprinted polymer (MIP) synthetic receptors have proposed and demonstrated applications in chemical extraction, sensors, assays, catalysis, targeted drug delivery, and direct inhibition of harmful chemicals and pathogens. However, they rely heavily on effective design for success. An algorithm has been written which mimics radical polymerization atomistically, accounting for chemical and spatial discrimination, hybridization, and geometric optimization. Synthetic ephedrine receptors were synthesized in silico to demonstrate the accuracy of the algorithm in reproducing polymer structures at the atomic level. Comparative analysis in the design of a synthetic ephedrine receptor demonstrates that the new method can effectively identify affinity trends and binding-site selectivities where commonly used alternative methods cannot. This new method is believed to generate the most realistic models of MIPs thus far produced. This suggests that the algorithm could be a powerful new tool in the design and analysis of various polymers, including MIPs, with significant implications in areas of biotechnology, biomimetics, and the materials sciences more generally. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Noise suppression methods for robust speech processing

    NASA Astrophysics Data System (ADS)

    Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.

    1980-05-01

    Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure, compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrowband coded speech, the development of appropriate real-time noise suppression algorithms, and the development of speech parameter identification methods that treat signal contamination as a fundamental element in the estimation process. This report describes the current research and results in the areas of noise suppression using dual-input adaptive noise cancellation, short-time Fourier transform algorithms, and articulation-rate change techniques, and describes an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded helicopter speech by 10.6 points.
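
    The spectral subtraction algorithm mentioned in the final sentence is simple to sketch; this is a minimal magnitude-domain version with assumed frame and hop sizes, not the report's implementation:

    ```python
    import numpy as np

    def spectral_subtraction(x, noise, frame=256, hop=128, floor=0.01):
        """Minimal spectral subtraction: estimate the noise magnitude
        spectrum from a noise-only segment, subtract it frame by frame
        from the short-time spectra of the noisy speech, and resynthesize
        by overlap-add using the original phase."""
        win = np.hanning(frame)
        noise_mag = np.mean(
            [np.abs(np.fft.rfft(noise[i:i + frame] * win))
             for i in range(0, len(noise) - frame, hop)], axis=0)
        out = np.zeros(len(x))
        for i in range(0, len(x) - frame, hop):
            spec = np.fft.rfft(x[i:i + frame] * win)
            # subtract noise magnitude, keep a spectral floor to limit
            # "musical noise", and restore the noisy phase
            mag = np.maximum(np.abs(spec) - noise_mag, floor * noise_mag)
            out[i:i + frame] += np.fft.irfft(
                mag * np.exp(1j * np.angle(spec)), frame)
        return out
    ```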

  13. Mechanism synthesis and 2-D control designs of an active three cable crane

    NASA Technical Reports Server (NTRS)

    Yang, Li-Farn; Mikulas, Martin M., Jr.

    1992-01-01

    A Lunar Crane with a suspension system based on a three-cable mechanism is investigated to provide a stable end-effector for hoisting, positioning, and assembling large components during construction and servicing of a Lunar Base. The three-cable suspension mechanism consists of a structural framework of three cables pointing to a common point that closely coincides with the suspended payload's center of gravity. The vibrational characteristics of this three-cable suspension system are investigated by comparing a simple 2-D symmetric suspension model and a swinging pendulum in terms of their analytical natural frequency equations. A study is also made of actively controlling the dynamics of the crane using two different actuator concepts. Lyapunov-based control algorithms are developed to derive two regulator-type control laws that suppress system vibrations for both sets of system dynamics. Simulations of initial-value dynamic responses as well as control performance for the two different system dynamics are also presented.

  14. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two that allows image retrieval to be performed directly in the compressed domain of images, without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. they discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the observation that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
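
    The first method's core idea, that prediction residuals double as a texture descriptor, can be illustrated with a simple left-neighbour predictor standing in for the actual lossless coder (our simplification):

    ```python
    import numpy as np

    def residual_descriptor(img, bins=64):
        """Sketch: the residuals of a predictive coder serve as a texture
        descriptor. Here the predictor is simply the left neighbour."""
        img = img.astype(np.int16)                   # allow signed residuals
        residual = img[:, 1:] - img[:, :-1]          # prediction-error image
        hist, _ = np.histogram(residual, bins=bins, range=(-255, 255))
        return hist / hist.sum()                     # normalized histogram

    def similarity(h1, h2):
        """Histogram intersection as a simple retrieval similarity."""
        return np.minimum(h1, h2).sum()
    ```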

  15. One Approach to the Synthesis, Design and Manufacture of Hyperboloid Gear Sets With Face Mating Gears. Part 1: Basic Theoretical and Cad Experience

    NASA Astrophysics Data System (ADS)

    Abadjiev, Valentin; Abadjieva, Emilia

    2016-06-01

    Hyperboloid gear drives with face mating gears are used to transform rotations between shafts with non-parallel and non-intersecting axes. A special case of these transmissions are Spiroid and Helicon gear drives. The classical gear drives of this type are the Archimedean ones. The objects of this study are hyperboloid gear drives with face meshing, in which the pinion possesses threads of conic convolute, Archimedean, and involute types, or threads of cylindrical convolute, Archimedean, and involute types. For simplicity, all three types of transmissions with face mating gears and a conic pinion are titled Spiroid, and all three types of transmissions with face mating gears and a cylindrical pinion are titled Helicon. Principles of the mathematical modelling of tooth contact synthesis are discussed in this study. The presented research shows that the synthesis is realized by the application of two mathematical models: a pitch contact point model and a mesh region model. Two approaches for the synthesis of the gear drives in accordance with Olivier's principles are illustrated. The algorithms and computer programs for the optimization synthesis and design of the studied hyperboloid gear drives are presented.

  16. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the massive amounts of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is both data intensive and computation intensive. This paper presents the algorithms of two parallel software correlators under multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology and runs on SMP (Symmetric Multiprocessor) servers. Another high-speed prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators have the characteristics of a flexible structure, scalability, and the ability to correlate data from 10 stations.

  17. A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1989-01-01

    Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.

  18. AutoClickChem: click chemistry in silico.

    PubMed

    Durrant, Jacob D; McCammon, J Andrew

    2012-01-01

    Academic researchers and many in industry often lack the financial resources available to scientists working in "big pharma." High costs include those associated with high-throughput screening and chemical synthesis. In order to address these challenges, many researchers have in part turned to alternate methodologies. Virtual screening, for example, often substitutes for high-throughput screening, and click chemistry ensures that chemical synthesis is fast, cheap, and comparatively easy. Though both in silico screening and click chemistry seek to make drug discovery more feasible, it is not yet routine to couple these two methodologies. We here present a novel computer algorithm, called AutoClickChem, capable of performing many click-chemistry reactions in silico. AutoClickChem can be used to produce large combinatorial libraries of compound models for use in virtual screens. As the compounds of these libraries are constructed according to the reactions of click chemistry, they can be easily synthesized for subsequent testing in biochemical assays. Additionally, in silico modeling of click-chemistry products may prove useful in rational drug design and drug optimization. AutoClickChem is based on the pymolecule toolbox, a framework that may facilitate the development of future python-based programs that require the manipulation of molecular models. Both the pymolecule toolbox and AutoClickChem are released under the GNU General Public License version 3 and are available for download from http://autoclickchem.ucsd.edu.

  19. AutoClickChem: Click Chemistry in Silico

    PubMed Central

    Durrant, Jacob D.; McCammon, J. Andrew

    2012-01-01

    Academic researchers and many in industry often lack the financial resources available to scientists working in “big pharma.” High costs include those associated with high-throughput screening and chemical synthesis. In order to address these challenges, many researchers have in part turned to alternate methodologies. Virtual screening, for example, often substitutes for high-throughput screening, and click chemistry ensures that chemical synthesis is fast, cheap, and comparatively easy. Though both in silico screening and click chemistry seek to make drug discovery more feasible, it is not yet routine to couple these two methodologies. We here present a novel computer algorithm, called AutoClickChem, capable of performing many click-chemistry reactions in silico. AutoClickChem can be used to produce large combinatorial libraries of compound models for use in virtual screens. As the compounds of these libraries are constructed according to the reactions of click chemistry, they can be easily synthesized for subsequent testing in biochemical assays. Additionally, in silico modeling of click-chemistry products may prove useful in rational drug design and drug optimization. AutoClickChem is based on the pymolecule toolbox, a framework that may facilitate the development of future python-based programs that require the manipulation of molecular models. Both the pymolecule toolbox and AutoClickChem are released under the GNU General Public License version 3 and are available for download from http://autoclickchem.ucsd.edu. PMID:22438795

  20. Multi-atlas-based CT synthesis from conventional MRI with patch-based refinement for MRI-based radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Lee, Junghoon; Carass, Aaron; Jog, Amod; Zhao, Can; Prince, Jerry L.

    2017-02-01

    Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multi-atlas-based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.
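
    A minimal sketch of the SSIM-weighted fusion step, assuming registration has already been done upstream and simplifying the local weighting (scikit-image's SSIM map is used; the patch-based refinement is omitted):

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    def fuse_atlas_cts(target_mr, atlas_mrs, atlas_cts):
        """Weight each registered atlas CT, voxel by voxel, by the local
        structural similarity (SSIM map) between its registered atlas MR
        and the target MR, then average. Inputs are same-shape arrays."""
        eps = 1e-6
        weights = []
        for mr in atlas_mrs:
            _, ssim_map = structural_similarity(
                target_mr, mr,
                data_range=target_mr.max() - target_mr.min(), full=True)
            weights.append(np.clip(ssim_map, 0, None) + eps)
        weights = np.stack(weights)
        weights /= weights.sum(axis=0)        # normalize across atlases
        return (weights * np.stack(atlas_cts)).sum(axis=0)
    ```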

  1. Multi-atlas-based CT synthesis from conventional MRI with patch-based refinement for MRI-based radiotherapy planning.

    PubMed

    Lee, Junghoon; Carass, Aaron; Jog, Amod; Zhao, Can; Prince, Jerry L

    2017-02-01

    Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multi-atlas-based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.

  2. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special-purpose computing for Space Station applications. Potential functions for video image special-purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Architectural approaches are also being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented in an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams, and outlines.

  3. Replacing critical rare earth materials in high energy density magnets

    NASA Astrophysics Data System (ADS)

    McCallum, R. William

    2012-02-01

    High energy density permanent magnets are crucial to the design of internal permanent magnet (IPM) motors for hybrid and electric vehicles and direct-drive wind generators. Current motor designs use rare earth permanent magnets, which easily meet the performance goals; however, rising concerns over cost and foreign control of the current supply of rare earth resources have motivated a search for non-rare-earth permanent magnet alloys with performance metrics that allow the design of permanent magnet motors and generators without rare earth magnets. This talk will discuss the state of non-rare-earth permanent magnets and efforts both to improve the current materials and to find new materials. These efforts combine first-principles calculations and meso-scale magnetic modeling with advanced characterization and synthesis techniques in order to advance the state of the art in non-rare-earth permanent magnets. The use of genetic algorithms in first-principles structural calculations, combinatorial synthesis in the experimental search for materials, atom probe microscopy to characterize grain boundaries at the atomic level, and other state-of-the-art techniques will be discussed. In addition, the possibility of replacing critical rare earth elements with the most abundant rare earth element, Ce, will be discussed.

  4. Design and investigation of potential Sn-Te-P and Zr-Te-P class of Dirac materials

    NASA Astrophysics Data System (ADS)

    Sarswat, Prashant; Sarkar, Sayan; Free, Michael

    Motivated by the design and synthesis of new Dirac materials through symmetry perturbation, we explored the substitution of a Sn vacancy by P, which maintains the intrinsic band inversion at the L point but also shrinks the direct bandgap upon the incorporation of spin-orbit coupling. In a similar line of investigation, Zr-Te-P was also systematically studied. The synthesis of both the Sn-Te-P and Zr-Te-P systems of compounds resulted in the formation of long needle-type crystals and bulk porous deposits. The exotic morphology of the P-doped SnTe needles shows pierced surfaces throughout their extension. First-principles calculations were also carried out for these sets of compounds using the Generalized Gradient Approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional. To ensure structural optimization, a limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm was employed, and the total energy with the PBE exchange-correlation functional was used to calculate the formation energy per atom. These new modifications have the potential to establish a new class of Dirac materials, opening new frontiers of interest.

  5. A hardware experimental platform for neural circuits in the auditory cortex

    NASA Astrophysics Data System (ADS)

    Rodellar-Biarge, Victoria; García-Dominguez, Pablo; Ruiz-Rizaldos, Yago; Gómez-Vilda, Pedro

    2011-05-01

    Speech processing in the human brain is a very complex process, far from being fully understood, although much progress has been made recently. Neuromorphic speech processing is a new research orientation in the bio-inspired systems approach that seeks solutions for the automatic treatment of specific problems (recognition, synthesis, segmentation, diarization, etc.) which cannot be adequately solved using classical algorithms. In this paper a neuromorphic speech processing architecture is presented. The systematic bottom-up synthesis of layered structures reproduces the dynamic feature detection of speech related to plausible neural circuits which work as interpretation centres located in the auditory cortex. The elementary model is based on Hebbian neuron-like units. For the computation of the architecture a flexible framework is proposed in the environment of Matlab®/Simulink®/HDL, which allows building models in different description styles and at different complexity and implementation levels. It provides a flexible platform for experimenting with the influence of the number of neurons and interconnections on the precision of the results and on performance. Such experimentation may help both in better understanding how neural circuits in the brain may work and in seeing how speech processing can benefit from this understanding.

  6. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
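
    The Fourier synthesis construction, a flat prescribed amplitude spectrum with uniformly random phases followed by an inverse transform, is compact enough to sketch; the rfft conventions and unit-variance scaling below are our assumptions, not the paper's code:

    ```python
    import numpy as np

    def fourier_synthesis_noise(n, seed=None):
        """White noise by Fourier synthesis: unit amplitude at every
        frequency bin, uniformly random phases, inverse real FFT. The sum
        of many random-phase sinusoids is approximately Gaussian (CLT)."""
        rng = np.random.default_rng(seed)
        nf = n // 2 + 1                          # number of rfft bins
        spec = np.exp(1j * rng.uniform(0, 2 * np.pi, nf))
        spec[0] = 0.0                            # zero the DC bin (zero mean)
        if n % 2 == 0:
            spec[-1] = rng.choice([-1.0, 1.0])   # Nyquist bin must be real
        x = np.fft.irfft(spec, n)
        # Parseval: scaling by sqrt(n) gives variance ~ 1 (up to the DC bin)
        return x * np.sqrt(n)
    ```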

  7. Rendering of 3D-wavelet-compressed concentric mosaic scenery with progressive inverse wavelet synthesis (PIWS)

    NASA Astrophysics Data System (ADS)

    Wu, Yunnan; Luo, Lin; Li, Jin; Zhang, Ya-Qin

    2000-05-01

    The concentric mosaics offer a quick solution to the construction and navigation of a virtual environment. To reduce the vast amount of data in the concentric mosaics, a compression scheme based on a 3D wavelet transform was proposed in a previous paper. In this work, we investigate the efficient implementation of the renderer. It is preferable not to expand the compressed bitstream as a whole, so that the memory consumption of the renderer can be reduced. Instead, only the data necessary to render the current view are accessed and decoded. The progressive inverse wavelet synthesis (PIWS) algorithm is proposed to provide this random data access and to reduce the calculation needed per data access request to a minimum. A mixed cache is used in PIWS, where entropy-decoded wavelet coefficients, intermediate lifting results, and fully synthesized pixels are all stored in the same memory units because of the in-place calculation property of the lifting implementation. PIWS operates with a finite state machine, where each memory unit carries a state indicating what type of content is currently stored. The computational saving achieved by PIWS is demonstrated with extensive experimental results.

  8. A Novel Implementation of Efficient Algorithms for Quantum Circuit Synthesis

    NASA Astrophysics Data System (ADS)

    Zeller, Luke

    In this project, we design and develop a computer program to effectively approximate arbitrary quantum gates using the discrete set of Clifford gates together with the T gate (π/8 gate). Employing recent results from Mosca et al. and from Giles and Selinger, we implement a decomposition scheme that outputs a sequence of Clifford, T, and T† gates that approximates the input to within a specified error range ɛ. Specifically, the given gate is first rounded to an element of Z[1/√2, i] with a precision determined by ɛ, and then exact synthesis is employed to produce the resulting gate. It is known that this procedure is optimal for approximating an arbitrary single-qubit gate. Our program, written in Matlab and Python, can perform both approximate and exact synthesis of single-qubit gates. It can be used to assist in the experimental implementation of an arbitrary fault-tolerant single-qubit gate for which direct implementation is not feasible.

  9. A Dynamic Compressive Gammachirp Auditory Filterbank

    PubMed Central

    Irino, Toshio; Patterson, Roy D.

    2008-01-01

    It is now common to use knowledge about human auditory processing in the development of audio signal processors. Until recently, however, such systems were limited by their linearity. The auditory filter system is known to be level-dependent, as evidenced by psychophysical data on masking, compression, and two-tone suppression. However, there were no analysis/synthesis schemes with nonlinear filterbanks. This paper describes such a scheme based on the compressive gammachirp (cGC) auditory filter. It was developed to extend the gammatone filter concept to accommodate the changes in psychophysical filter shape that are observed to occur with changes in stimulus level in simultaneous, tone-in-noise masking. In models of simultaneous noise masking, the temporal dynamics of the filtering can be ignored. Analysis/synthesis systems, however, are intended for use with speech sounds, where the glottal cycle can be long with respect to auditory time constants, and so they require specification of the temporal dynamics of the auditory filter. In this paper, we describe a fast-acting level control circuit for the cGC filter and show how psychophysical data involving two-tone suppression and compression can be used to estimate the parameter values for this dynamic version of the cGC filter (referred to as the “dcGC” filter). One important advantage of analysis/synthesis systems with a dcGC filterbank is that they can inherit previously refined signal processing algorithms developed with conventional short-time Fourier transforms (STFTs) and linear filterbanks. PMID:19330044

  10. Control algorithms and applications of the wavefront sensorless adaptive optics

    NASA Astrophysics Data System (ADS)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with a conventional adaptive optics (AO) system, a wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of WFSless AO, wavefront correction methods for WFSless AO systems are divided into two categories: model-free and model-based control algorithms. WFSless AO systems based on model-free control algorithms commonly treat the performance metric as a function of the control parameters and then use a suitable optimization algorithm to improve the metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. After briefly describing the above typical control algorithms, hybrid methods combining model-free with model-based control algorithms are summarized, and the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging of the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and extended objects.
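
    One widely used model-free control algorithm of the kind described above is stochastic parallel gradient descent (SPGD), sketched here with a system-supplied metric function (the gain and dither amplitude are illustrative values):

    ```python
    import numpy as np

    def spgd(n_act, metric, gain=0.3, amp=0.05, iters=200):
        """Stochastic parallel gradient descent for WFSless AO: perturb
        all actuators at once with random +/-amp dithers, measure the
        metric change, and step the control vector along the estimated
        ascent direction. `metric(u)` is the system-supplied performance
        metric (e.g., focal-spot sharpness), to be maximized."""
        rng = np.random.default_rng(1)
        u = np.zeros(n_act)                          # DM control signals
        for _ in range(iters):
            du = amp * rng.choice([-1.0, 1.0], n_act)
            dJ = metric(u + du) - metric(u - du)     # two-sided perturbation
            u += gain * dJ * du                      # parallel gradient step
        return u
    ```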

  11. Development and Validation of a Primary Care-Based Family Health History and Decision Support Program (MeTree)

    PubMed Central

    Orlando, Lori A.; Buchanan, Adam H.; Hahn, Susan E.; Christianson, Carol A.; Powell, Karen P.; Skinner, Celette Sugg; Chesnut, Blair; Blach, Colette; Due, Barbara; Ginsburg, Geoffrey S.; Henrich, Vincent C.

    2016-01-01

    INTRODUCTION Family health history is a strong predictor of disease risk. To reduce the morbidity and mortality of many chronic diseases, risk-stratified evidence-based guidelines strongly encourage the collection and synthesis of family health history to guide selection of primary prevention strategies. However, the collection and synthesis of such information is not well integrated into clinical practice. To address barriers to collection and use of family health histories, the Genomedical Connection developed and validated MeTree, a Web-based, patient-facing family health history collection and clinical decision support tool. MeTree is designed for integration into primary care practices as part of the genomic medicine model for primary care. METHODS We describe the guiding principles, operational characteristics, algorithm development, and coding used to develop MeTree. Validation was performed through stakeholder cognitive interviewing, a genetic counseling pilot program, and clinical practice pilot programs in 2 community-based primary care clinics. RESULTS Stakeholder feedback resulted in changes to MeTree’s interface and changes to the phrasing of clinical decision support documents. The pilot studies resulted in the identification and correction of coding errors and the reformatting of clinical decision support documents. MeTree’s strengths in comparison with other tools are its seamless integration into clinical practice and its provision of action-oriented recommendations guided by providers’ needs. LIMITATIONS The tool was validated in a small cohort. CONCLUSION MeTree can be integrated into primary care practices to help providers collect and synthesize family health history information from patients with the goal of improving adherence to risk-stratified evidence-based guidelines. PMID:24044145

  12. Integrating multisensor satellite data merging and image reconstruction in support of machine learning for better water quality management.

    PubMed

    Chang, Ni-Bin; Bai, Kaixu; Chen, Chi-Farn

    2017-10-01

    Monitoring water quality changes in lakes, reservoirs, estuaries, and coastal waters is critical in response to the needs of sustainable development. This study develops a remote sensing-based multiscale modeling system by integrating multi-sensor satellite data merging and image reconstruction algorithms in support of feature extraction with machine learning, leading to automated continuous water quality monitoring in environmentally sensitive regions. This new Earth observation platform, termed "cross-mission data merging and image reconstruction with machine learning" (CDMIM), is capable of merging imagery from multiple satellites to provide daily water quality monitoring through a series of image processing, enhancement, reconstruction, and data mining/machine learning techniques. Two existing key algorithms, including Spectral Information Adaptation and Synthesis Scheme (SIASS) and SMart Information Reconstruction (SMIR), are highlighted to support feature extraction and content-based mapping. Whereas SIASS can support various data merging efforts to merge images collected from cross-mission satellite sensors, SMIR can overcome data gaps by reconstructing the information of value-missing pixels due to impacts such as cloud obstruction. Practical implementation of CDMIM was assessed by predicting the water quality over seasons in terms of the concentrations of nutrients and chlorophyll-a, as well as water clarity in Lake Nicaragua, providing synergistic efforts to better monitor the aquatic environment and offer insightful lake watershed management strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.
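
    The abstract names SMIR only at a high level; as an illustrative stand-in (not the actual SMIR algorithm), the sketch below learns a clear-sky mapping from co-located predictor bands and infers cloud-obscured pixels with a scikit-learn regressor. Array names and shapes are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fill_cloud_gaps(target, predictors, cloud_mask):
    """Reconstruct value-missing pixels of `target` from co-located
    `predictors` (e.g., bands merged from other satellite sensors).

    target     : (H, W) array with invalid pixels under cloud_mask
    predictors : (H, W, B) array of cross-mission predictor bands
    cloud_mask : (H, W) boolean, True where target is missing
    """
    X = predictors.reshape(-1, predictors.shape[-1])
    y = target.ravel()
    ok = ~cloud_mask.ravel()
    model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
    model.fit(X[ok], y[ok])                 # learn the clear-sky mapping
    filled = target.copy().ravel()
    filled[~ok] = model.predict(X[~ok])     # infer the obscured pixels
    return filled.reshape(target.shape)
```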

  13. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If higher-order statistical stationarity is assumed, then the equivalent normal equations involve higher-order signal moments. In either case, the cross moments (second or higher order) are needed, which renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and substantially implemented in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and departs significantly from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
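
    The report's Lagrange-derived updates are not reproduced here; as background, the sketch below is the generic constant modulus algorithm (CMA) that such blind methods build on: it adapts FIR weights using only the constant-modulus property of the output, with no reference signal. All parameter values are illustrative:

```python
import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3):
    """Blind constant modulus algorithm (CMA): adapt FIR weights w so the
    output has approximately constant modulus, using only the received
    complex samples x (no training sequence)."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                                   # center-spike init
    R2 = np.mean(np.abs(x) ** 4) / np.mean(np.abs(x) ** 2)   # Godard radius
    y = np.zeros(len(x), dtype=complex)
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]              # tapped delay line
        y[n] = np.dot(w, u)
        e = y[n] * (np.abs(y[n]) ** 2 - R2)      # constant-modulus error term
        w -= mu * e * np.conj(u)                 # stochastic gradient step
    return y, w
```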

  14. a Real-Time Computer Music Synthesis System

    NASA Astrophysics Data System (ADS)

    Lent, Keith Henry

    A real-time sound synthesis system has been developed at the Computer Music Center of The University of Texas at Austin. This system consists of several stand-alone processors that were constructed jointly with White Instruments in Austin. These processors can be programmed as general-purpose computers, but are provided with a number of specialized interfaces including: MIDI, 8-bit parallel, high-speed serial, 2 channels of analog input (18-bit A/Ds, 48 kHz sample rate), and 4 channels of analog output (18-bit D/As). In addition, a basic music synthesis language (Music56000) has been written in assembly code. On top of this, a symbolic compiler (PatchWork) has been developed to enable algorithms which run in these processors to be created graphically. Finally, a number of efficient time-domain numerical models have been developed to enable the construction, simulation, control, and synthesis of many musical acoustics systems in real time on these processors. Specifically, assembly language models for cylindrical and conical horn sections, dissipative losses, tone holes, bells, and a number of linear and nonlinear boundary conditions have been developed.
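
    The Music56000 models themselves are in assembly and not shown here; as a flavor of the kind of efficient time-domain physical model described, below is the classic Karplus-Strong plucked-string sketch (a delay line with a lossy averaging filter in its feedback loop), which is illustrative only and not one of the system's horn or bell models:

```python
import numpy as np

def karplus_strong(freq, dur, fs=48000, decay=0.996):
    """Minimal time-domain plucked-string model: a delay line whose
    feedback path averages adjacent samples (lowpass + loss)."""
    N = int(fs / freq)                      # delay length sets the pitch
    buf = np.random.uniform(-1, 1, N)       # noise burst = the 'pluck'
    out = np.empty(int(fs * dur))
    for i in range(len(out)):
        out[i] = buf[i % N]
        buf[i % N] = decay * 0.5 * (buf[i % N] + buf[(i + 1) % N])
    return out

tone = karplus_strong(440.0, 1.0)           # 1 s of A4 at 48 kHz
```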

  15. Feasibility study of a synthesis procedure for array feeds to improve radiation performance of large distorted reflector antennas

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Smith, W. T.

    1990-01-01

    Surface errors on parabolic reflector antennas degrade the overall performance of the antenna. Space antenna structures are difficult to build, deploy, and control; they must maintain a nearly perfect parabolic shape in a harsh environment while remaining lightweight. Electromagnetic compensation for surface errors in large space reflector antennas, the topic of several research studies, can supplement mechanical compensation. Most of these studies try to correct the focal-plane fields of the reflector near the focal point and hence compensate for the distortions over the whole radiation pattern. An alternative approach to electromagnetic compensation is presented. The proposed technique uses pattern synthesis to compensate for the surface errors, with a localized algorithm in which pattern corrections are directed specifically toward portions of the pattern requiring improvement. The pattern synthesis technique does not require knowledge of the reflector surface; it uses radiation pattern data to perform the compensation.

  16. The chemical energy unit partial oxidation reactor operation simulation modeling

    NASA Astrophysics Data System (ADS)

    Mrakin, A. N.; Selivanov, A. A.; Batrakov, P. A.; Sotnikov, D. G.

    2018-01-01

    The paper presents a chemical energy unit scheme for producing synthesis gas, electricity, and heat that can be used both at on-site chemical industry facilities and under field conditions. A mathematical model of the gasification process in the partial oxidation reactor (POR) is described, and a flow diagram of the algorithm for determining reaction product composition and temperature is shown. Verification of the developed software showed good agreement with experimental values and with calculations from other programs: the relative discrepancy in temperature was 4 to 5%, while the absolute discrepancy in composition ranged from 1 to 3%. The synthesis gas composition was found to be practically independent of the enthalpy of the water vapour supplied to the POR and of the compressor air pressure ratio. Moreover, increasing the air consumption coefficient α from 0.7 to 0.9 was found to decrease the specific yield of the synthesis gas target components (carbon monoxide and hydrogen) by nearly a factor of 2, and the required ratio of the target components was obtained in the range of specific water vapour consumption from 5 to 6 kg/kg of fuel.

  17. Fast synthesis of topographic mask effects based on rigorous solutions

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; Deng, Zhijie; Shiely, James

    2007-10-01

    Topographic mask effects can no longer be ignored at technology nodes of 45 nm, 32 nm and beyond. As feature sizes become comparable to the mask topographic dimensions and the exposure wavelength, the popular thin mask model breaks down, because the mask transmission no longer follows the layout. A reliable mask transmission function has to be derived from the Maxwell equations. Unfortunately, rigorous solutions of the Maxwell equations are only manageable for limited field sizes, and impractical for full-chip optical proximity correction (OPC) due to the prohibitive runtime. Approximation algorithms are in demand to achieve a balance between acceptable computation time and tolerable errors. In this paper, a fast algorithm is proposed and demonstrated to model topographic mask effects for OPC applications. The ProGen Topographic Mask (POTOMAC) model synthesizes the mask transmission functions out of small-sized Maxwell solutions from a finite-difference time-domain (FDTD) engine, an industry-leading rigorous simulator of topographic mask effects from SOLID-E. The integrated framework presents a seamless solution to the end user. Preliminary results indicate the overhead introduced by POTOMAC is contained within the same order of magnitude in comparison to the thin mask approach.

  18. Computational Tools and Algorithms for Designing Customized Synthetic Genes

    PubMed Central

    Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris

    2014-01-01

    Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
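
    The singular objective named first above, codon usage optimization, reduces in its greediest form to substituting each amino acid with the host's most frequent codon. The sketch below uses an illustrative partial codon table (a real table covers all 20 amino acids for the chosen host), and is not taken from any tool in the review:

```python
# Most-frequent-codon lookup (illustrative subset, hypothetical host).
BEST_CODON = {"M": "ATG", "W": "TGG", "K": "AAA", "E": "GAA",
              "L": "CTG", "S": "AGC", "*": "TAA"}

def codon_optimize(protein, table=BEST_CODON):
    """Greedy 'one amino acid -> most frequent host codon' redesign,
    the singular objective used by first-generation tools."""
    return "".join(table[aa] for aa in protein)

print(codon_optimize("MEKWLS*"))  # ATGGAAAAATGGCTGAGCTAA
```

    Multi-objective tools must trade this greedy choice against restriction sites, secondary structure, and repeat avoidance, which is what makes the design space search hard.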

  19. Methods of harmonic synthesis for global geopotential models and their first-, second- and third-order gradients

    NASA Astrophysics Data System (ADS)

    Fantino, E.; Casotto, S.

    2009-07-01

    Four widely used algorithms for the computation of the Earth’s gravitational potential and its first-, second- and third-order gradients are examined: the traditional increasing degree recursion in associated Legendre functions and its variant based on the Clenshaw summation, plus the methods of Pines and Cunningham-Metris, which are free from the singularities that distinguish the first two methods at the geographic poles. All four methods are reorganized with the lumped coefficients approach, which in the cases of Pines and Cunningham-Metris requires a complete revision of the algorithms. The characteristics of the four methods are studied and described, and numerical tests are performed to assess and compare their precision, accuracy, and efficiency. In general the performance levels of all four codes exhibit large improvements over previously published versions. From the point of view of numerical precision, away from the geographic poles Clenshaw and Legendre offer an overall better quality. Furthermore, Pines and Cunningham-Metris are affected by an intrinsic loss of precision at the equator and suffer from additional deterioration when the gravity gradient components are rotated into the East-North-Up topocentric reference system.
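
    The first of the four methods, the increasing degree recursion, can be sketched compactly. The version below computes unnormalized associated Legendre functions; geodetic-grade implementations such as those compared in the paper use fully normalized functions instead, since unnormalized values overflow quickly with degree:

```python
import numpy as np

def assoc_legendre(nmax, x):
    """Unnormalized associated Legendre functions P[n, m](x), 0 <= m <= n,
    via the increasing-degree (column-wise) recursion."""
    P = np.zeros((nmax + 1, nmax + 1))
    s = np.sqrt(1.0 - x * x)          # sin(theta) for x = cos(theta)
    P[0, 0] = 1.0
    for m in range(1, nmax + 1):      # sectorial seed: P_mm = (2m-1)!! s^m
        P[m, m] = (2 * m - 1) * s * P[m - 1, m - 1]
    for m in range(nmax + 1):
        if m + 1 <= nmax:             # first off-diagonal term
            P[m + 1, m] = (2 * m + 1) * x * P[m, m]
        for n in range(m + 2, nmax + 1):   # increasing-degree recursion
            P[n, m] = ((2 * n - 1) * x * P[n - 1, m]
                       - (n + m - 1) * P[n - 2, m]) / (n - m)
    return P

P = assoc_legendre(4, np.cos(np.radians(40.0)))
```

    The 1 - x² factor in the seed is what degrades at the poles (s -> 0), which is exactly the singularity the Pines and Cunningham-Metris formulations avoid.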

  20. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  1. Range discrimination in ultrasonic vibrometry: theory and experiment.

    PubMed

    Martin, J S; Rogers, P H; Gray, M D

    2011-09-01

    A technique has been developed to demodulate periodic broadband ultrasonic interrogation signals that are returned from multiple scattering sites to simultaneously determine the low-frequency displacement time histories of each individual site. The technique employs a broadband periodic transmit signal. The motions of scattering sites are separately determined from the echoed receive signal by an algorithm involving comb filtering and pulse synthesis. This algorithm permits spatial resolution comparable to pulse-echo techniques and displacement sensitivities comparable to pure-tone techniques. A system based on this technique was used to image transient audio-frequency displacements on the order of 1-10 μm peak (≥ 50 nm/√Hz) that were produced by propagating shear waves in a tissue phantom. The system used concentric transmitting and receiving transducers and a carrier signal centered at 2.5 MHz with an 800 kHz bandwidth. The system was self-noise-limited and capable of detecting motions of strongly reflecting regions on the order of 1 nm/√Hz. System performance is limited by several factors including signal selection, component hardware, and ultrasonic propagation within the media of interest. © 2011 Acoustical Society of America

  2. The Of?p stars of the Magellanic Clouds: Are they strongly magnetic?

    NASA Astrophysics Data System (ADS)

    Munoz, M.; Wade, G. A.; Nazé, Y.; Bagnulo, S.; Puls, J.

    2018-01-01

    All known Galactic Of?p stars have been shown to host strong, organized magnetic fields. Recently, five Of?p stars have been discovered in the Magellanic Clouds. They possess photometric (Nazé et al., 2015) and spectroscopic (Walborn et al., 2015) variability compatible with the Oblique Rotator Model (ORM). However, their magnetic fields have yet to be directly detected. We have developed an algorithm allowing for the synthesis of photometric observables based on the Analytic Dynamical Magnetosphere (ADM) model by Owocki et al. (2016). We apply our model to OGLE photometry in order to constrain their magnetic geometries and surface dipole strengths. We predict that the field strengths for some of these candidate extra-Galactic magnetic stars may be within the detection limits of the FORS2 instrument.

  3. Quantitative gene expression analysis in Caenorhabditis elegans using single molecule RNA FISH.

    PubMed

    Bolková, Jitka; Lanctôt, Christian

    2016-04-01

    Advances in fluorescent probe design and synthesis have allowed the uniform in situ labeling of individual RNA molecules. In a technique referred to as single molecule RNA FISH (smRNA FISH), the labeled RNA molecules can be imaged as diffraction-limited spots and counted using image analysis algorithms. Single RNA counting has provided valuable insights into the process of gene regulation. This microscopy-based method has often revealed a high cell-to-cell variability in expression levels, which has in turn led to a growing interest in investigating the biological significance of gene expression noise. Here we describe the application of the smRNA FISH technique to samples of Caenorhabditis elegans, a well-characterized model organism. Copyright © 2015 Elsevier Inc. All rights reserved.
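
    The abstract does not name the spot-detection algorithm; a common choice for counting diffraction-limited smFISH spots is a Laplacian-of-Gaussian blob detector, sketched here with scikit-image (all parameter values are illustrative and would need tuning to the imaging setup):

```python
import numpy as np
from skimage.feature import blob_log

def count_mrna_spots(img, threshold=0.05):
    """Count diffraction-limited smFISH spots in a 2-D image using a
    Laplacian-of-Gaussian (LoG) blob detector."""
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalize to [0, 1]
    blobs = blob_log(img, min_sigma=1, max_sigma=3,
                     num_sigma=5, threshold=threshold)
    return len(blobs), blobs    # rows are (row, col, sigma)
```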

  4. Microfluidic Technology: Uncovering the Mechanisms of Nanocrystal Nucleation and Growth.

    PubMed

    Lignos, Ioannis; Maceiczyk, Richard; deMello, Andrew J

    2017-05-16

    The controlled and reproducible formation of colloidal semiconductor nanocrystals (or quantum dots) is of central importance in nanoscale science and technology. The tunable size- and shape-dependent properties of such materials make them ideal candidates for the development of efficient and low-cost displays, solar cells, light-emitting devices, and catalysts. The formidable difficulties associated with the macroscale preparation of semiconductor nanocrystals (possessing bespoke optical and chemical properties) result from the fact that underlying reaction mechanisms are complex and that the reactive environment is difficult to control. Automated microfluidic reactors coupled with monitoring systems and optimization algorithms aim to elucidate complex reaction mechanisms that govern both nucleation and growth of nanocrystals. Such platforms are ideally suited for the efficient optimization of reaction parameters, assuring the reproducible synthesis of nanocrystals with user-defined properties. This Account aims to inform the nanomaterials community about how microfluidic technologies can supplement flask experimentation for the ensemble investigation of formation mechanisms and design of semiconductor nanocrystals. We present selected studies outlining the preparation of quantum dots using microfluidic systems with integrated analytics. Such microfluidic reaction systems leverage the ability to extract real-time information regarding optical, structural, and compositional characteristics of quantum dots during nucleation and growth stages. The Account further highlights our recent research activities focused on the development and application of droplet-based microfluidics with integrated optical detection systems for the efficient and rapid screening of reaction conditions and a better understanding of the mechanisms of quantum dot synthesis. We describe the features and operation of fully automated microfluidic reactors and their subsequent application to high-throughput parametric screening of metal chalcogenides (CdSe, PbS, PbSe, CdSeTe), ternary and core/shell heavy metal-free quantum dots (CuInS2, CuInS2/ZnS), and all-inorganic perovskite nanocrystals (CsPbX3, X = Cl, Br, I) syntheses. Critically, concurrent absorption and photoluminescence measurements on millisecond to second time scales allow the extraction of basic parameters governing nanocrystal formation. Moreover, experimental data obtained from such microfluidic platforms can be directly supported by theoretical models of nucleation and growth. To this end, we also describe the use of metamodeling algorithms able to accurately predict optimized conditions of CdSe synthesis using a minimal number of sample parameters. Importantly, we discuss future challenges that must be addressed before microfluidic technologies are in a position to be widely adopted for the on-demand formation of nanocrystals. From a technology perspective, these challenges include the development of novel engineering platforms for the formation of complex architectures, the integration of monitoring systems able to harvest photophysical and structural information, the incorporation of continuous purification systems, and the application of optimization algorithms to multicomponent quantum dot systems.

  5. F100 Multivariable Control Synthesis Program. Computer Implementation of the F100 Multivariable Control Algorithm

    NASA Technical Reports Server (NTRS)

    Soeder, J. F.

    1983-01-01

    As turbofan engines become more complex, the development of controls necessitates the use of multivariable control techniques. A control developed for the F100-PW-100(3) turbofan engine by using linear quadratic regulator theory and other modern multivariable control synthesis techniques is described. The assembly language implementation of this control on an SEL 810B minicomputer is described. This implementation was then evaluated by using a real-time hybrid simulation of the engine. The control software was modified to run with a real engine. These modifications, in the form of sensor and actuator failure checks and control executive sequencing, are discussed. Finally, recommendations for control software implementations are presented.
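
    The original control ran in assembly on the SEL 810B; as a modern sketch of the linear quadratic regulator computation such a design rests on, the snippet below solves the continuous algebraic Riccati equation with SciPy for a toy plant standing in for linearized engine dynamics (the plant matrices are hypothetical, not the F100 model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: minimize the integral of x'Qx + u'Ru subject
    to xdot = Ax + Bu; returns the state-feedback gain K for u = -Kx."""
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati solution
    return np.linalg.solve(R, B.T @ P)     # K = R^{-1} B' P

# Toy double-integrator plant (illustrative only).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```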

  6. A prototype computerized synthesis methodology for generic space access vehicle (SAV) conceptual design

    NASA Astrophysics Data System (ADS)

    Huang, Xiao

    2006-04-01

    Today's and especially tomorrow's competitive launch vehicle design environment requires the development of a dedicated generic Space Access Vehicle (SAV) design methodology. A total of 115 industrial, research, and academic aircraft, helicopter, missile, and launch vehicle design synthesis methodologies have been evaluated. As the survey indicates, each synthesis methodology tends to focus on a specific flight vehicle configuration, thus precluding the key capability to systematically compare flight vehicle design alternatives. The aim of the research investigation is to provide decision-making bodies and the practicing engineer a design process and tool box for robust modeling and simulation of flight vehicles where the ultimate performance characteristics may hinge on numerical subtleties. This will enable the designer of a SAV for the first time to consistently compare different classes of SAV configurations on an impartial basis. This dissertation presents the development steps required towards a generic (configuration independent) hands-on flight vehicle conceptual design synthesis methodology. This process is developed such that it can be applied to any flight vehicle class if desired. In the present context, the methodology has been put into operation for the conceptual design of a tourist Space Access Vehicle. The case study illustrates elements of the design methodology & algorithm for the class of Horizontal Takeoff and Horizontal Landing (HTHL) SAVs. The HTHL SAV design application clearly outlines how the conceptual design process can be centrally organized, executed and documented with focus on design transparency, physical understanding and the capability to reproduce results. This approach offers the project lead and creative design team a management process and tool which iteratively refines the individual design logic chosen, leading to mature design methods and algorithms. As illustrated, the HTHL SAV hands-on design methodology offers growth potential in that the same methodology can be continually updated and extended to other SAV configuration concepts, such as the Vertical Takeoff and Vertical Landing (VTVL) SAV class. Having developed, validated and calibrated the methodology for HTHL designs in the 'hands-on' mode, the report provides an outlook how the methodology will be integrated into a prototype computerized design synthesis software AVDS-PrADOSAV in a follow-on step.

  7. Facial Age Synthesis Using Sparse Partial Least Squares (The Case of Ben Needham).

    PubMed

    Bukar, Ali M; Ugail, Hassan

    2017-09-01

    Automatic facial age progression (AFAP) has been an active area of research in recent years. This is due to its numerous applications, which include searching for missing persons. This study presents a new method of AFAP. Here, we use an active appearance model (AAM) to extract facial features from available images. An aging function is then modelled using sparse partial least squares regression (sPLS). Thereafter, the aging function is used to render new faces at different ages. To test the accuracy of our algorithm, extensive evaluation is conducted using a database of 500 face images with known ages. Furthermore, the algorithm is used to progress Ben Needham's facial image, taken when he was 21 months old, to the ages of 6, 14, and 22 years. The algorithm presented in this study could potentially be used to enhance the search for missing people worldwide. © 2017 American Academy of Forensic Sciences.
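
    A rough sketch of the aging-function step follows, using scikit-learn's standard PLS as a stand-in for sparse PLS (sPLS adds an L1 penalty on the loadings, which scikit-learn does not implement). The data shapes and names are hypothetical, and the AAM features are assumed precomputed:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: rows are faces, columns are AAM shape/texture
# parameters; `ages` holds the known age of each face.
rng = np.random.default_rng(0)
aam_features = rng.normal(size=(500, 40))
ages = rng.uniform(1, 60, size=500)

# Standard PLS standing in for sparse PLS.
aging_model = PLSRegression(n_components=8)
aging_model.fit(aam_features, ages)

# Rendering a progressed face would invert this mapping (solve for
# feature changes consistent with a target age); here we just predict.
predicted_ages = aging_model.predict(aam_features[:5]).ravel()
```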

  8. Exploration of depth modeling mode one lossless wedgelets storage strategies for 3D-high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan

    2018-01-01

    The 3D-high efficiency video coding has introduced tools to obtain higher efficiency in 3-D video coding, and most of them are related to the depth maps coding. Among these tools, the depth modeling mode-1 (DMM-1) focuses on better encoding edges regions of depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in the DMM-1 hardware design of both encoder and decoder since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements and a hardware design targeting the most efficient among these algorithms are presented. Experimental results demonstrate that the proposed solutions surpass related works reducing up to 78.8% of the wedgelet memory, without degrading the encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces almost 75% of the power dissipation when compared to the standard approach.

  9. Three list scheduling temporal partitioning algorithm of time space characteristic analysis and compare for dynamic reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Chen, Naijin

    2013-03-01

    A Level Based Partitioning (LBP) algorithm, a Cluster Based Partitioning (CBP) algorithm, and an Enhanced Static List (ESL) temporal partitioning algorithm, based on adjacency-matrix and adjacency-table representations, are designed and implemented in this paper. Partitioning time and memory occupation of the three algorithms are also compared. Experimental results show that the LBP algorithm has the shortest partitioning time and better parallelism; with respect to memory occupation and partitioning time, the algorithms based on adjacency tables need less partitioning time and less memory.
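
    The core idea behind level-based partitioning can be sketched as assigning each node of the task DAG to a topological level; nodes in one level have no dependencies among themselves, so they can be configured together. This is an illustration of the level-assignment step only; the paper's algorithms must also respect device capacity, which this sketch ignores:

```python
from collections import defaultdict, deque

def level_based_partition(edges, num_nodes):
    """Assign each DAG node to a level: level(v) = 1 + max level of its
    predecessors, computed with a Kahn-style topological sweep."""
    indeg = [0] * num_nodes
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    level = [0] * num_nodes
    q = deque(v for v in range(num_nodes) if indeg[v] == 0)
    while q:
        u = q.popleft()
        for v in succ[u]:
            level[v] = max(level[v], level[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    parts = defaultdict(list)
    for v, l in enumerate(level):
        parts[l].append(v)
    return dict(parts)

print(level_based_partition([(0, 2), (1, 2), (2, 3)], 4))
# {0: [0, 1], 1: [2], 2: [3]}
```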

  10. CARA: Cognitive Architecture for Reasoning About Adversaries

    DTIC Science & Technology

    2012-01-20

    ...synthesis approach taken here the KIDS principle (Keep It Descriptive, Stupid) applies, and agents and organizations are profiled in great detail... developed two algorithms to make forecasts about adversarial behavior. We developed game-theoretical approaches to reason about group behavior. We... to automatically make forecasts about group behavior together with methods to quantify the uncertainty inherent in such forecasts; • Developed...

  11. Design of radar receivers

    NASA Astrophysics Data System (ADS)

    Sokolov, M. A.

    This handbook treats the design and analysis of pulsed radar receivers, with emphasis on elements (especially IC elements) that implement optimal and suboptimal algorithms. The design methodology is developed from the viewpoint of statistical communications theory. Particular consideration is given to the synthesis of single-channel and multichannel detectors, the design of analog and digital signal-processing devices, and the analysis of IF amplifiers.

  12. Controlling the Internal Heat Transfer Coefficient by the Characteristics of External Flows

    NASA Astrophysics Data System (ADS)

    Zhuromskii, V. M.

    2018-01-01

    The engineering and physical fundamentals of substance synthesis in a boiling apparatus are presented. We have modeled a system that automatically stabilizes the maximum internal heat transfer coefficient in such an apparatus through the characteristics of the external flows, on the basis of adaptive seeking algorithms. Results of the system's operation in the shop are presented.
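
    The abstract does not specify the seeking algorithm; a minimal perturb-and-observe extremum-seeking loop, with a hypothetical steady-state plant, conveys the idea of stabilizing at the maximum of a measured quantity:

```python
def seek_maximum(measure, u0=0.0, step=0.05, iters=200):
    """Perturb-and-observe seeking loop: nudge the control input in the
    direction that increased the measured coefficient; reverse on decrease."""
    u, prev = u0, measure(u0)
    direction = 1.0
    for _ in range(iters):
        u += direction * step
        cur = measure(u)
        if cur < prev:              # got worse: reverse search direction
            direction = -direction
        prev = cur
    return u

# Hypothetical plant: heat transfer coefficient peaking at u = 0.6.
alpha = lambda u: 1.0 - (u - 0.6) ** 2
print(seek_maximum(alpha))          # hovers near 0.6
```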

  13. Automated Synthesis of Architecture of Avionic Systems

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Xu, Joseph; Dang, Van; Lu, James F.

    2006-01-01

    The Architecture Synthesis Tool (AST) is software that automatically synthesizes software and hardware architectures of avionic systems. The AST is expected to be most helpful during initial formulation of an avionic-system design, when system requirements change frequently and manual modification of the architecture is time-consuming and susceptible to error. The AST comprises two parts: (1) an architecture generator, which utilizes a genetic algorithm to create a multitude of architectures; and (2) a functionality evaluator, which analyzes the architectures for viability, rejecting most of the non-viable ones. The functionality evaluator generates and uses a viability tree, a hierarchy representing functions and the components that perform them, such that the system as a whole performs system-level functions representing the requirements for the system as specified by a user. Architectures that survive the functionality evaluator are further evaluated by the selection process of the genetic algorithm. Architectures found to be most promising to satisfy the user's requirements and to perform optimally are selected as parents of the next generation of architectures. The foregoing process is iterated as many times as the user desires. The final output is one or a few viable architectures that satisfy the user's requirements.
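
    The AST internals are not published in this record; the skeleton below mirrors only the described two-part structure (generate with a GA, filter with a viability check) on a generic bit-string encoding. The encoding, viability rule, and fitness are all hypothetical:

```python
import random

def genetic_search(viable, fitness, n_bits=16, pop_size=40, gens=50):
    """Generate-and-filter GA skeleton: candidates are bit strings;
    `viable` plays the functionality evaluator's role, rejecting
    non-viable candidates before fitness-based selection."""
    rand = lambda: [random.randint(0, 1) for _ in range(n_bits)]
    pop = [rand() for _ in range(pop_size)]
    for _ in range(gens):
        alive = [c for c in pop if viable(c)]
        if len(alive) < 2:                      # restart if filter kills all
            alive = [rand() for _ in range(pop_size)]
        alive.sort(key=fitness, reverse=True)
        parents = alive[: max(2, len(alive) // 2)]
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # point mutation
                i = random.randrange(n_bits)
                child[i] ^= 1
            pop.append(child)
    survivors = [c for c in pop if viable(c)]
    return max(survivors, key=fitness) if survivors else None

# Toy run: viability = at least one subsystem present, fitness = ones count.
best = genetic_search(viable=lambda c: sum(c) > 0, fitness=sum)
```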

  14. Evaluation of Demons- and FEM-Based Registration Algorithms for Lung Cancer.

    PubMed

    Yang, Juan; Li, Dengwang; Yin, Yong; Zhao, Fen; Wang, Hongjun

    2016-04-01

    We evaluated and compared the accuracy of 2 deformable image registration algorithms in 4-dimensional computed tomography images for patients with lung cancer. Ten patients with non-small cell lung cancer or small cell lung cancer were enrolled in this institutional review board-approved study. The displacement vector fields relative to a specific reference image were calculated by using the diffeomorphic demons (DD) algorithm and the finite element method (FEM)-based algorithm. The registration accuracy was evaluated by using normalized mutual information (NMI), the sum of squared intensity differences (SSD), modified Hausdorff distance (dH_M), and the ratio of gross tumor volume (rGTV) difference between the reference image and the deformed phase image. We also compared the registration speed of the 2 algorithms. For all patients, the FEM-based algorithm aligned the 2 images more accurately than the DD algorithm. The means (±standard deviation) of NMI were 0.86 (±0.05) and 0.90 (±0.05) using the DD algorithm and the FEM-based algorithm, respectively. The means of SSD were 0.006 (±0.003) and 0.003 (±0.002) using the DD algorithm and the FEM-based algorithm, respectively. The means of dH_M were 0.04 (±0.02) and 0.03 (±0.03) using the DD algorithm and the FEM-based algorithm, respectively. The means of rGTV were 3.9% (±1.01%) and 2.9% (±1.1%) using the DD algorithm and the FEM-based algorithm, respectively. However, the FEM-based algorithm was slower, with an average running time of 31.4 minutes versus 21.9 minutes for the DD algorithm over all patients. The preliminary results showed that the FEM-based algorithm was more accurate than the DD algorithm, at the expense of registration speed. © The Author(s) 2015.
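
    Of the metrics above, NMI is simple to compute from a joint intensity histogram. A minimal sketch of one common definition, NMI = (H(A) + H(B)) / H(A, B), assuming `a` and `b` are intensity arrays of equal shape (not the authors' exact implementation):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI from a joint intensity histogram; 2.0 for identical images,
    approaching 1.0 for statistically independent ones."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))  # Shannon entropy
    return (h(px) + h(py)) / h(pxy)
```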

  15. SPHINX--an algorithm for taxonomic binning of metagenomic sequences.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S

    2011-01-01

    Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms is significantly higher. However, being alignment-based, the latter class of algorithms requires an enormous amount of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.
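
    The abstract does not detail SPHINX's composition phase; generically, composition-based binners assign reads by oligonucleotide signatures. A sketch of a tetranucleotide frequency vector plus nearest-centroid assignment (illustrative, not SPHINX's actual scheme):

```python
from itertools import product
import numpy as np

KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=4))}

def tetra_freq(seq):
    """Tetranucleotide frequency vector: the kind of composition
    signature used by composition-based binners."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - 3):
        k = seq[i:i + 4]
        if k in KMERS:               # skip k-mers with ambiguous bases
            v[KMERS[k]] += 1
    return v / max(v.sum(), 1)

def assign_bin(read, centroids):
    """Nearest-centroid taxonomic assignment in composition space;
    `centroids` maps taxon name -> mean signature vector."""
    f = tetra_freq(read)
    return min(centroids, key=lambda t: np.linalg.norm(f - centroids[t]))
```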

  16. Understanding radio polarimetry. V. Making matrix self-calibration work: processing of a simulated observation

    NASA Astrophysics Data System (ADS)

    Hamaker, J. P.

    2006-09-01

    Context: This is Paper V in a series on polarimetric aperture synthesis based on the algebra of 2×2 matrices. Aims: It validates the matrix self-calibration theory of the preceding Paper IV and outlines the algorithmic methods that had to be developed for its application. Methods: New avenues of polarimetric self-calibration opened up in Paper IV are explored by processing a simulated observation. To focus on the polarimetric issues, it is set up so as to sidestep some of the common complications of aperture synthesis, yet properly represent physical conditions. In addition to a representative collection of observing errors, the simulated instrument includes strongly varying Faraday rotation and antennas with unequal feeds. The selfcal procedure is described in detail, including aspects in which it differs from the scalar case, and its effects are demonstrated with a number of intermediate image results. Results: The simulation's outcome is in full agreement with the theory. The nonlinear matrix equations for instrumental parameters are readily solved by iteration; a convergence problem is easily remedied with a new ancillary algorithm. Instrumental effects are cleanly separated from source properties without reference to changes in parallactic rotation during the observation. Polarimetric images of high purity and dynamic range result. As theory predicts, polarimetric errors that are common to all sources inevitably remain; prior knowledge of the statistics of linear and circular polarization in a typical observed field can be applied to eliminate most of them. Conclusions: The paper conclusively demonstrates that matrix selfcal per se is a viable method that may foster substantial advancement in the art of radio polarimetry. For its application in real observations, a number of issues must be resolved that matrix selfcal has in common with its scalar sibling, such as the treatment of extended sources and the familiar sampling and aliasing problems. The close analogy between scalar interferometry and its matrix-based generalisation suggests that one may apply well-developed methods of scalar interferometry. Marrying these methods to those of this paper will require a significant investment in new software. Two such developments are known to be foreseen or underway.

  17. Network design and analysis for multi-enzyme biocatalysis.

    PubMed

    Blaß, Lisa Katharina; Weyler, Christian; Heinzle, Elmar

    2017-08-10

    As more and more biological reaction data become available, the full exploration of the enzymatic potential for the synthesis of valuable products opens up exciting new opportunities but is becoming increasingly complex. The manual design of multi-step biosynthesis routes involving enzymes from different organisms is very challenging. To harness the full enzymatic potential, we developed a computational tool for the directed design of biosynthetic production pathways for multi-step catalysis with in vitro enzyme cascades, cell hydrolysates and permeabilized cells. We present a method which encompasses the reconstruction of a genome-scale pan-organism metabolic network, path-finding and the ranking of the resulting pathway candidates for proposing suitable synthesis pathways. The network is based on reaction and reaction pair data from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the thermodynamics calculator eQuilibrator. The pan-organism network is especially useful for finding the most suitable pathway to a target metabolite from a thermodynamic or economic standpoint. However, our method can be used with any network reconstruction, e.g. for a specific organism. We implemented a path-finding algorithm based on a mixed-integer linear program (MILP) which takes into account both topology and stoichiometry of the underlying network. Unlike other methods we do not specify a single starting metabolite, but our algorithm searches for pathways starting from arbitrary start metabolites to a target product of interest. Using a set of biochemical ranking criteria including pathway length, thermodynamics and other biological characteristics such as number of heterologous enzymes or cofactor requirement, it is possible to obtain well-designed meaningful pathway alternatives. In addition, a thermodynamic profile, the overall reactant balance and potential side reactions as well as an SBML file for visualization are generated for each pathway alternative. We present an in silico tool for the design of multi-enzyme biosynthetic production pathways starting from a pan-organism network. The method is highly customizable and each module can be adapted to the focus of the project at hand. This method is directly applicable for (i) in vitro enzyme cascades, (ii) cell hydrolysates and (iii) permeabilized cells.

  18. On an algorithmic definition for the components of the minimal cell.

    PubMed

    Martínez, Octavio; Reyes-Valdés, M Humberto

    2018-01-01

    Living cells are highly complex systems comprising a multitude of elements that are engaged in the many convoluted processes observed during the cell cycle. However, not all elements and processes are essential for cell survival and reproduction under steady-state environmental conditions. To distinguish essential from expendable cell components and thus define the 'minimal cell' and the corresponding 'minimal genome', we postulate that the synthesis of all cell elements can be represented as a finite set of binary operators, and within this framework we show that cell elements that depend on their previous existence to be synthesized are those that are essential for cell survival. An algorithm to distinguish essential cell elements is presented and demonstrated within an interactome. Data and functions implementing the algorithm are given as supporting information. We expect that this algorithmic approach will lead to the determination of the complete interactome of the minimal cell, which could then be experimentally validated. The assumptions behind this hypothesis as well as its consequences for experimental and theoretical biology are discussed.
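
    One way to operationalize the stated criterion is as a graph property: in a synthesis-dependency digraph, an element that is (directly or transitively) required for its own synthesis is flagged essential. The sketch below is an interpretation of that idea on a toy interactome, not the authors' published code:

```python
def essential_elements(requires):
    """`requires[x]` lists the elements needed to synthesize x. Flag x
    essential when x is reachable from its own requirements, i.e., it
    depends on its previous existence to be synthesized."""
    def reachable(start):
        seen, stack = set(), list(requires.get(start, []))
        while stack:
            e = stack.pop()
            if e not in seen:
                seen.add(e)
                stack.extend(requires.get(e, []))
        return seen
    return {x for x in requires if x in reachable(x)}

# Toy interactome: ribosomes make ribosomal proteins, which are needed
# to make ribosomes -> both come out essential.
toy = {"ribosome": ["rprotein", "rRNA"], "rprotein": ["ribosome"],
       "rRNA": ["polymerase"], "metabolite": ["enzyme"]}
print(essential_elements(toy))   # {'ribosome', 'rprotein'}
```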

  19. Noise analysis of genome-scale protein synthesis using a discrete computational model of translation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch; Swiss Institute of Bioinformatics

    2015-07-28

    Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
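
    The paper's model tracks discrete ribosome movement; as a drastically reduced illustration of how stochastic simulation quantifies synthesis noise, the sketch below runs a Gillespie simulation of a plain birth-death protein model and reports the Fano factor. All rates are illustrative; this is not the authors' algorithm:

```python
import numpy as np

def gillespie_protein(k_syn=10.0, k_deg=1.0, t_end=5000.0, seed=1):
    """Gillespie SSA for a birth-death protein model: synthesis at rate
    k_syn, degradation at rate k_deg * n. Event-sampled statistics
    (ignoring dwell-time weighting), adequate for a sketch."""
    rng = np.random.default_rng(seed)
    t, n, samples = 0.0, 0, []
    while t < t_end:
        rates = np.array([k_syn, k_deg * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # time to next event
        if rng.random() < rates[0] / total:
            n += 1                              # synthesis event
        else:
            n -= 1                              # degradation event
        samples.append(n)
    x = np.array(samples[len(samples) // 2:])   # discard transient
    return x.mean(), x.var() / x.mean()         # mean and Fano factor

mean, fano = gillespie_protein()  # birth-death gives Fano ~ 1 (Poisson)
```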

  20. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597
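
    For contrast with the block-based method, the classic pixel-based two-pass labeling it improves upon can be sketched with union-find; the block-based decision-tree algorithm reduces precisely the per-pixel neighborhood reads this version performs. A minimal 4-connectivity sketch (the paper's scheme targets 8-connectivity blocks):

```python
import numpy as np

def two_pass_label(img):
    """Classic two-pass connected-component labeling (4-connectivity)
    with a union-find equivalence table."""
    labels = np.zeros(img.shape, dtype=int)
    parent = [0]                               # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    nxt = 1
    H, W = img.shape
    for r in range(H):                         # first pass: provisional labels
        for c in range(W):
            if not img[r, c]:
                continue
            up = labels[r - 1, c] if r else 0
            left = labels[r, c - 1] if c else 0
            if up and left:
                labels[r, c] = find(up)
                parent[find(left)] = find(up)  # record equivalence
            elif up or left:
                labels[r, c] = find(up or left)
            else:
                labels[r, c] = nxt
                parent.append(nxt)
                nxt += 1
    for r in range(H):                         # second pass: resolve labels
        for c in range(W):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```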

  1. Glucose Synthesis in a Protein-Based Artificial Photosynthesis System.

    PubMed

    Lu, Hao; Yuan, Wenqiao; Zhou, Jack; Chong, Parkson Lee-Gau

    2015-09-01

    The objective of this study was to understand glucose synthesis of a protein-based artificial photosynthesis system as affected by operating conditions, including the concentrations of reactants, reaction temperature, and illumination. Results from non-vesicle-based glyceraldehyde-3-phosphate (GAP) and glucose synthesis showed that the initial concentrations of ribulose-1,5-bisphosphate (RuBP) and adenosine triphosphate (ATP), lighting source, and temperature significantly affected glucose synthesis. Higher initial concentrations of RuBP and ATP significantly enhanced GAP synthesis, which was linearly correlated to glucose synthesis, confirming the proper functions of all catalyzing enzymes in the system. White fluorescent light inhibited artificial photosynthesis and reduced glucose synthesis by 79.2% compared to synthesis in the dark. The reaction temperature of 40 °C was optimal, whereas lower or higher temperatures reduced glucose synthesis. Glucose synthesis in the vesicle-based artificial photosynthesis system reconstituted with bacteriorhodopsin, F0F1 ATP synthase, and polydimethylsiloxane-methyloxazoline-polydimethylsiloxane triblock copolymer was successfully demonstrated. This system efficiently utilized light-induced ATP to drive glucose synthesis, and 5.2 μg/ml glucose was synthesized in 0.78 ml of reaction buffer in 7 h. Light-dependent reactions were found to be the bottleneck of the studied artificial photosynthesis system.

  2. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multimedia, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis, and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  3. Efficient Lookup Table-Based Adaptive Baseband Predistortion Architecture for Memoryless Nonlinearity

    NASA Astrophysics Data System (ADS)

    Ba, Seydou N.; Waheed, Khurram; Zhou, G. Tong

    2010-12-01

    Digital predistortion is an effective means to compensate for the nonlinear effects of a memoryless system. In the case of a cellular transmitter, a digital baseband predistorter can mitigate the undesirable nonlinear effects along the signal chain, particularly the nonlinear impairments in the radiofrequency (RF) amplifiers. To be practically feasible, the implementation complexity of the predistorter must be minimized so that it becomes a cost-effective solution for the resource-limited wireless handset. This paper proposes optimizations that facilitate the design of a low-cost high-performance adaptive digital baseband predistorter for memoryless systems. A comparative performance analysis of the amplitude and power lookup table (LUT) indexing schemes is presented. An optimized low-complexity amplitude approximation and its hardware synthesis results are also studied. An efficient LUT predistorter training algorithm that combines the fast convergence speed of normalized least mean squares (NLMS) with a small hardware footprint is proposed. Results of fixed-point simulations based on the measured nonlinear characteristics of an RF amplifier are presented.
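
    As a rough illustration of an amplitude-indexed LUT predistorter with an NLMS-style update (a simplification of the trainer described above; the PA model and all parameters are hypothetical):

```python
import numpy as np

def train_lut_predistorter(x, pa, n_bins=64, mu=0.3, g_lin=1.0):
    """Amplitude-indexed LUT predistorter trained sample-by-sample with a
    normalized LMS-style update, driving pa(F[i]*x) toward the linear
    target g_lin * x. `pa` is a memoryless nonlinearity."""
    F = np.ones(n_bins, dtype=complex)           # complex LUT gain entries
    amax = np.abs(x).max()
    for xn in x:
        i = min(int(np.abs(xn) / amax * n_bins), n_bins - 1)
        zn = pa(F[i] * xn)                       # amplifier output
        e = g_lin * xn - zn                      # linearization error
        F[i] += mu * e * np.conj(xn) / (np.abs(xn) ** 2 + 1e-9)
    return F

# Hypothetical memoryless PA: mild cubic compression.
pa = lambda v: v - 0.1 * v * np.abs(v) ** 2
rng = np.random.default_rng(0)
x = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) * 0.3
F = train_lut_predistorter(x, pa)
```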

  4. Antenna pattern control using impedance surfaces

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Liu, Kefeng

    1992-01-01

    During this research period, we have effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserves the accuracy of the numerical computations while giving a much better turnaround time than the CRAY supercomputer. This task relieved us of the heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.

  5. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    NASA Astrophysics Data System (ADS)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification; however, its results are significantly influenced by its parameters. Therefore, this paper proposes an improvement of the SVM algorithm that finds the best SVM parameters automatically. The proposed algorithm employs a GE algorithm as a global optimizer to determine the SVM parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified on benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
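
    The gradient evolution update rules are not detailed in this record; the sketch below substitutes a generic evolutionary search over (C, γ) around scikit-learn's SVC, purely to show the optimizer-wraps-SVM structure (population size, mutation scale, and search bounds are all illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evolve_svm_params(X, y, pop=12, gens=10, seed=0):
    """Evolutionary search over log10(C) and log10(gamma): keep the top
    third each generation and mutate copies of them, scoring candidates
    by cross-validated accuracy."""
    rng = np.random.default_rng(seed)
    P = rng.uniform([-2, -4], [3, 1], size=(pop, 2))   # log10 C, log10 gamma
    fit = lambda p: cross_val_score(
        SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()
    for _ in range(gens):
        scores = np.array([fit(p) for p in P])
        elite = P[np.argsort(scores)[-pop // 3:]]       # keep the best
        mutants = (elite[rng.integers(len(elite), size=pop - len(elite))]
                   + rng.normal(0, 0.3, size=(pop - len(elite), 2)))
        P = np.vstack([elite, mutants])
    best = max(P, key=fit)
    return 10 ** best[0], 10 ** best[1]

X, y = load_iris(return_X_y=True)
C, gamma = evolve_svm_params(X, y)
```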

  6. Synthesis of a mixed-valent tin nitride and considerations of its possible crystal structures

    DOE PAGES

    Caskey, Christopher M.; Holder, Aaron; Shulda, Sarah; ...

    2016-04-12

    Recent advances in theoretical structure prediction methods and high-throughput computational techniques are revolutionizing experimental discovery of the thermodynamically stable inorganic materials. Metastable materials represent a new frontier for these studies, since even simple binary non-ground state compounds of common elements may be awaiting discovery. However, there are significant research challenges related to non-equilibrium thin film synthesis and crystal structure predictions, such as small strained crystals in the experimental samples and energy minimization based theoretical algorithms. Here, we report on experimental synthesis and characterization, as well as theoretical first-principles calculations of a previously unreported mixed-valent binary tin nitride. Thin film experiments indicate that this novel material is N-deficient SnN with tin in the mixed ii/iv valence state and a small low-symmetry unit cell. Theoretical calculations suggest that the most likely crystal structure has the space group 2 (SG2) related to the distorted delafossite (SG166), which is nearly 0.1 eV/atom above the ground state SnN polymorph. Furthermore, this observation is rationalized by the structural similarity of the SnN distorted delafossite to the chemically related Sn 3N 4 spinel compound, which provides a fresh scientific insight into the reasons for growth of polymorphs of metastable materials. In addition to reporting on the discovery of the simple binary SnN compound, this paper illustrates a possible way of combining a wide range of advanced characterization techniques with the first-principle property calculation methods, to elucidate the most likely crystal structure of the previously unreported metastable materials.

  7. Automatic Synthesis of UML Designs from Requirements in an Iterative Process

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".

  8. Synthesis of a mixed-valent tin nitride and considerations of its possible crystal structures

    NASA Astrophysics Data System (ADS)

    Caskey, Christopher M.; Holder, Aaron; Shulda, Sarah; Christensen, Steven T.; Diercks, David; Schwartz, Craig P.; Biagioni, David; Nordlund, Dennis; Kukliansky, Alon; Natan, Amir; Prendergast, David; Orvananos, Bernardo; Sun, Wenhao; Zhang, Xiuwen; Ceder, Gerbrand; Ginley, David S.; Tumas, William; Perkins, John D.; Stevanovic, Vladan; Pylypenko, Svitlana; Lany, Stephan; Richards, Ryan M.; Zakutayev, Andriy

    2016-04-01

    Recent advances in theoretical structure prediction methods and high-throughput computational techniques are revolutionizing experimental discovery of the thermodynamically stable inorganic materials. Metastable materials represent a new frontier for these studies, since even simple binary non-ground state compounds of common elements may be awaiting discovery. However, there are significant research challenges related to non-equilibrium thin film synthesis and crystal structure predictions, such as small strained crystals in the experimental samples and energy minimization based theoretical algorithms. Here, we report on experimental synthesis and characterization, as well as theoretical first-principles calculations of a previously unreported mixed-valent binary tin nitride. Thin film experiments indicate that this novel material is N-deficient SnN with tin in the mixed ii/iv valence state and a small low-symmetry unit cell. Theoretical calculations suggest that the most likely crystal structure has the space group 2 (SG2) related to the distorted delafossite (SG166), which is nearly 0.1 eV/atom above the ground state SnN polymorph. This observation is rationalized by the structural similarity of the SnN distorted delafossite to the chemically related Sn3N4 spinel compound, which provides a fresh scientific insight into the reasons for growth of polymorphs of metastable materials. In addition to reporting on the discovery of the simple binary SnN compound, this paper illustrates a possible way of combining a wide range of advanced characterization techniques with the first-principle property calculation methods, to elucidate the most likely crystal structure of the previously unreported metastable materials.

  9. Synthesis of a mixed-valent tin nitride and considerations of its possible crystal structures.

    PubMed

    Caskey, Christopher M; Holder, Aaron; Shulda, Sarah; Christensen, Steven T; Diercks, David; Schwartz, Craig P; Biagioni, David; Nordlund, Dennis; Kukliansky, Alon; Natan, Amir; Prendergast, David; Orvananos, Bernardo; Sun, Wenhao; Zhang, Xiuwen; Ceder, Gerbrand; Ginley, David S; Tumas, William; Perkins, John D; Stevanovic, Vladan; Pylypenko, Svitlana; Lany, Stephan; Richards, Ryan M; Zakutayev, Andriy

    2016-04-14

    Recent advances in theoretical structure prediction methods and high-throughput computational techniques are revolutionizing the experimental discovery of thermodynamically stable inorganic materials. Metastable materials represent a new frontier for these studies, since even simple binary non-ground-state compounds of common elements may be awaiting discovery. However, there are significant research challenges related to non-equilibrium thin-film synthesis and crystal structure prediction, such as the small, strained crystals in experimental samples and the energy-minimization basis of theoretical algorithms. Here, we report on the experimental synthesis and characterization, as well as theoretical first-principles calculations, of a previously unreported mixed-valent binary tin nitride. Thin-film experiments indicate that this novel material is N-deficient SnN with tin in the mixed II/IV valence state and a small low-symmetry unit cell. Theoretical calculations suggest that the most likely crystal structure has space group 2 (SG2), related to the distorted delafossite structure (SG166), which is nearly 0.1 eV/atom above the ground-state SnN polymorph. This observation is rationalized by the structural similarity of the SnN distorted delafossite to the chemically related Sn3N4 spinel compound, which provides fresh scientific insight into the reasons for the growth of polymorphs of metastable materials. In addition to reporting on the discovery of the simple binary SnN compound, this paper illustrates a possible way of combining a wide range of advanced characterization techniques with first-principles property calculation methods to elucidate the most likely crystal structure of previously unreported metastable materials.

  10. Synthesis of a mixed-valent tin nitride and considerations of its possible crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caskey, Christopher M.; Colorado School of Mines, Golden, Colorado 80401; Larix Chemical Science, Golden, Colorado 80401

    2016-04-14

    Recent advances in theoretical structure prediction methods and high-throughput computational techniques are revolutionizing the experimental discovery of thermodynamically stable inorganic materials. Metastable materials represent a new frontier for these studies, since even simple binary non-ground-state compounds of common elements may be awaiting discovery. However, there are significant research challenges related to non-equilibrium thin-film synthesis and crystal structure prediction, such as the small, strained crystals in experimental samples and the energy-minimization basis of theoretical algorithms. Here, we report on the experimental synthesis and characterization, as well as theoretical first-principles calculations, of a previously unreported mixed-valent binary tin nitride. Thin-film experiments indicate that this novel material is N-deficient SnN with tin in the mixed II/IV valence state and a small low-symmetry unit cell. Theoretical calculations suggest that the most likely crystal structure has space group 2 (SG2), related to the distorted delafossite structure (SG166), which is nearly 0.1 eV/atom above the ground-state SnN polymorph. This observation is rationalized by the structural similarity of the SnN distorted delafossite to the chemically related Sn3N4 spinel compound, which provides fresh scientific insight into the reasons for the growth of polymorphs of metastable materials. In addition to reporting on the discovery of the simple binary SnN compound, this paper illustrates a possible way of combining a wide range of advanced characterization techniques with first-principles property calculation methods to elucidate the most likely crystal structure of previously unreported metastable materials.

  11. Modal analysis of a nonuniform string with end mass and variable tension

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.; Galaboff, Z. J.

    1983-01-01

    Modal synthesis techniques for dynamic systems containing strings describe the lateral displacements of these strings by properly chosen shape functions. An iterative algorithm is provided to calculate the natural modes of a nonuniform string with variable tension for some typical boundary conditions, including an end mass. Numerical examples are given for a string in a constant and a gravity-gradient force field.
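
    As a rough companion to this abstract, the sketch below sets up the string modal eigenproblem numerically: a finite-difference discretization of -d/dx[T(x) y'] = w^2 rho(x) y with one clamped end and a point mass lumped at the free end. It is not the authors' iterative shape-function algorithm, and all density, tension, and end-mass values are illustrative assumptions.

        import numpy as np
        from scipy.linalg import eigh

        # Discretize -d/dx(T(x) y') = w^2 rho(x) y on [0, L]; node 0 is clamped,
        # node n is free with a point mass lumped into it.
        n, L = 400, 1.0
        x = np.linspace(0.0, L, n + 1)
        dx = L / n
        rho = 1.0 + 0.5 * x                  # nonuniform mass per unit length (assumed)
        T = 10.0 + 5.0 * (L - x)             # tension varying along the string (assumed)
        Tm = 0.5 * (T[:-1] + T[1:])          # tension at cell midpoints

        K = np.zeros((n, n))                 # stiffness; unknowns are nodes 1..n
        M = np.zeros(n)                      # lumped masses
        for i in range(1, n):
            K[i - 1, i - 1] = (Tm[i - 1] + Tm[i]) / dx**2
            K[i - 1, i] = K[i, i - 1] = -Tm[i] / dx**2
            M[i - 1] = rho[i] * dx
        K[n - 1, n - 1] = Tm[n - 1] / dx**2  # free-end force balance
        M[n - 1] = 0.5 * rho[n] * dx + 0.2   # half cell mass plus end mass (0.2, assumed)

        w2, modes = eigh(K, np.diag(M))      # generalized eigenproblem K v = w^2 M v
        print("first natural frequencies [Hz]:", np.sqrt(w2[:4]) / (2 * np.pi))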

  12. Automated Design Tools for Integrated Mixed-Signal Microsystems (NeoCAD)

    DTIC Science & Technology

    2005-02-01

    method, Model Order Reduction (MOR) tools, system-level mixed-signal circuit synthesis and optimization tools, and parasitic extraction tools. A unique ... Mission Area: Command and Control; mixed-signal circuit simulation; parasitic extraction; time-domain simulation; IC design flow; model order reduction ... 1.2 Overall Program Milestones; Chapter 2: Fast Time-Domain Mixed-Signal Circuit Simulation; 2.1 HAARSPICE Algorithms; 2.1.1 Mathematical Background

  13. THRESHOLD LOGIC.

    DTIC Science & Technology

    synthesis procedures; a 'best' method is definitely established. (2) 'Symmetry Types for Threshold Logic' is a tutorial exposition including a careful ... development of the Goto-Takahasi self-dual type ideas. (3) 'Best Threshold Gate Decisions' reports a comparison, on the 2470 7-argument threshold ... interpretation is shown best. (4) 'Threshold Gate Networks' reviews the previously discussed 2-algorithm in geometric terms, describes our FORTRAN

  14. Improved inter-layer prediction for light field content coding with display scalability

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo

    2016-09-01

    Light field imaging based on microlens arrays - also known as plenoptic, holoscopic, and integral imaging - has recently risen as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field adjustment. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display-scalable coding solution is essential. In this context, this paper proposes an improved display-scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.

  15. Performance of Machine Learning Algorithms for Qualitative and Quantitative Prediction Drug Blockade of hERG1 channel.

    PubMed

    Wacker, Soren; Noskov, Sergei Yu

    2018-05-01

    Drug-induced abnormal heart rhythm known as Torsades de Pointes (TdP) is a potentially lethal ventricular tachycardia found in many patients. Even newly released anti-arrhythmic drugs, like ivabradine with the HCN channel as a primary target, block the hERG potassium current in an overlapping concentration interval. Promiscuous drug block of the hERG channel may potentially lead to perturbation of the action potential duration (APD) and TdP, especially when combined with polypharmacy and/or electrolyte disturbances. The example of the novel anti-arrhythmic ivabradine illustrates a clinically important and ongoing deficit in drug design and warrants better screening methods. There is an urgent need to develop new approaches for rapid and accurate assessment of how drugs with complex interactions and multiple subcellular targets can predispose to or protect from drug-induced TdP. One unexpected outcome of the compulsory hERG screening implemented in the USA and the European Union is the large datasets of IC50 values for various molecules entering the market. These abundant data now allow the construction of predictive machine-learning (ML) models. Novel ML algorithms and techniques promise accuracy in determining IC50 values of hERG blockade that is comparable to, or surpasses, that of earlier QSAR or molecular modeling techniques. To test the performance of modern ML techniques, we developed a computational platform integrating various workflows for quantitative structure-activity relationship (QSAR) models using data from the ChEMBL database. To establish the predictive power of ML-based algorithms, we computed IC50 values for a large dataset of molecules and compared them to results from an automated patch-clamp system for a large dataset of hERG-blocking and non-blocking drugs, an industry gold standard in studies of cardiotoxicity. The optimal protocol, with high sensitivity and predictive power, is based on the novel eXtreme gradient boosting (XGBoost) algorithm. The ML platform with XGBoost displays excellent performance, with a coefficient of determination of up to R2 ~ 0.8 for pIC50 values in evaluation datasets, surpassing other approaches available in the literature. Ultimately, the ML-based platform developed in our work is a scalable framework with automation potential to interact with other developing technologies in the cardiotoxicity field, including high-throughput electrophysiology measurements delivering large datasets of profiled drugs, and rapid synthesis and drug development via progress in synthetic biology.
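
    To make the modeling step concrete, here is a minimal sketch of an XGBoost regression workflow of the kind described, assuming pre-computed molecular descriptors; the random placeholder features, array sizes, and hyperparameters are illustrative stand-ins for the paper's ChEMBL-derived pipeline.

        import numpy as np
        from xgboost import XGBRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        # Placeholder data: rows = compounds, columns = molecular descriptors.
        # In the study these would come from ChEMBL structures, not random numbers.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 64))
        y = 1.5 * X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=2000)  # stand-in pIC50

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05,
                             subsample=0.8, colsample_bytree=0.8)
        model.fit(X_tr, y_tr)
        print("held-out R^2:", r2_score(y_te, model.predict(X_te)))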

  16. Gene Expression Network Reconstruction by Convex Feature Selection when Incorporating Genetic Perturbations

    PubMed Central

    Logsdon, Benjamin A.; Mezey, Jason

    2010-01-01

    Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithms designed for network discovery that make use of directed probabilistic graphs require perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1), and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population level genetic and gene expression data. PMID:21152011
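
    The adaptive lasso at the core of this method can be sketched in a few lines: an initial consistent estimate supplies per-feature weights, and the weighted penalty is realized by rescaling the design matrix before an ordinary lasso fit. The synthetic data and tuning constants below are illustrative assumptions, not the paper's eQTL setup.

        import numpy as np
        from sklearn.linear_model import Ridge, Lasso

        # Stand-in for expression data: y is one gene, X holds candidate regulators.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 40))
        beta_true = np.zeros(40)
        beta_true[[2, 7, 11]] = [1.0, -0.8, 0.6]
        y = X @ beta_true + rng.normal(scale=0.5, size=300)

        # Stage 1: an initial estimate (ridge) yields data-driven penalty weights.
        beta_init = Ridge(alpha=1.0).fit(X, y).coef_
        w = np.abs(beta_init) + 1e-8          # gamma = 1; floor avoids division by zero

        # Stage 2: weighted lasso via column rescaling, then map coefficients back.
        beta_scaled = Lasso(alpha=0.05).fit(X * w, y).coef_
        beta_adaptive = beta_scaled * w
        print("selected regulators:", np.flatnonzero(beta_adaptive))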

  17. Metric-wave and decimetric-wave aperture synthesis systems (review)

    NASA Astrophysics Data System (ADS)

    Ilyasov, Y. P.

    1984-09-01

    Aperture synthesis systems using metric or decimetric waves are adequate and promising for astrophysical study of extragalactic radio emission sources, operation at metric waves being characterized by destabilizing effects of the ionosphere and thus requiring special methods of data processing. Methods of closure phase and closure amplitude were proposed and then successfully implemented in very-long-baseline radio telescopes and multi-element interferometers, respectively. Several radio telescopes were developed which operate in the supersynthesis mode, the rotation of the Earth being used to fill the spatial-frequency plane. Further achievements include the Swarup system (Ooty/India) with a phase-stable interferometer, the Jodrell Bank system (Manchester/UK), the Palmer MERLIN multi-element system (UK) with the CLEAN procedure and the CORTEL telescope correction algorithm, the VLA system (USA), and the international giant equatorial radio telescope.

  18. Extensions to PIFCGT: Multirate output feedback and optimal disturbance suppression

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.

    1986-01-01

    New control synthesis procedures for digital flight control systems were developed. The main theoretical development is the solution to the problem of optimal disturbance suppression in the presence of windshear. Control synthesis is accomplished using a linear quadratic cost function, the command generator tracker for trajectory following, and the proportional-integral-filter control structure for practical implementation. Extensions are made to the optimal output feedback algorithm for computing feedback gains, so that the multirate and optimal disturbance control designs are computed and compared for the Advanced Transport Operating System (ATOPS). The performance of the designs is demonstrated by closed-loop poles, frequency-domain multi-input sigma and eigenvalue plots, and detailed nonlinear 6-DOF aircraft simulations in the terminal area in the presence of windshear.

  19. The Philosophical Basis of Bioethics.

    PubMed

    Horn, Peter

    2015-09-01

    In this article, I consider in what sense bioethics is philosophical. Philosophy includes both analysis and synthesis. Analysis focuses on central concepts in a domain, for example, informed consent, death, medical futility, and health. It is argued that analysis should avoid oversimplification. The synthesis, or synoptic, dimension prompts people to explain how their views have logical assumptions and implications. In addition to the conceptual elements are the evaluative and empirical dimensions. Among its functions, philosophy can be a form of prophylaxis, helping people avoid some commonly accepted questionable theories. Generally, recent philosophy has steered away from algorithms and deductivist approaches to ethical justification. In bioethics, philosophy works in partnership with a range of other disciplines, including pediatrics and neurology. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm, a maximum-likelihood decoding algorithm. Convolutional codes with Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. It is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes, reducing computational complexity by one-third compared with the Viterbi algorithm.
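
    For readers unfamiliar with the algorithm, the sketch below decodes the classic rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal) with a hard-decision Viterbi decoder built on the add-compare-select step that the chapter contrasts with compare-select-add; the particular code and the injected single error are illustrative choices, not drawn from the chapter.

        G = [0b111, 0b101]                 # generator polynomials (7, 5 octal)
        N_STATES = 4                       # 2^(K-1) trellis states

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state     # current input plus two previous inputs
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(rx):
            INF = 10**9
            metric = [0] + [INF] * (N_STATES - 1)
            paths = [[] for _ in range(N_STATES)]
            for i in range(0, len(rx), 2):
                new_metric = [INF] * N_STATES
                new_paths = [None] * N_STATES
                for s in range(N_STATES):
                    for b in (0, 1):                       # hypothesized input bit
                        reg = (b << 2) | s
                        ns = reg >> 1
                        exp = [bin(reg & g).count("1") & 1 for g in G]
                        bm = (exp[0] != rx[i]) + (exp[1] != rx[i + 1])
                        # add-compare-select: keep the better predecessor
                        if metric[s] + bm < new_metric[ns]:
                            new_metric[ns] = metric[s] + bm
                            new_paths[ns] = paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(N_STATES), key=lambda s: metric[s])]

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        rx = encode(msg)
        rx[3] ^= 1                          # one channel bit error, corrected below
        print(viterbi_decode(rx), "==", msg)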

  1. Developing Wide-Field Spatio-Spectral Interferometry for Far-Infrared Space Applications

    NASA Technical Reports Server (NTRS)

    Leisawitz, David; Bolcar, Matthew R.; Lyon, Richard G.; Maher, Stephen F.; Memarsadeghi, Nargess; Rinehart, Stephen A.; Sinukoff, Evan J.

    2012-01-01

    Interferometry is an affordable way to bring the benefits of high resolution to space far-IR astrophysics. We summarize an ongoing effort to develop and learn the practical limitations of an interferometric technique that will enable the acquisition of high-resolution far-IR integral field spectroscopic data with a single instrument in a future space-based interferometer. This technique was central to the Space Infrared Interferometric Telescope (SPIRIT) and Submillimeter Probe of the Evolution of Cosmic Structure (SPECS) space mission design concepts, and it will first be used on the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII). Our experimental approach combines data from a laboratory optical interferometer (the Wide-field Imaging Interferometry Testbed, WIIT), computational optical system modeling, and spatio-spectral synthesis algorithm development. We summarize recent experimental results and future plans.

  2. Alternative Speech Communication System for Persons with Severe Speech Disorders

    NASA Astrophysics Data System (ADS)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French- and English-speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech, making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenation algorithm, and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. Improvements in the Perceptual Evaluation of Speech Quality (PESQ) value of 5% and of more than 20% are achieved by the speech synthesis systems that deal with SSDs and dysarthria, respectively.

  3. Synthesis of novel stable compounds in the phosphorous-nitrogen system under pressure

    NASA Astrophysics Data System (ADS)

    Stavrou, Elissaios; Batyrev, Iskander; Ciezak-Jenkins, Jennifer; Grivickas, Paulius; Zaug, Joseph; Greenberg, Eran; Kunz, Martin

    2017-06-01

    We explore the possible formation of polynitrogen compounds in the P-N system under pressure, both stable compounds and compounds metastable at ambient conditions, using in situ X-ray diffraction and Raman spectroscopy in synergy with first-principles evolutionary structure search algorithms (USPEX). We have performed numerous synthesis experiments at pressures from near ambient up to 50 GPa using both a mixture of elemental P and N2 and relevant precursors such as P3N5. Calculations of extended P-N structures at 10, 30, and 50 GPa were performed using USPEX based on density functional theory (DFT) plane-wave calculations (VASP) with ultrasoft pseudopotentials. A full convex hull was found for N-rich compositions of the P-N binary system. Variable-composition calculations were complemented by fixed-composition calculations at certain nitrogen-rich compositions. Stable structures were refined by DFT calculations using norm-conserving pseudopotentials. A comparison between our results and previous studies of the same system will also be given. Part of this work was performed under the auspices of the U.S. DoE by LLNS, LLC under Contract DE-AC52-07NA27344. We thank the Joint DoD/DOE Munitions Technology Development Program and the HE science C-II program at LLNL for supporting this study.

  4. Medical follow-up of workers exposed to lung carcinogens: French evidence-based and pragmatic recommendations.

    PubMed

    Delva, Fleur; Margery, Jacques; Laurent, François; Petitprez, Karine; Pairon, Jean-Claude

    2017-02-14

    The aim of this work was to establish recommendations for the medical follow-up of workers currently or previously exposed to lung carcinogens. A critical synthesis of the literature was conducted. Occupational lung carcinogens were listed and classified according to their level of lung cancer risk. A targeted screening protocol was defined. A clinical trial, the National Lung Screening Trial (NLST), showed the efficacy of chest computed tomography (CT) screening for populations of smokers aged 55-74 years with over 30 pack-years of exposure who had stopped smoking less than 15 years earlier. To propose screening in accordance with NLST criteria, and to account for occupational risk factors, screening among smokers and former smokers needs to consider the types of occupational exposure for which the risk level is at least equivalent to that of the subjects included in the NLST. The working group proposes an algorithm that estimates the relative risk of each occupational lung carcinogen, taking into account exposure to tobacco, based on available data from the literature. Given the lack of data on bronchopulmonary cancer (BPC) screening in occupationally exposed workers, the working group proposed implementing a screening experiment for bronchopulmonary cancer in subjects currently or previously occupationally exposed to lung carcinogens who are confirmed as being at high risk of BPC. A specific algorithm is proposed to determine the level of risk of BPC, taking into account the different occupational lung carcinogens and tobacco smoking at the individual level.

  5. A simulation based optimization approach to model and design life support systems for manned space missions

    NASA Astrophysics Data System (ADS)

    Aydogan, Selen

    This dissertation considers the problem of process synthesis and design of life-support systems for manned space missions. A life-support system is a set of technologies to support human life for short- and long-term spaceflights by providing the basic life-support elements, such as oxygen, potable water, and food. The design of the system needs to meet the crewmembers' demand for the basic life-support elements (the products of the system), and it must process the loads generated by the crewmembers. The system is subject to a myriad of uncertainties because most of the technologies involved are still under development. The result is a high level of uncertainty in the estimates of model parameters, such as recovery rates or process efficiencies. Moreover, due to the high recycle rates within the system, the uncertainties are amplified and propagated within the system, resulting in a complex problem. In this dissertation, two algorithms have been successfully developed to help make design decisions for life-support systems. The algorithms utilize a simulation-based optimization approach that combines a stochastic discrete-event simulation and a deterministic mathematical programming approach to generate multiple, unique realizations of the controlled evolution of the system. The timelines are analyzed using time-series data mining techniques and statistical tools to determine the necessary technologies, their deployment schedules and capacities, and the necessary basic life-support element amounts to support crew life and activities for the mission duration.

  6. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high-throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves, and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework, that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and a TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits, we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibration of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that pose different computational challenges, including prioritized data structures in a genetic algorithm, distributed computation in multiple hill-climbing searches, and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
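
    Of the optimization protocols mentioned, simulated annealing is the easiest to show compactly. The sketch below is a generic Metropolis-style annealer over a hypothetical two-parameter flow-rate objective; the objective, step sizes, and cooling schedule are illustrative assumptions rather than the platform's actual protocol.

        import math
        import random

        def anneal(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
            # Minimize `objective` by simulated annealing with Metropolis acceptance.
            x, fx = list(x0), objective(x0)
            best, fbest = x, fx
            t = t0
            for _ in range(iters):
                cand = [xi + random.gauss(0.0, step) for xi in x]
                fc = objective(cand)
                # Downhill moves are always accepted; uphill moves with
                # Boltzmann probability exp(-(fc - fx) / t).
                if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                    x, fx = cand, fc
                    if fc < fbest:
                        best, fbest = cand, fc
                t *= cooling                 # geometric cooling schedule
            return best, fbest

        # Hypothetical stand-in objective: two flow rates vs. a target response.
        target = (1.2, 3.4)
        error = lambda q: (q[0] - target[0]) ** 2 + (q[1] - target[1]) ** 2
        print(anneal(error, [0.0, 0.0]))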

  7. Personalized recommendation via unbalance full-connectivity inference

    NASA Astrophysics Data System (ADS)

    Ma, Wenping; Ren, Chen; Wu, Yue; Wang, Shanfeng; Feng, Xiang

    2017-10-01

    Recommender systems play an important role in helping us find useful information. They are widely used by e-commerce web sites to push potentially interesting items to individual users according to their purchase histories. Network-based recommendation algorithms are popular and effective in recommendation; they use two types of nodes to represent users and items, respectively. In this paper, building on the consistence-based inference (CBI) algorithm, we propose a novel network-based algorithm in which users and items are treated without distinction. The proposed algorithm also uses information diffusion to find the relationships between users and items. Unlike in traditional network-based recommendation algorithms, information diffusion is initialized from users and items, respectively. Experiments show that the proposed algorithm is effective compared with traditional network-based recommendation algorithms.
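
    As context for what "information diffusion" means in this family of methods, here is a minimal ProbS-style mass-diffusion recommender on a bipartite user-item matrix, a standard baseline; the paper's algorithm differs in treating users and items uniformly and in its diffusion initialization, and the toy matrix below is an illustrative assumption.

        import numpy as np

        # Binary history: A[u, i] = 1 if user u has collected item i.
        A = np.array([[1, 1, 0, 0, 0],
                      [1, 0, 1, 1, 0],
                      [0, 1, 1, 0, 1],
                      [0, 0, 1, 1, 1]], dtype=float)
        k_item = A.sum(axis=0)                 # item degrees
        k_user = A.sum(axis=1)                 # user degrees

        def recommend(u):
            f = A[u] / k_item                  # items split their resource among users
            g = (A @ f) / k_user               # users split what they received
            scores = A.T @ g                   # resource diffuses back to items
            scores[A[u] > 0] = -np.inf         # mask items the user already has
            return np.argsort(scores)[::-1]    # highest-scoring items first

        print("ranked items for user 0:", recommend(0))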

  8. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    PubMed

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, one speech-model based and one auditory-model based, were compared to a state-of-the-art non-parametric minimum-statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to using a similar Wiener-based gain rule in each. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.

  9. The Impact of a Line Probe Assay Based Diagnostic Algorithm on Time to Treatment Initiation and Treatment Outcomes for Multidrug Resistant TB Patients in Arkhangelsk Region, Russia.

    PubMed

    Eliseev, Platon; Balantcev, Grigory; Nikishova, Elena; Gaida, Anastasia; Bogdanova, Elena; Enarson, Donald; Ornstein, Tara; Detjen, Anne; Dacombe, Russell; Gospodarevskaya, Elena; Phillips, Patrick P J; Mann, Gillian; Squire, Stephen Bertel; Mariandyshev, Andrei

    2016-01-01

    In the Arkhangelsk region of Northern Russia, multidrug-resistant (MDR) tuberculosis (TB) rates in new cases are amongst the highest in the world. In 2014, MDR-TB rates reached 31.7% among new cases and 56.9% among retreatment cases. The development of new diagnostic tools allows for faster detection of both TB and MDR-TB and should lead to reduced transmission by earlier initiation of anti-TB therapy. The PROVE-IT (Policy Relevant Outcomes from Validating Evidence on Impact) Russia study aimed to assess the impact of implementing line probe assay (LPA) as part of an LPA-based diagnostic algorithm for patients with presumptive MDR-TB, using the time from the first care-seeking visit to the initiation of MDR-TB treatment, rather than diagnostic accuracy, as the primary outcome, and to assess treatment outcomes. We hypothesized that the implementation of LPA would result in faster time to treatment initiation and better treatment outcomes. A culture-based diagnostic algorithm used prior to LPA implementation was compared to an LPA-based algorithm that replaced BacTAlert and Löwenstein-Jensen (LJ) for drug sensitivity testing. A total of 295 MDR-TB patients were included in the study: 163 diagnosed with the culture-based algorithm and 132 with the LPA-based algorithm. Among smear-positive patients, the implementation of the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 50 and 66 days compared to the culture-based algorithm (BacTAlert and LJ respectively, p<0.001). In smear-negative patients, the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 78 days when compared to the culture-based algorithm (LJ, p<0.001). However, several weeks were still needed for treatment initiation with the LPA-based algorithm: 24 days in smear-positive and 62 days in smear-negative patients. Overall treatment outcomes were better with the LPA-based algorithm than with the culture-based algorithm (p = 0.003). Treatment success rates at 20 months of treatment were higher in patients diagnosed with the LPA-based algorithm (65.2%) than in those diagnosed with the culture-based algorithm (44.8%). Mortality was also lower in the LPA-based algorithm group (7.6%) compared to the culture-based algorithm group (15.9%). There was no statistically significant difference in smear and culture conversion rates between the two algorithms. The results of the study suggest that the introduction of LPA leads to faster MDR diagnosis and earlier treatment initiation, as well as better treatment outcomes for patients with MDR-TB. These findings also highlight the need for further improvements within the health system to reduce both patient and diagnostic delays and truly optimize the impact of new, rapid diagnostics.

  10. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are (1) detector fabrication inaccuracies, (2) non-linearity and variations in the read-out electronics, and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need; the SBNUC algorithm therefore becomes an essential part of an infrared imaging system. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Meanwhile, due to the complicated calculation process and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based solely on an FPGA, has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it can reduce both stripe non-uniformity and ripple non-uniformity.
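
    The temporal high-pass (THP) part of the method can be illustrated in isolation: each pixel's slowly varying component, which contains the fixed-pattern offset, is tracked by a recursive low-pass filter and subtracted. The sketch below shows only this generic THP stage on simulated data; the grayscale-mapping refinement that suppresses ghosting and over-correction is not reproduced, and all sizes and constants are assumptions.

        import numpy as np

        def thp_nuc(frames, time_const=32):
            # Subtract a per-pixel running low-pass estimate (the fixed pattern).
            alpha = 1.0 / time_const
            lowpass = None
            for x in frames:
                x = x.astype(float)
                lowpass = np.zeros_like(x) if lowpass is None else lowpass
                lowpass = (1.0 - alpha) * lowpass + alpha * x
                yield x - lowpass

        # Moving scene plus a static per-pixel offset pattern (the non-uniformity);
        # the static pattern is progressively absorbed into the low-pass estimate.
        rng = np.random.default_rng(0)
        offsets = rng.normal(scale=5.0, size=(64, 64))
        scene = lambda t: 100.0 + 20.0 * np.sin(np.linspace(0, 6, 64) + 0.1 * t)[None, :]
        out = list(thp_nuc(scene(t) + offsets for t in range(300)))
        print("output std, early vs late:", out[5].std(), out[-1].std())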

  11. A review of classification algorithms for EEG-based brain-computer interfaces.

    PubMed

    Lotte, F; Congedo, M; Lécuyer, A; Lamarche, F; Arnaldi, B

    2007-06-01

    In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.

  12. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.
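
    The flavor of the proposed estimator can be conveyed with a toy linear Kalman filter: assume the scan's power modulation gives, at each sample, a noisy measurement of the pointing offset projected onto the instantaneous scan direction, and estimate the quasi-static two-axis offset recursively. This simplified observation model and all noise values are assumptions, not the article's actual conscan signal model.

        import numpy as np

        rng = np.random.default_rng(2)
        true_offset = np.array([0.8, -0.3])       # unknown pointing offset (assumed units)

        x = np.zeros(2)                           # state estimate (dx, dy)
        P = 10.0 * np.eye(2)                      # state covariance
        Q = 1e-5 * np.eye(2)                      # process noise: offset drifts slowly
        R = 0.2 ** 2                              # measurement noise variance

        for k in range(200):
            theta = 2 * np.pi * k / 50            # 50 samples per conscan revolution
            H = np.array([[np.cos(theta), np.sin(theta)]])
            z = H @ true_offset + rng.normal(scale=0.2)
            P = P + Q                             # predict (offset quasi-static)
            S = H @ P @ H.T + R                   # innovation variance
            K = P @ H.T / S                       # Kalman gain
            x = x + (K * (z - H @ x)).ravel()     # continuous update of the estimate
            P = (np.eye(2) - K @ H) @ P

        print("estimated offset:", x, "true:", true_offset)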

  13. Semantic modeling and structural synthesis of onboard electronics protection means as open information system

    NASA Astrophysics Data System (ADS)

    Zhevnerchuk, D. V.; Surkova, A. S.; Lomakina, L. S.; Golubev, A. S.

    2018-05-01

    The article describes a component-based representation approach and semantic models for protecting on-board electronics from ionizing radiation of various natures. Semantic models are constructed whose distinctive feature is the representation of electronic elements, protection modules, and sources of impact in the form of blocks with interfaces. Rules of logical inference and algorithms are developed for synthesizing the object properties of the semantic network, modeling the interfaces between the components of the protection system and the radiation sources. The results of the algorithm are illustrated using the example of the radiation-resistant microcircuits 1645RU5U and 1645RT2U and a combined computational and experimental method for estimating the durability of on-board electronics.

  14. Surgical motion characterization in simulated needle insertion procedures

    NASA Astrophysics Data System (ADS)

    Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor

    2012-02-01

    PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground-truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01 mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0 mm). For amplitudes less than 0.01 mm, the two algorithms produced insignificantly different results. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.

  15. Soft context clustering for F0 modeling in HMM-based speech synthesis

    NASA Astrophysics Data System (ADS)

    Khorram, Soheil; Sameti, Hossein; King, Simon

    2015-12-01

    This paper proposes the use of a new binary decision tree, which we call a soft decision tree, to improve generalization performance compared to the conventional `hard' decision tree method that is used to cluster context-dependent model parameters in statistical parametric speech synthesis. We apply the method to improve the modeling of fundamental frequency, which is an important factor in synthesizing natural-sounding high-quality speech. Conventionally, hard decision tree-clustered hidden Markov models (HMMs) are used, in which each model parameter is assigned to a single leaf node. However, this `divide-and-conquer' approach leads to data sparsity, with the consequence that it suffers from poor generalization, meaning that it is unable to accurately predict parameters for models of unseen contexts: the hard decision tree is a weak function approximator. To alleviate this, we propose the soft decision tree, which is a binary decision tree with soft decisions at the internal nodes. In this soft clustering method, internal nodes select both their children with certain membership degrees; therefore, each node can be viewed as a fuzzy set with a context-dependent membership function. The soft decision tree improves model generalization and provides a superior function approximator because it is able to assign each context to several overlapped leaves. In order to use such a soft decision tree to predict the parameters of the HMM output probability distribution, we derive the smoothest (maximum entropy) distribution which captures all partial first-order moments and a global second-order moment of the training samples. Employing such a soft decision tree architecture with maximum entropy distributions, a novel speech synthesis system is trained using maximum likelihood (ML) parameter re-estimation and synthesis is achieved via maximum output probability parameter generation. In addition, a soft decision tree construction algorithm optimizing a log-likelihood measure is developed. Both subjective and objective evaluations were conducted and indicate a considerable improvement over the conventional method.
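
    The key structural idea, soft branching, fits in a few lines: every leaf contributes to the prediction, weighted by the product of sigmoid branch memberships along its path, instead of a single leaf being selected. The depth-2 tree, random parameters, and scalar leaf outputs below are illustrative assumptions; the paper's trees ask questions about linguistic contexts and emit maximum-entropy distributions.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def soft_tree_predict(x, nodes, leaves):
            # Depth-2 soft decision tree: memberships multiply along each path.
            m0 = sigmoid(nodes["root"][0] @ x + nodes["root"][1])
            ml = sigmoid(nodes["left"][0] @ x + nodes["left"][1])
            mr = sigmoid(nodes["right"][0] @ x + nodes["right"][1])
            weights = np.array([m0 * ml,                    # leaf LL
                                m0 * (1.0 - ml),            # leaf LR
                                (1.0 - m0) * mr,            # leaf RL
                                (1.0 - m0) * (1.0 - mr)])   # leaf RR
            return weights @ leaves                         # weights sum to one

        rng = np.random.default_rng(5)
        nodes = {k: (rng.normal(size=3), 0.0) for k in ("root", "left", "right")}
        leaves = np.array([5.0, 5.2, 4.8, 5.1])             # e.g. per-leaf mean log-F0
        print(soft_tree_predict(np.array([1.0, 0.0, -1.0]), nodes, leaves))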

  16. WS-BP: An efficient wolf search based back-propagation algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Rehman, M. Z.; Khan, Abdullah

    2015-05-01

    Wolf Search (WS) is a heuristic optimization algorithm. Inspired by the preying and survival capabilities of wolves, this algorithm is highly capable of searching large candidate-solution spaces. This paper investigates the use of the WS algorithm in combination with the back-propagation neural network (BPNN) algorithm to overcome the local minima problem and to improve convergence in gradient descent. The performance of the proposed Wolf Search based Back-Propagation (WS-BP) algorithm is compared with Artificial Bee Colony Back-Propagation (ABC-BP), Bat Based Back-Propagation (Bat-BP), and conventional BPNN algorithms. Specifically, OR and XOR datasets are used for training the network. The simulation results show that the WS-BP algorithm effectively avoids local minima and converges to the global minimum.
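
    A schematic of the WS metaheuristic itself is shown below on a toy objective standing in for the network's training error; the prey-seeking step, escape probability, and bounds follow the general WS recipe, but the specific constants are assumptions, and the coupling to BPNN weight updates is left out.

        import random

        def sphere(x):                       # toy stand-in for the BPNN error surface
            return sum(v * v for v in x)

        def wolf_search(f, dim=4, n_wolves=12, visual=1.0, step=0.4,
                        p_escape=0.2, iters=400, lo=-5.0, hi=5.0):
            rand_pos = lambda: [random.uniform(lo, hi) for _ in range(dim)]
            wolves = [rand_pos() for _ in range(n_wolves)]
            best = min(wolves, key=f)
            for _ in range(iters):
                for i, w in enumerate(wolves):
                    # Prey seeking: probe a point within the wolf's visual radius.
                    cand = [max(lo, min(hi, c + random.uniform(-visual, visual)))
                            for c in w]
                    if f(cand) < f(w):       # move toward the better position
                        wolves[i] = [c + step * (p - c) for c, p in zip(w, cand)]
                    if f(wolves[i]) < f(best):
                        best = list(wolves[i])
                    if random.random() < p_escape:   # escape jump out of local minima
                        wolves[i] = rand_pos()
            return best, f(best)

        print(wolf_search(sphere))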

  17. Evaluation of the performance of existing non-laboratory based cardiovascular risk assessment algorithms

    PubMed Central

    2013-01-01

    Background The high burden and rising incidence of cardiovascular disease (CVD) in resource constrained countries necessitates implementation of robust and pragmatic primary and secondary prevention strategies. Many current CVD management guidelines recommend absolute cardiovascular (CV) risk assessment as a clinically sound guide to preventive and treatment strategies. Development of non-laboratory based cardiovascular risk assessment algorithms enable absolute risk assessment in resource constrained countries. The objective of this review is to evaluate the performance of existing non-laboratory based CV risk assessment algorithms using the benchmarks for clinically useful CV risk assessment algorithms outlined by Cooney and colleagues. Methods A literature search to identify non-laboratory based risk prediction algorithms was performed in MEDLINE, CINAHL, Ovid Premier Nursing Journals Plus, and PubMed databases. The identified algorithms were evaluated using the benchmarks for clinically useful cardiovascular risk assessment algorithms outlined by Cooney and colleagues. Results Five non-laboratory based CV risk assessment algorithms were identified. The Gaziano and Framingham algorithms met the criteria for appropriateness of statistical methods used to derive the algorithms and endpoints. The Swedish Consultation, Framingham and Gaziano algorithms demonstrated good discrimination in derivation datasets. Only the Gaziano algorithm was externally validated where it had optimal discrimination. The Gaziano and WHO algorithms had chart formats which made them simple and user friendly for clinical application. Conclusion Both the Gaziano and Framingham non-laboratory based algorithms met most of the criteria outlined by Cooney and colleagues. External validation of the algorithms in diverse samples is needed to ascertain their performance and applicability to different populations and to enhance clinicians’ confidence in them. PMID:24373202

  18. Range image registration based on hash map and moth-flame optimization

    NASA Astrophysics Data System (ADS)

    Zou, Li; Ge, Baozhen; Chen, Lei

    2018-03-01

    Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using a hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in the registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than the other algorithms and achieves similar registration precision.

  19. Investigating ego modules and pathways in osteosarcoma by integrating the EgoNet algorithm and pathway analysis.

    PubMed

    Chen, X Y; Chen, Y H; Zhang, L J; Wang, Y; Tong, Z C

    2017-02-16

    Osteosarcoma (OS) is the most common primary bone malignancy, but current therapies are far from effective for all patients. A better understanding of the pathological mechanism of OS may help to achieve new treatments for this tumor. Hence, the objective of this study was to investigate ego modules and pathways in OS utilizing the EgoNet algorithm and pathway-related analysis, and to reveal pathological mechanisms underlying OS. The EgoNet algorithm comprises four steps: constructing the background protein-protein interaction (PPI) network (PPIN) based on gene expression data and PPI data; extracting the differential expression network (DEN) from the background PPIN; identifying ego genes according to topological features of genes in the reweighted DEN; and collecting ego modules using module search by ego-gene expansion. Consequently, we obtained 5 ego modules (Modules 2, 3, 4, 5, and 6) in total. After applying a permutation test, all showed statistical significance between OS and normal controls. Finally, pathway enrichment analysis combined with the Reactome pathway database was performed to investigate pathways, and Fisher's exact test was conducted to capture ego pathways for OS. The ego pathway for Module 2 was the CLEC7A/inflammasome pathway; that for Module 3 was the requirement of a tetrasaccharide linker sequence for glycosaminoglycan (GAG) synthesis; and that for Module 6 was the Rho GTPase cycle. Interestingly, genes in Modules 4 and 5 were enriched in the same pathway, the 2-LTR circle formation. In conclusion, the ego modules and pathways might be potential biomarkers for OS therapeutic index, and give great insight into the molecular mechanism underlying this tumor.

  20. Investigating ego modules and pathways in osteosarcoma by integrating the EgoNet algorithm and pathway analysis

    PubMed Central

    Chen, X.Y.; Chen, Y.H.; Zhang, L.J.; Wang, Y.; Tong, Z.C.

    2017-01-01

    Osteosarcoma (OS) is the most common primary bone malignancy, but current therapies are far from effective for all patients. A better understanding of the pathological mechanism of OS may help to achieve new treatments for this tumor. Hence, the objective of this study was to investigate ego modules and pathways in OS utilizing the EgoNet algorithm and pathway-related analysis, and to reveal pathological mechanisms underlying OS. The EgoNet algorithm comprises four steps: constructing the background protein-protein interaction (PPI) network (PPIN) based on gene expression data and PPI data; extracting the differential expression network (DEN) from the background PPIN; identifying ego genes according to topological features of genes in the reweighted DEN; and collecting ego modules using module search by ego-gene expansion. Consequently, we obtained 5 ego modules (Modules 2, 3, 4, 5, and 6) in total. After applying a permutation test, all showed statistical significance between OS and normal controls. Finally, pathway enrichment analysis combined with the Reactome pathway database was performed to investigate pathways, and Fisher's exact test was conducted to capture ego pathways for OS. The ego pathway for Module 2 was the CLEC7A/inflammasome pathway; that for Module 3 was the requirement of a tetrasaccharide linker sequence for glycosaminoglycan (GAG) synthesis; and that for Module 6 was the Rho GTPase cycle. Interestingly, genes in Modules 4 and 5 were enriched in the same pathway, the 2-LTR circle formation. In conclusion, the ego modules and pathways might be potential biomarkers for OS therapeutic index, and give great insight into the molecular mechanism underlying this tumor. PMID:28225867

  1. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    NASA Astrophysics Data System (ADS)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
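
    To show the baseline this work builds on, here is plain full-array subspace projection on simulated snapshots: estimate the covariance, take the dominant eigenvector as the interference subspace, and project it out. The simulation assumes a single strong, non-decorrelating interferer; the decorrelation across long baselines that motivates SP-SAP, and the per-subarray application itself, are deliberately not modeled.

        import numpy as np

        rng = np.random.default_rng(3)
        n_ant, n_samp = 16, 4096

        # Snapshots: weak sky signal + strong RFI + receiver noise (illustrative).
        a_sky = np.exp(1j * rng.uniform(0, 2 * np.pi, n_ant))
        a_rfi = np.exp(1j * rng.uniform(0, 2 * np.pi, n_ant))
        s = 0.1 * (rng.normal(size=n_samp) + 1j * rng.normal(size=n_samp))
        r = 3.0 * (rng.normal(size=n_samp) + 1j * rng.normal(size=n_samp))
        noise = rng.normal(size=(n_ant, n_samp)) + 1j * rng.normal(size=(n_ant, n_samp))
        X = np.outer(a_sky, s) + np.outer(a_rfi, r) + noise

        # Dominant eigenvector of the sample covariance spans the RFI subspace.
        Rhat = X @ X.conj().T / n_samp
        eigval, eigvec = np.linalg.eigh(Rhat)      # eigenvalues in ascending order
        U = eigvec[:, -1:]
        P = np.eye(n_ant) - U @ U.conj().T         # orthogonal projection matrix

        Xc = P @ X                                 # interference projected out
        power = lambda Y: np.mean(np.abs(a_rfi.conj() @ Y) ** 2)
        print("RFI-direction power before/after:", power(X), power(Xc))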

  2. Precise and efficient evaluation of gravimetric quantities at arbitrarily scattered points in space

    NASA Astrophysics Data System (ADS)

    Ivanov, Kamen G.; Pavlis, Nikolaos K.; Petrushev, Pencho

    2017-12-01

    Gravimetric quantities are commonly represented in terms of high-degree surface or solid spherical harmonics. After EGM2008, such expansions routinely extend to spherical harmonic degree 2190, which makes the computation of gravimetric quantities at a large number of arbitrarily scattered points in space using harmonic synthesis a very computationally demanding process. We present here the development of an algorithm and its associated software for the efficient and precise evaluation of gravimetric quantities, represented in high-degree solid spherical harmonics, at arbitrarily scattered points in the space exterior to the surface of the Earth. The new algorithm is based on representation of the quantities of interest in solid ellipsoidal harmonics and application of tensor-product trigonometric needlets. A FORTRAN implementation of this algorithm has been developed and extensively tested. The capabilities of the code are demonstrated using as examples the disturbing potential T, height anomaly ζ, gravity anomaly Δg, gravity disturbance δg, north-south deflection of the vertical ξ, east-west deflection of the vertical η, and the second radial derivative T_{rr} of the disturbing potential. After a pre-computational step that takes between 1 and 2 h per quantity, the current version of the software is capable of computing on a standard PC each of these quantities, in the range from the surface of the Earth up to 544 km above that surface, at speeds between 20,000 and 40,000 point evaluations per second, depending on the gravimetric quantity being evaluated, while the relative error does not exceed 10^{-6} and the memory (RAM) use is 9.3 GB.

  3. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field, which has a determined analytic expression, and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed. On this basis, simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of images reconstructed by the FGI algorithm decreases slowly, while the PSNR of images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoised reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
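
    The core reconstruction step is compact enough to sketch: treat the preset cosine patterns as rows of an orthonormal DCT matrix, form measurements y = A x, and recover the object with the pseudo-inverse. This 1-D stand-in with an assumed object and pattern count mirrors the low-pass behavior described, not the paper's exact optical configuration.

        import numpy as np
        from scipy.fft import dct

        n, m = 64, 40                          # object pixels, measurements (assumed)
        x = np.zeros(n)
        x[20:28] = 1.0                         # simple 1-D object

        D = dct(np.eye(n), axis=0, norm="ortho")   # D @ x computes the DCT of x
        A = D[:m]                                  # m low-frequency cosine patterns
        y = A @ x                                  # simulated bucket measurements

        # With orthonormal rows, pinv(A) equals A.T, so the reconstruction is a
        # low-pass (hence denoised) version of the object.
        x_hat = np.linalg.pinv(A) @ y
        print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))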

  4. Classification and recognition of dynamical models: the role of phase, independent components, kernels and optimal transport.

    PubMed

    Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano

    2007-11-01

    We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
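
    In one dimension the optimal transport problem underlying the kernel has a closed form, which makes the idea easy to demonstrate; the snippet below builds a kernel-like similarity between two sampled non-Gaussian input distributions. It covers only the input-distribution term (the paper's full cord distance also involves the dynamics and initial conditions), and positive definiteness of this particular construction is not guaranteed in general.

        import numpy as np
        from scipy.stats import wasserstein_distance

        rng = np.random.default_rng(4)
        p = rng.gamma(shape=2.0, scale=1.0, size=500)   # samples from distribution 1
        q = rng.gamma(shape=2.5, scale=0.9, size=500)   # samples from distribution 2

        w = wasserstein_distance(p, q)   # 1-D optimal transport cost (closed form)
        k = np.exp(-w)                   # exponentiate the cost into a similarity
        print("W1 =", w, "similarity =", k)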

  5. Automated identification of retained surgical items in radiological images

    NASA Astrophysics Data System (ADS)

    Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko

    2015-03-01

    Retained surgical items (RSIs) in patients are a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone, due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of low incidence by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.

  6. Metabolic engineering of the pentose phosphate pathway for enhanced limonene production in the cyanobacterium Synechocystis sp. PCC 6803.

    PubMed

    Lin, Po-Cheng; Saha, Rajib; Zhang, Fuzhong; Pakrasi, Himadri B

    2017-12-13

    Isoprenoids are diverse natural compounds with various applications as pharmaceuticals, fragrances, and solvents. The low yield of isoprenoids in plants makes their cost-effective production difficult, and chemical synthesis of complex isoprenoids is impractical. Microbial production of isoprenoids has been considered a promising approach to increase the yield. In this study, we engineered the model cyanobacterium Synechocystis sp. PCC 6803 for sustainable production of a commercially valuable isoprenoid, limonene. Limonene synthases from the plants Mentha spicata and Citrus limon were expressed in cyanobacteria for limonene production. Production of limonene was two-fold higher with the limonene synthase from M. spicata than with that from C. limon. To enhance isoprenoid production, computational strain design was conducted by applying the OptForce strain design algorithm to Synechocystis 6803. Based on the metabolic interventions suggested by this algorithm, genes in the pentose phosphate pathway (ribose 5-phosphate isomerase and ribulose 5-phosphate 3-epimerase) were overexpressed, and a geranyl diphosphate synthase from the plant Abies grandis was expressed to optimize the limonene biosynthetic pathway. The optimized strain produced 6.7 mg/L of limonene, a 2.3-fold improvement in productivity. Thus, this study presents a feasible strategy for engineering cyanobacteria for photosynthetic production of isoprenoids.

  7. Real-time stylistic prediction for whole-body human motions.

    PubMed

    Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun

    2012-01-01

    The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Proof-Term Synthesis on Dependent-Type Systems via Explicit Substitutions

    DTIC Science & Technology

    1999-11-01

    oriented functional language OCaml, in about 50 lines. We have also implemented a higher-order unification algorithm for ground expressions. The soundness... OCaml, and it is electronically available by contacting the author. The underlying theory of the method proposed here is the An^-calculus. We believe... CORNES, Design of a high-level language for representing proofs: induction by pattern matching, unification in the presence of types

  9. Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2

    NASA Technical Reports Server (NTRS)

    Lea, Robert N. (Editor); Villarreal, James A. (Editor)

    1991-01-01

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.

  10. MEASURING OF PROTEIN SYNTHESIS USING METABOLIC 2H-LABELING, HIGH-RESOLUTION MASS SPECTROMETRY AND AN ALGORITHM

    PubMed Central

    Kasumov, Takhar; Ilchenko, Sergey; Li, Ling; Rachdaoui, Nadia; Sadigov, Rovshan; Willard, Belinda; McCullough, Arthur J.; Previs, Stephen

    2013-01-01

    We recently developed a method for estimating protein dynamics in vivo with 2H2O using MALDI-TOF MS (Rachdaoui N. et al., MCP, 8, 2653-2662, 2009), and we confirmed that the 2H-labeling of many hepatic free amino acids rapidly equilibrates with body water. Although this is a reliable method, it required modest sample purification and necessitated the determination of tissue-specific amino acid labeling. Another approach for quantifying protein kinetics is to measure the 2H-enrichments of body water (precursor) and of a protein-bound amino acid or proteolytic peptide (product) and to estimate how many copies of deuterium are incorporated into a product. In this study we have used nanospray LTQ-FTICR mass spectrometry to simultaneously measure the isotopic enrichment of peptides and protein-bound amino acids. A mathematical algorithm was developed to aid the data processing. The most notable improvement centers on the fact that the precursor:product labeling ratio can be obtained by measuring the labeling of water and of the protein(s) (or peptides) of interest, thereby minimizing the need to measure the amino acid labeling. As a proof of principle, we demonstrate that this approach can detect the effect of nutritional status on albumin synthesis in rats given 2H2O. PMID:21256107

  11. Analysis and synthesis of intonation using the Tilt model.

    PubMed

    Taylor, P

    2000-03-01

    This paper introduces the Tilt intonational model and describes how this model can be used to automatically analyze and synthesize intonation. In the model, intonation is represented as a linear sequence of events, which can be pitch accents or boundary tones. Each event is characterized by continuous parameters representing amplitude, duration, and tilt (a measure of the shape of the event). The paper describes an event detector, in effect an intonational recognition system, which produces a transcription of an utterance's intonation. The features and parameters of the event detector are discussed and performance figures are shown on a variety of read and spontaneous speaker independent conversational speech databases. Given the event locations, algorithms are described which produce an automatic analysis of each event in terms of the Tilt parameters. Synthesis algorithms are also presented which generate F0 contours from Tilt representations. The accuracy of these is shown by comparing synthetic F0 contours to real F0 contours. The paper concludes with an extensive discussion on linguistic representations of intonation and gives evidence that the Tilt model goes a long way to satisfying the desired goals of such a representation in that it has the right number of degrees of freedom to be able to describe and synthesize intonation accurately.
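
    As a concrete illustration, the Tilt parameterisation of a single event is usually stated as below; this is a hedged sketch from that description, with the example rise/fall values chosen arbitrarily.

        # Collapse an event's rise/fall shape into amplitude, duration and a
        # single "tilt" value in [-1, +1] (+1 pure rise, -1 pure fall).
        def tilt_parameters(a_rise, a_fall, d_rise, d_fall):
            amp = abs(a_rise) + abs(a_fall)            # event amplitude (Hz)
            dur = d_rise + d_fall                      # event duration (s)
            tilt_amp = (abs(a_rise) - abs(a_fall)) / amp if amp else 0.0
            tilt_dur = (d_rise - d_fall) / dur if dur else 0.0
            return amp, dur, 0.5 * (tilt_amp + tilt_dur)

        # A pitch accent rising 30 Hz over 120 ms, then falling 10 Hz over 80 ms:
        print(tilt_parameters(30.0, -10.0, 0.12, 0.08))   # -> (40.0, 0.2, 0.35)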

  12. Delay test generation for synchronous sequential circuits

    NASA Astrophysics Data System (ADS)

    Devadas, Srinivas

    1989-05-01

    We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and the delay testability of a sequential machine, and we describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan and synthesis algorithms are presented.

  13. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

    We propose an efficient partial transmit sequence (PTS) technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). Based on an analysis of the hill-climbing algorithm's pros and cons, we propose the POA, with excellent local search ability, to further process the signals whose PAPR is still over the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the PTS technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and the PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS and SFLAHC-PTS techniques.
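
    The core PTS mechanics that the GA and POA search over can be sketched in a few lines of NumPy: partition the subcarriers into sub-blocks, weight each sub-block's time-domain signal by a candidate phase factor, and keep the combination with the lowest PAPR. Everything below (sizes, QPSK mapping, exhaustive search in place of GA/POA) is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(0)
        N, V = 64, 4                                      # subcarriers, sub-blocks
        X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)     # QPSK symbols

        blocks = np.zeros((V, N), complex)
        for v in range(V):
            blocks[v, v::V] = X[v::V]                     # interleaved partition
        x_v = np.fft.ifft(blocks, axis=1)                 # per-block time signals

        def papr_db(x):
            return 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

        phases = [1, -1, 1j, -1j]
        best = min((papr_db(np.array([phases[i] for i in idx]) @ x_v), idx)
                   for idx in np.ndindex(*(len(phases),) * V))
        print(f"plain PAPR {papr_db(np.fft.ifft(X)):.2f} dB, "
              f"PTS PAPR {best[0]:.2f} dB")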

  14. On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh

    2014-07-01

    We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing the maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a test set comprised of a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
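
    For contrast with the parallel push-relabel codes studied here, the serial augmenting-path baseline they are compared against fits in a few lines; this is a generic textbook sketch (Kuhn's algorithm), not the authors' implementation.

        # adj maps each left vertex to the right vertices it may match.
        def max_bipartite_matching(adj, n_right):
            match_r = [-1] * n_right        # right vertex -> matched left vertex

            def try_augment(u, seen):
                for v in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        if match_r[v] == -1 or try_augment(match_r[v], seen):
                            match_r[v] = u
                            return True
                return False

            size = 0
            for u in range(len(adj)):
                if try_augment(u, [False] * n_right):
                    size += 1
            return size, match_r

        adj = {0: [0, 1], 1: [0], 2: [1, 2]}    # toy bipartite graph
        print(max_bipartite_matching(adj, 3))   # -> (3, [1, 0, 2])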

  15. An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.

    PubMed

    Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin

    2016-12-01

    Research on multiobjective optimization problems has become one of the most active topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper, learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, comparison with several existing well-known algorithms (nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and the multiobjective immune algorithm with nondominated neighbor-based selection) on 15 multiobjective benchmark problems shows that the proposed algorithm is able to find more accurate and more evenly distributed Pareto-optimal fronts than the compared ones.

  16. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    NASA Astrophysics Data System (ADS)

    Staley, T. D.; Anderson, G. E.

    2015-11-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
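
    The pexpect pattern that drive-ami and drive-casa rely on is simple to demonstrate; the sketch below drives an interactive Python interpreter as a stand-in for the CASA shell (the prompt string and commands are illustrative).

        import sys
        import pexpect

        child = pexpect.spawn(sys.executable, ["-i", "-q"],
                              encoding="utf-8", timeout=10)
        child.expect(">>> ")          # wait for the interpreter's prompt
        child.sendline("6 * 7")       # send a command, as if to CASA
        child.expect(">>> ")          # block until the next prompt appears
        print(child.before.strip())   # everything printed in between ("42")
        child.sendline("exit()")
        child.close()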

  17. A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem

    NASA Astrophysics Data System (ADS)

    Jäger, Gerold; Zhang, Weixiong

    The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.

  18. Assessment of cardiac proteome dynamics with heavy water: slower protein synthesis rates in interfibrillar than subsarcolemmal mitochondria

    PubMed Central

    Dabkowski, Erinne R.; Shekar, Kadambari Chandra; Li, Ling; Ribeiro, Rogerio F.; Walsh, Kenneth; Previs, Stephen F.; Sadygov, Rovshan G.; Willard, Belinda; Stanley, William C.

    2013-01-01

    Traditional proteomics provides static assessment of protein content, but not synthetic rates. Recently, proteome dynamics with heavy water (2H2O) was introduced, where 2H labels amino acids that are incorporated into proteins, and the synthesis rate of individual proteins is calculated using mass isotopomer distribution analysis. We refine this approach with a novel algorithm and rigorous selection criteria that improve the accuracy and precision of the calculation of synthesis rates and use it to measure protein kinetics in spatially distinct cardiac mitochondrial subpopulations. Subsarcolemmal mitochondria (SSM) and interfibrillar mitochondria (IFM) were isolated from adult rats, which were given 2H2O in the drinking water for up to 60 days. Plasma 2H2O and myocardial 2H-enrichment of amino acids were stable throughout the experimental protocol. Multiple tryptic peptides were identified from 28 proteins in both SSM and IFM and showed a time-dependent increase in heavy mass isotopomers that was consistent within a given protein. Mitochondrial protein synthesis was relatively slow (average half-life of 30 days, 2.4% per day). Although the synthesis rates for individual proteins were correlated between IFM and SSM (R2 = 0.84; P < 0.0001), values in IFM were 15% less than SSM (P < 0.001). In conclusion, administration of 2H2O results in stable enrichment of the cardiac precursor amino acid pool, with the use of refined analytical and computational methods coupled with cell fractionation one can measure synthesis rates for cardiac proteins in subcellular compartments in vivo, and protein synthesis is slower in mitochondria located among the myofibrils than in the subsarcolemmal region. PMID:23457012

  19. A PLM-based automated inspection planning system for coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Zhao, Haibin; Wang, Junying; Wang, Boxiong; Wang, Jianmei; Chen, Huacheng

    2006-11-01

    With the rapid progress of Product Lifecycle Management (PLM) in the manufacturing industry, automatic generation of product inspection plans and their integration with other activities in the product lifecycle play important roles in quality control, but the techniques for these purposes lag behind CAD/CAM techniques. Therefore, an automatic inspection planning system for Coordinate Measuring Machines (CMMs) was developed to improve the automation of measuring, based on the integration of the inspection system in PLM. Feature information representation is achieved based on a central PLM database; the measuring strategy is optimized through the integration of multiple sensors; a reasonable number and distribution of inspection points are calculated and designed under the guidance of statistical theory and a synthesis distribution algorithm; and a collision avoidance method is proposed to generate collision-free inspection paths with high efficiency. Information mapping is performed between Neutral Interchange Files (NIFs), such as STEP, DML, DMIS and XML, to realize information integration with other activities in the product lifecycle such as design, manufacturing and inspection execution. A simulation was carried out to demonstrate the feasibility of the proposed system. As a result, the inspection process becomes simpler, and good results can be obtained based on the integration in PLM.

  20. Automated recognition of helium speech. Phase I: Investigation of microprocessor based analysis/synthesis system

    NASA Astrophysics Data System (ADS)

    Jelinek, H. J.

    1986-01-01

    This is the Final Report of Electronic Design Associates on its Phase I SBIR project. The purpose of this project is to develop a method for correcting helium speech, as experienced in diver-to-surface communication. The goal of the Phase I study was to design, prototype, and evaluate a real-time helium speech corrector system based upon digital signal processing techniques. The general approach was to develop hardware (an IBM PC board) to digitize helium speech and software (a LAMBDA computer-based simulation) to translate the speech. As planned in the study proposal, this initial prototype may now be used to assess the expected performance of a self-contained real-time system that uses an identical algorithm. The Final Report details the work carried out to produce the prototype system. The four major project tasks were as follows: a signal processing scheme for converting helium speech to normal-sounding speech was devised; the scheme was simulated on a general-purpose (LAMBDA) computer, with actual helium speech supplied to the simulation and the converted speech generated; an IBM PC-based 14-bit data input/output board was designed and built; and a bibliography of references on speech processing was compiled.

  1. Fabrication of digital rainbow holograms and 3-D imaging using SEM based e-beam lithography.

    PubMed

    Firsov, An; Firsov, A; Loechel, B; Erko, A; Svintsov, A; Zaitsev, S

    2014-11-17

    Here we present an approach for creating full-color digital rainbow holograms based on mixing three basic colors. Much as in a color TV with three luminescent points per screen pixel, each color pixel of the initial image is represented by three (R, G, B) distinct diffractive gratings in the hologram structure. Changes in either the duty cycle or the area of the gratings are used to provide the proper R, G, B intensities. Special algorithms allow one to design rather complicated 3D images (which may even replace each other as the hologram is rotated). The software developed ("RainBow") stabilizes the colorization of the rotated image by equalizing the angular blur from the gratings responsible for the R, G, B basic colors. The approach based on R, G, B color synthesis allows one to fabricate a gray-tone rainbow hologram containing white color, which is hardly possible in traditional dot-matrix technology. Low-budget electron beam lithography based on an SEM column was used to fabricate practical examples of digital rainbow holograms. The results of the fabrication of large rainbow holograms, from design to imprinting, are presented, and the advantages of EBL in comparison to traditional optical (dot-matrix) technology are considered.

  2. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking †

    PubMed Central

    Kiku, Daisuke; Okutomi, Masatoshi

    2017-01-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407

  3. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    PubMed

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  4. Diversity-Oriented Synthesis as a Strategy for Fragment Evolution against GSK3β.

    PubMed

    Wang, Yikai; Wach, Jean-Yves; Sheehan, Patrick; Zhong, Cheng; Zhan, Chenyang; Harris, Richard; Almo, Steven C; Bishop, Joshua; Haggarty, Stephen J; Ramek, Alexander; Berry, Kayla N; O'Herin, Conor; Koehler, Angela N; Hung, Alvin W; Young, Damian W

    2016-09-08

    Traditional fragment-based drug discovery (FBDD) relies heavily on structural analysis of the hits bound to their targets. Herein, we present a complementary approach based on diversity-oriented synthesis (DOS). A DOS-based fragment collection was able to produce initial hit compounds against the target GSK3β, allow the systematic synthesis of related fragment analogues to explore fragment-level structure-activity relationship, and finally lead to the synthesis of a more potent compound.

  5. Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun

    2017-02-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids local optima, thereby enabling success probabilities to approach one.
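
    For orientation, the iteration/success-probability balance being tuned can be seen from the standard Grover relation (this sketch uses the textbook fixed-iteration formula, not the paper's adjustable-phase variant): with a fraction lam of marked items, the success probability after k iterations is sin^2((2k+1)*asin(sqrt(lam))).

        import math

        def grover_success(lam, k):
            theta = math.asin(math.sqrt(lam))
            return math.sin((2 * k + 1) * theta) ** 2

        lam = 1 / 64                     # one marked item among 64
        best_k = max(range(20), key=lambda k: grover_success(lam, k))
        print(best_k, round(grover_success(lam, best_k), 4))   # -> 6 0.9966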

  6. A comparison of two adaptive algorithms for the control of active engine mounts

    NASA Astrophysics Data System (ADS)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
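
    The FXLMS benchmark is standard enough to sketch compactly: the reference is filtered through a model of the secondary (cancellation) path before driving the weight update, which is exactly the modelling burden Er-MCSI avoids. The plant numbers below are invented for illustration.

        import numpy as np

        fs, n = 1000, 4000
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 25 * t)             # engine-order reference
        s = np.array([0.8, 0.3])                   # assumed secondary path
        s_hat = s.copy()                           # its identified model
        d = np.convolve(x, [0.5, -0.4], "same")    # disturbance at error mic

        L, mu = 8, 0.01
        w = np.zeros(L)                            # adaptive FIR weights
        xf = np.convolve(x, s_hat)[:n]             # filtered-x signal
        e = np.zeros(n)
        y_buf = np.zeros(len(s))
        for i in range(L, n):
            y = w @ x[i - L + 1:i + 1][::-1]       # actuator drive sample
            y_buf = np.roll(y_buf, 1)
            y_buf[0] = y
            e[i] = d[i] + s @ y_buf                # residual at error sensor
            w -= mu * e[i] * xf[i - L + 1:i + 1][::-1]   # FXLMS update
        print(f"error power, first second: {np.mean(e[:fs]**2):.4f}, "
              f"last second: {np.mean(e[-fs:]**2):.4f}")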

  7. Cryptanalysis of "an improvement over an image encryption method based on total shuffling"

    NASA Astrophysics Data System (ADS)

    Akhavan, A.; Samsudin, A.; Akhshani, A.

    2015-09-01

    In the past two decades, several image encryption algorithms based on chaotic systems have been proposed, many of them meant to improve other chaos-based and conventional cryptographic algorithms. However, many of the proposed improvement methods suffer from serious security problems. In this paper, the security of a recently proposed improvement method for a chaos-based image encryption algorithm is analyzed. The results indicate the weakness of the analyzed algorithm against chosen plain-text attacks.

  8. Dual methods and approximation concepts in structural synthesis

    NASA Technical Reports Server (NTRS)

    Fleury, C.; Schmit, L. A., Jr.

    1980-01-01

    Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.

  9. Optimal cooperative control synthesis of active displays

    NASA Technical Reports Server (NTRS)

    Garg, S.; Schmidt, D. K.

    1985-01-01

    A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s(2) plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.

  10. ACCESS 3. Approximation concepts code for efficient structural synthesis: User's guide

    NASA Technical Reports Server (NTRS)

    Fleury, C.; Schmit, L. A., Jr.

    1980-01-01

    A user's guide is presented for ACCESS-3, a research oriented program which combines dual methods and a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and dual algorithms of mathematical programming are applied in the design optimization procedure. This program retains all of the ACCESS-2 capabilities and the data preparation formats are fully compatible. Four distinct optimizer options were added: interior point penalty function method (NEWSUMT); second order primal projection method (PRIMAL2); second order Newton-type dual method (DUAL2); and first order gradient projection-type dual method (DUAL1). A pure discrete and mixed continuous-discrete design variable capability, and zero order approximation of the stress constraints are also included.

  11. Objective speech quality assessment and the RPE-LTP coding algorithm in different noise and language conditions.

    PubMed

    Hansen, J H; Nandkumar, S

    1995-01-01

    The formulation of reliable signal processing algorithms for speech coding and synthesis requires the selection of a prior criterion of performance. Though coding efficiency (bits/second) or computational requirements can be used, a final performance measure must always include speech quality. In this paper, three objective speech quality measures are considered with respect to quality assessment for American English, noisy American English, and noise-free versions of seven languages. The purpose is to determine whether objective quality measures can be used to quantify changes in quality for a given voice coding method, with a known subjective performance level, as background noise or language conditions are changed. The speech coding algorithm chosen is regular-pulse excitation with long-term prediction (RPE-LTP), which has been chosen as the standard voice compression algorithm for the European Digital Mobile Radio system. Three areas are considered for objective quality assessment: (i) vocoder performance for American English in a noise-free environment, (ii) speech quality variation for three additive background noise sources, and (iii) noise-free performance for seven languages, namely English, Japanese, Finnish, German, Hindi, Spanish, and French. It is suggested that although existing objective quality measures will never replace subjective testing, they can be a useful means of assessing changes in performance, identifying areas for improvement in algorithm design, and augmenting subjective quality tests for voice coding/compression algorithms in noise-free, noisy, and/or non-English applications.

  12. Parameterization of typhoon-induced ocean cooling using temperature equation and machine learning algorithms: an example of typhoon Soulik (2013)

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Jiang, Guo-Qing; Liu, Xin

    2017-09-01

    This study proposes three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Different from traditional data assimilation approaches, which provide prescribed initial/boundary conditions, our proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and the ocean at future times. Two of these algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative machine learning technique (ML-based). The algorithms are then implemented into a Weather Research and Forecasting model for the simulation of a typhoon to assess their effectiveness, and the results show significant improvement in simulated storm intensities when ocean cooling feedback is included. The TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus obtains a synoptic and relatively smooth sea surface temperature cooling. The TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure in terms of its amplitude and position. Sensitivity analysis indicated that typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron sizes, the ML-based algorithm appears to be more efficient at predicting typhoon-induced ocean cooling and typhoon intensity than the algorithms based on linear regression methods.

  13. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first shuffle the IP core assignment for each TAM to produce a new solution for SA, allocate the width for each TAM using a greedy algorithm, and calculate the corresponding testing time; the core assignment is then accepted according to the simulated annealing criterion, finally attaining the optimum solution. We ran test scheduling experiments with the international reference circuits provided by the International Test Conference 2002 (ITC’02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is currently state-of-the-art.
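
    The acceptance/cooling skeleton described above is generic simulated annealing; the sketch below uses a toy cost (makespan of hypothetical core test times packed onto TAMs) in place of the paper's greedy TAM-width allocation and the ITC'02 circuits.

        import math, random

        random.seed(0)
        test_time = [9, 7, 6, 5, 4, 3, 2]     # hypothetical per-core times
        n_tam = 3

        def cost(assign):                     # makespan over the TAMs
            loads = [0] * n_tam
            for core, tam in enumerate(assign):
                loads[tam] += test_time[core]
            return max(loads)

        assign = [random.randrange(n_tam) for _ in test_time]
        best = (cost(assign), assign[:])
        T, alpha = 10.0, 0.95
        while T > 0.01:
            cand = assign[:]
            cand[random.randrange(len(cand))] = random.randrange(n_tam)
            delta = cost(cand) - cost(assign)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                assign = cand                 # accept (SA criterion)
                if cost(assign) < best[0]:
                    best = (cost(assign), assign[:])
            T *= alpha                        # cool down
        print(best)                           # (makespan, core-to-TAM map)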

  14. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm has no algorithm-specific parameters. In this paper, the big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are considered as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO is more accurate in estimating the filter parameters than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is thus suited to applications where accuracy is more essential than convergence speed.
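
    Since TLBO needs no algorithm-specific parameters, its two phases are easy to sketch; the stand-in objective below is the sphere function rather than the paper's IIR-filter error surface, and all sizes are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        f = lambda x: np.sum(x**2, axis=-1)          # objective to minimise
        pop, dim, iters = 20, 5, 200
        X = rng.uniform(-5, 5, (pop, dim))

        for _ in range(iters):
            # Teacher phase: move the class toward the best learner.
            teacher = X[np.argmin(f(X))]
            TF = rng.integers(1, 3)                  # teaching factor in {1, 2}
            Xnew = X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0))
            keep = f(Xnew) < f(X)
            X[keep] = Xnew[keep]                     # greedy acceptance
            # Learner phase: each learner interacts with a random peer.
            peers = rng.permutation(pop)
            toward = f(X[peers]) < f(X)
            step = np.where(toward[:, None], X[peers] - X, X - X[peers])
            Xnew = X + rng.random((pop, dim)) * step
            keep = f(Xnew) < f(X)
            X[keep] = Xnew[keep]

        print(f"best objective value: {f(X).min():.3e}")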

  15. Estimated generic prices for novel treatments for drug-resistant tuberculosis.

    PubMed

    Gotham, Dzintars; Fortunak, Joseph; Pozniak, Anton; Khoo, Saye; Cooke, Graham; Nytko, Frederick E; Hill, Andrew

    2017-04-01

    The estimated worldwide annual incidence of MDR-TB is 480 000, representing 5% of TB incidence but 20% of mortality. Multiple drugs have recently been developed or repurposed for the treatment of MDR-TB, and treatment for MDR-TB currently costs thousands of dollars per course. Our objective was to estimate generic prices for novel TB drugs that would be achievable given large-scale competitive manufacture. Prices for linezolid, moxifloxacin and clofazimine were estimated based on per-kilogram prices of the active pharmaceutical ingredient (API). Other costs were added, including formulation, packaging and a profit margin. The projected costs for sutezolid were estimated to be equivalent to those for linezolid, based on chemical similarity. Generic prices for bedaquiline, delamanid and pretomanid were estimated by assessing routes of synthesis, per-kilogram costs of chemical reagents and per-step yields. Costing algorithms reflected variable regulatory requirements and efficiency of scale based on demand, and were validated by testing their predictive ability against widely available TB medicines. Estimated generic prices were US$8-$17/month for bedaquiline, $5-$16/month for delamanid, $11-$34/month for pretomanid, $4-$9/month for linezolid, $4-$9/month for sutezolid, $4-$11/month for clofazimine and $4-$8/month for moxifloxacin. The estimated generic prices were 87%-94% lower than the current lowest available prices for bedaquiline, 95%-98% for delamanid and 94%-97% for linezolid. Estimated generic prices were $168-$395 per course for the STREAM trial modified Bangladesh regimens (current costs $734-$1799), $53-$276 for pretomanid-based three-drug regimens and $238-$507 for a delamanid-based four-drug regimen. Competitive large-scale generic manufacture could allow supplies of treatment for 5-10 times more MDR-TB cases within current procurement budgets. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
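
    The per-course arithmetic behind such estimates is straightforward to sketch; every number below is a hypothetical placeholder, not an input or result from this study.

        # Hypothetical inputs: API price, daily dose, and a combined
        # formulation/packaging/profit overhead fraction.
        api_usd_per_kg = 400.0
        daily_dose_mg = 600.0
        overhead = 0.65

        # kg of API per month = dose(mg) * 30 days / 1e6 mg-per-kg.
        api_usd_month = api_usd_per_kg * daily_dose_mg * 30 / 1e6
        price_month = api_usd_month * (1 + overhead)
        print(f"estimated generic price: ${price_month:.2f}/month")   # $11.88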

  16. Diversity-Oriented Synthesis as a Strategy for Fragment Evolution against GSK3β

    PubMed Central

    2016-01-01

    Traditional fragment-based drug discovery (FBDD) relies heavily on structural analysis of the hits bound to their targets. Herein, we present a complementary approach based on diversity-oriented synthesis (DOS). A DOS-based fragment collection was able to produce initial hit compounds against the target GSK3β, allow the systematic synthesis of related fragment analogues to explore fragment-level structure–activity relationship, and finally lead to the synthesis of a more potent compound. PMID:27660690

  17. Research on compressive sensing reconstruction algorithm based on total variation model

    NASA Astrophysics Data System (ADS)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for performing compressive sampling of image signals. In imaging procedures based on compressed sensing theory, not only can the storage space be reduced, but the demand on detector resolution can also be greatly reduced. By using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and recovers edge information well. To verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes to verify the stability of the algorithm, and we compare typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is found by the alternating direction method. Experimental results show that this reconstruction algorithm has great advantages over the traditional classical TV-based algorithms: at low measurement rates, the target image can be recovered quickly and accurately.
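
    A hedged sketch of TV-regularised recovery is given below; for brevity it minimises the least-squares term plus a smoothed TV penalty by plain gradient descent, whereas the paper adds an augmented Lagrangian term and solves by the alternating direction method. All sizes and constants are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, lam, eps = 64, 32, 0.1, 1e-2
        x_true = np.zeros(n)
        x_true[20:40] = 1.0                          # piecewise-constant signal
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x_true                               # compressive measurements

        def tv_grad(x):                              # gradient of smoothed TV
            d = np.diff(x)
            w = d / np.sqrt(d * d + eps)
            g = np.zeros_like(x)
            g[:-1] -= w
            g[1:] += w
            return g

        x = np.zeros(n)
        for _ in range(3000):
            x -= 0.05 * (A.T @ (A @ x - y) + lam * tv_grad(x))
        err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        print(f"relative reconstruction error: {err:.3f}")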

  18. Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Ren, Shengwei; Zhang, Li; Zhang, Shibing

    2016-10-01

    Cognitive radio networks have wide applications in smart homes, personal communications and other wireless communications, and spectrum sensing is the main challenge in cognitive radio. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on the detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratios. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
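
    One plausible form of such a statistic (an assumption for illustration, not necessarily the paper's exact definition) combines the received energy with the magnitudes of the first few autocorrelation lags, so that correlated primary-user signals stand out against white noise:

        import numpy as np

        rng = np.random.default_rng(4)

        def statistic(r, K=4):
            r = r - r.mean()
            acf = [abs(np.vdot(r[:-k], r[k:])) for k in range(1, K + 1)]
            return (np.sum(np.abs(r)**2) + 2 * np.sum(acf)) / len(r)

        n, snr_db = 2048, -8
        noise = rng.standard_normal(n)
        tone = np.sqrt(2 * 10**(snr_db / 10)) * np.cos(0.05 * np.pi *
                                                       np.arange(n))
        print(f"H0 (noise only): {statistic(noise):.3f}")
        print(f"H1 (signal present): {statistic(noise + tone):.3f}")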

  19. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

  20. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and deteriorate communication performance heavily. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.

  1. Synthesis of 2D Metal Chalcogenide Thin Films through the Process Involving Solution-Phase Deposition.

    PubMed

    Giri, Anupam; Park, Gyeongbae; Yang, Heeseung; Pal, Monalisa; Kwak, Junghyeok; Jeong, Unyong

    2018-04-24

    2D metal chalcogenide thin films have recently attracted considerable attention owing to their unique physicochemical properties and great potential in a variety of applications. Synthesis of large-area 2D metal chalcogenide thin films in controllable ways remains a key challenge in this research field. Recently, the solution-based synthesis of 2D metal chalcogenide thin films has emerged as an alternative approach to vacuum-based synthesis because it is relatively simple and easy to scale up for high-throughput production. In addition, solution-based thin films open new opportunities that cannot be achieved from vacuum-based thin films. Here, a comprehensive summary regarding the basic structures and properties of different types of 2D metal chalcogenides, the mechanistic details of the chemical reactions in the synthesis of the metal chalcogenide thin films, recent successes in the synthesis by different reaction approaches, and the applications and potential uses is provided. In the last perspective section, the technical challenges to be overcome and the future research directions in the solution-based synthesis of 2D metal chalcogenides are discussed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. A Collaborative Recommend Algorithm Based on Bipartite Community

    PubMed Central

    Fu, Yuchen; Liu, Quan; Cui, Zhiming

    2014-01-01

    The recommendation algorithm based on a bipartite network is superior to traditional methods in accuracy and diversity, which proves that considering the network topology of recommendation systems can help improve recommendation results. However, existing algorithms mainly focus on the overall topological structure, while local characteristics can also play an important role in collaborative recommendation processing. Therefore, on account of the data characteristics and application requirements of collaborative recommendation systems, we propose a link community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on the bipartite community. We then designed numerical experiments to verify the validity of the algorithms on benchmark and real-world databases. PMID:24955393
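
    The bipartite-network baseline such methods build on is mass diffusion ("ProbS"), which is compact enough to sketch; the toy rating matrix and names below are illustrative only.

        import numpy as np

        # R[u, i] = 1 if user u collected item i (toy data).
        R = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], float)
        k_item = R.sum(axis=0)                   # item degrees
        k_user = R.sum(axis=1)                   # user degrees

        # Resource flows item -> user -> item, split evenly at each hop.
        W = (R / k_item).T @ (R / k_user[:, None])
        scores = R @ W                           # diffused resource per item
        scores[R > 0] = -np.inf                  # skip already-collected items
        print(np.argmax(scores, axis=1))         # top pick per user -> [2 1 0]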

  3. Abundance of enterovirus C in RD-L20B cell culture-negative stool samples from acute flaccid paralysis cases in Nigeria is geographically defined.

    PubMed

    Donbraye, Emmanuel; Olasunkanmi, Oluwatayo Israel; Opabode, Babatunde Ayoola; Ishola, Temitayo Rachael; Faleye, Temitope Oluwasegun Cephas; Adewumi, Olubusuyi Moses; Adeniji, Johnson Adekunle

    2018-06-01

    We recently showed that enteroviruses (EVs), and enterovirus species C (EV-C) in particular, were abundant in faecal samples from children who had been diagnosed with acute flaccid paralysis (AFP) in Nigeria but declared to be EV-free by the RD-L20B cell culture-based algorithm. In this study, we investigated whether this observed preponderance of EVs (and EV-Cs) in such samples varies by geographical region. One hundred and eight samples (i.e. 54 paired stool suspensions from 54 AFP cases) that had previously been confirmed to be negative for EVs by the WHO-recommended RD-L20B cell culture-based algorithm were analysed. The 108 samples were made into 54 pools (27 each from North-West and South-South Nigeria). All were subjected to RNA extraction, cDNA synthesis and the WHO-recommended semi-nested PCR assay and its modifications. All of the amplicons were sequenced, and the enteroviruses identified, using the enterovirus genotyping tool and phylogenetic analysis. EVs were detected in 16 (29.63 %) of the 54 samples that were screened and successfully identified in 14 (25.93 %). Of these, 10 were from North-West and 4 were from South-South Nigeria. One (7.14 %), 2 (14.29 %) and 11 (78.57 %) of the strains detected were EV-A, EV-B and EV-C, respectively. The 10 strains from North-West Nigeria included 7 EV types, namely CV-A10, E29, CV-A13, CV-A17, CV-A19, CV-A24 and EV-C99. The four EV types recovered from South-South Nigeria were E31, CV-A1, EV-C99 and EV-C116. The results of this study showed that the presence of EVs and consequently EV-Cs in AFP samples declared to be EV-free by the RD-L20B cell culture-based algorithm varies by geographical region in Nigeria.

  4. A Cancer Gene Selection Algorithm Based on the K-S Test and CFS.

    PubMed

    Su, Qiang; Wang, Yina; Jiang, Xiaobing; Chen, Fuxue; Lu, Wen-Cong

    2017-01-01

    To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test and then uses CFS to select genes from those selected by the K-S test. We adopted support vector machines (SVM) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of the aforementioned gene selection algorithms on 5 gene expression datasets demonstrate that, based on accuracy, the performance of the new K-S and CFS-based algorithm is better than those of the K-S test, CFS, mRMR, and ReliefF algorithms. The experimental results show that the K-S test-CFS gene selection algorithm is a very effective and promising approach compared to the K-S test, CFS, mRMR, and ReliefF algorithms.
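
    A compact sketch of the two-stage pipeline on synthetic data is given below; the thresholds, sizes and greedy search are illustrative assumptions, with the CFS merit taken in its usual form k*r_cf / sqrt(k + k*(k-1)*r_ff).

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(5)
        X = rng.standard_normal((60, 200))       # 60 samples x 200 genes
        y = np.repeat([0, 1], 30)
        X[y == 1, :5] += 1.5                     # 5 genuinely informative genes

        # Stage 1: K-S test keeps genes whose class distributions differ.
        keep = [g for g in range(X.shape[1])
                if ks_2samp(X[y == 0, g], X[y == 1, g]).pvalue < 0.01]

        # Stage 2: greedy forward selection by the CFS merit.
        def merit(sub):
            r_cf = np.mean([abs(np.corrcoef(X[:, g], y)[0, 1]) for g in sub])
            k = len(sub)
            if k == 1:
                return r_cf
            r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                            for i, a in enumerate(sub) for b in sub[i + 1:]])
            return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

        selected = []
        while True:
            cands = [g for g in keep if g not in selected]
            if not cands:
                break
            best = max(cands, key=lambda g: merit(selected + [g]))
            if selected and merit(selected + [best]) <= merit(selected):
                break
            selected.append(best)
        print(sorted(selected))   # mostly the informative genes 0..4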

  5. Synthesis of Ag and Au nanoparticles embedded in carbon film: Optical, crystalline and topography analysis

    NASA Astrophysics Data System (ADS)

    Gholamali, Hediyeh; Shafiekhani, Azizollah; Darabi, Elham; Elahi, Seyed Mohammad

    2018-03-01

    Atomic force microscopy (AFM) images give valuable information about the surface roughness of thin films, based on the results of power spectral density (PSD) analysis through fast Fourier transform (FFT) algorithms. In the present work, AFM data are studied for silver and gold nanoparticles embedded in amorphous hydrogenated carbon films (Ag NPs a-C:H and Au NPs a-C:H) co-deposited on glass substrates via RF-sputtering and RF-plasma-enhanced chemical vapor deposition. The working gas is acetylene and the targets are Ag and Au. While deposition time and power are held constant, the only variable parameter in this study is the initial pressure. In addition, the crystalline structures of Ag NPs a-C:H and Au NPs a-C:H are studied using X-ray diffraction (XRD), and UV-visible spectrophotometry is used to investigate the optical properties and localized surface plasmon resonance (LSPR) of the samples.
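
    The PSD computation referred to above reduces to a 2-D FFT of the height map followed by radial averaging; the sketch below runs on a synthetic correlated surface standing in for the AFM scan, and the pixel size is an arbitrary assumption.

        import numpy as np

        rng = np.random.default_rng(6)
        n, pixel = 256, 2e-9                      # 256x256 map, 2 nm/pixel
        z = rng.standard_normal((n, n))           # synthetic rough surface
        kern = np.exp(-np.linspace(-3, 3, 15)**2)
        for axis in (0, 1):                       # smooth -> finite correlation
            z = np.apply_along_axis(
                lambda r: np.convolve(r, kern, "same"), axis, z)

        # 2-D power spectrum (up to normalisation), then radial average.
        psd2 = np.abs(np.fft.fftshift(np.fft.fft2(z)))**2 * pixel**2 / n**2
        fy, fx = np.indices((n, n)) - n // 2
        rbin = np.hypot(fx, fy).astype(int)       # radial frequency bin
        counts = np.bincount(rbin.ravel())
        psd1 = np.bincount(rbin.ravel(), psd2.ravel()) / np.maximum(counts, 1)
        print(psd1[:8])                           # isotropic PSD per radial bin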

  6. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

    The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.

  7. Structural evolution and properties of small-size thiol-protected gold nanoclusters

    NASA Astrophysics Data System (ADS)

    Ma, Miaomiao; Liu, Liren; Zhu, Hengjiang; Lu, Junzhe; Tan, Guiping

    2018-07-01

    Ligand-protected gold clusters are widely used in biosensors and catalysis, and understanding the structural evolution of such nanoclusters is important for experimental synthesis. Herein, based on the particle swarm optimisation algorithm and the density functional theory method, we use [Au1(SH)2]n, [Au2(SH)3]n and [Au3(SH)4]n (n = 1-3) as basic units to investigate the structural evolution from building blocks to the final whole structures. Results show a 'line-ring-core' structural evolution pattern in the growth process of the nanoclusters. The core structures of the ligand-protected gold clusters consist of Au3, Au4, Au6 and Au7 atoms. Electronic and optical analysis shows that stability and optical properties are gradually enhanced with increasing size. These results can be used to understand the initial growth stage and to design new ligand-protected nanoclusters.
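
    The structure search rests on particle swarm optimisation; the sketch below shows a bare-bones PSO loop minimising a stand-in objective f(x) in place of a DFT total energy, so the hyperparameters and toy objective are assumptions for illustration only:

    ```python
    # Generic PSO loop of the kind used for structure searches (sketch).
    import numpy as np

    def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
        lo, hi = bounds
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n_particles, dim))   # positions (candidate structures)
        v = np.zeros_like(x)                          # velocities
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Toy "energy surface" in place of a DFT evaluation:
    # best_x, best_f = pso(lambda p: np.sum(p**2), dim=6)
    ```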

  8. Computer-aided system design

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  9. Hierarchical trie packet classification algorithm based on expectation-maximization clustering.

    PubMed

    Bi, Xia-An; Zhao, Junxia

    2017-01-01

    With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are in urgent need. Among existing approaches, algorithms based on the hierarchical trie have become an important research branch because of their wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, the packet classification problem is formalized by mapping the rules and data packets into a two-dimensional space. Secondly, the expectation-maximization algorithm is used to cluster the rules according to their aggregate characteristics, forming diversified clusters. Thirdly, a hierarchical trie is built on the results of the expectation-maximization clustering. Finally, both simulation and real-environment experiments are conducted to compare the performance of the algorithm with other typical algorithms, and the results are analyzed. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm.
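
    A rough sketch of the clustering step, assuming each rule's source and destination prefixes are mapped to a point in the unit square and clustered with a Gaussian-mixture EM implementation; the midpoint mapping and the cluster count are placeholders rather than the paper's formalisation:

    ```python
    # EM clustering of rules mapped into a 2-D space (illustrative sketch);
    # one trie would then be built per cluster.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def prefix_to_point(prefix, plen, bits=32):
        """Map a prefix (value of plen bits) to [0, 1) by its interval midpoint."""
        span = 2 ** (bits - plen)
        return (prefix * span + span / 2) / 2 ** bits

    def cluster_rules(rules, n_clusters=4):
        """rules: list of (src_prefix, src_len, dst_prefix, dst_len)."""
        pts = np.array([[prefix_to_point(s, sl), prefix_to_point(d, dl)]
                        for s, sl, d, dl in rules])
        labels = GaussianMixture(n_components=n_clusters,
                                 random_state=0).fit_predict(pts)
        clusters = {}
        for rule, lab in zip(rules, labels):
            clusters.setdefault(lab, []).append(rule)
        return clusters

    # clusters = cluster_rules([(0b1010, 4, 0b1100, 4), ...])
    ```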

  10. A synthesis of light absorption properties of the Pan-Arctic Ocean: application to semi-analytical estimates of dissolved organic carbon concentrations from space

    NASA Astrophysics Data System (ADS)

    Matsuoka, A.; Babin, M.; Doxaran, D.; Hooker, S. B.; Mitchell, B. G.; Bélanger, S.; Bricaud, A.

    2013-11-01

    The light absorption coefficients of particulate and dissolved materials are the main factors determining the light propagation of the visible part of the spectrum and are, thus, important for developing ocean color algorithms. While these absorption properties have recently been documented by a few studies for the Arctic Ocean (e.g., Matsuoka et al., 2007, 2011; Ben Mustapha et al., 2012), the datasets used in the literature were sparse and individually insufficient to draw a general view of the basin-wide spatial and temporal variations in absorption. To achieve such a task, we built a large absorption database at the pan-Arctic scale by pooling the majority of published datasets and merging new datasets. Our results showed that the total non-water absorption coefficients measured in the Eastern Arctic Ocean (EAO; Siberian side) are significantly higher than in the Western Arctic Ocean (WAO; North American side). This higher absorption is explained by higher concentration of colored dissolved organic matter (CDOM) in watersheds on the Siberian side, which contains a large amount of dissolved organic carbon (DOC) compared to waters off North America. In contrast, the relationship between the phytoplankton absorption (aφ(λ)) and chlorophyll a (chl a) concentration in the EAO was not significantly different from that in the WAO. Because our semi-analytical CDOM absorption algorithm is based on chl a-specific aφ(λ) values (Matsuoka et al., 2013), this result indirectly suggests that CDOM absorption can be appropriately derived not only for the WAO but also for the EAO using ocean color data. Derived CDOM absorption values were reasonable compared to in situ measurements. By combining this algorithm with empirical DOC vs. CDOM relationships, a semi-analytical algorithm for estimating DOC concentrations for coastal waters at the Pan-Arctic scale is presented and applied to satellite ocean color data.
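
    The two-step retrieval can be pictured as below; every coefficient in this sketch is an arbitrary placeholder rather than a published Matsuoka et al. value, and the semi-analytical inversion is reduced to a stub:

    ```python
    # Schematic of the two-step DOC retrieval: (1) estimate CDOM absorption
    # semi-analytically from ocean colour, (2) convert it to DOC with an
    # empirical linear DOC-CDOM relationship. All numbers are placeholders.
    def cdom_absorption(rrs, chl):
        """Placeholder semi-analytical step: a_cdom(443) [1/m] from
        remote-sensing reflectance and a chl-a-specific phytoplankton term."""
        a_phi = 0.05 * chl ** 0.8          # hypothetical chl-specific a_phi(443)
        a_nonwater = 0.1 * rrs ** -0.5     # hypothetical reflectance inversion
        return max(a_nonwater - a_phi, 0.0)

    def doc_from_cdom(a_cdom, slope=300.0, intercept=20.0):
        """Empirical DOC [umol/L] vs. CDOM absorption; slope and intercept
        are illustrative stand-ins for the regionally fitted relationships."""
        return slope * a_cdom + intercept

    # doc = doc_from_cdom(cdom_absorption(rrs=0.004, chl=0.5))
    ```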

  11. A Synthesis of Light Absorption Properties of the Arctic Ocean: Application to Semi-analytical Estimates of Dissolved Organic Carbon Concentrations from Space

    NASA Technical Reports Server (NTRS)

    Matsuoka, A.; Babin, M.; Doxaran, D.; Hooker, S. B.; Mitchell, B. G.; Belanger, S.; Bricaud, A.

    2014-01-01

    The light absorption coefficients of particulate and dissolved materials are the main factors determining the light propagation of the visible part of the spectrum and are, thus, important for developing ocean color algorithms. While these absorption properties have recently been documented by a few studies for the Arctic Ocean [e.g., Matsuoka et al., 2007, 2011; Ben Mustapha et al., 2012], the datasets used in the literature were sparse and individually insufficient to draw a general view of the basin-wide spatial and temporal variations in absorption. To achieve such a task, we built a large absorption database at the pan-Arctic scale by pooling the majority of published datasets and merging new datasets. Our results showed that the total non-water absorption coefficients measured in the Eastern Arctic Ocean (EAO; Siberian side) are significantly higher than in the Western Arctic Ocean (WAO; North American side). This higher absorption is explained by higher concentration of colored dissolved organic matter (CDOM) in watersheds on the Siberian side, which contains a large amount of dissolved organic carbon (DOC) compared to waters off North America. In contrast, the relationship between the phytoplankton absorption (aφ(λ)) and chlorophyll a (chl a) concentration in the EAO was not significantly different from that in the WAO. Because our semi-analytical CDOM absorption algorithm is based on chl a-specific aφ(λ) values [Matsuoka et al., 2013], this result indirectly suggests that CDOM absorption can be appropriately derived not only for the WAO but also for the EAO using ocean color data. Derived CDOM absorption values were reasonable compared to in situ measurements. By combining this algorithm with empirical DOC versus CDOM relationships, a semi-analytical algorithm for estimating DOC concentrations for coastal waters at the Pan-Arctic scale is presented and applied to satellite ocean color data.

  12. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised; the random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining population diversity, while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base for creating immigrants by mutation. This way, diversity is not only maintained but maintained in a way that adapts the genetic algorithm more efficiently to the current environment. Based on a series of systematically constructed dynamic problems, experiments compare genetic algorithms using the memory-based and elitism-based immigrants schemes against genetic algorithms using traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. A sensitivity analysis of the key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
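
    The elitism-based immigrants scheme can be sketched in a few lines, assuming a bit-string representation; the immigrant ratio and mutation rate are illustrative values, and the usual selection and crossover of the survivors are omitted:

    ```python
    # Elitism-based immigrants, per generation: mutate the previous elite to
    # create immigrants that replace the worst individuals (sketch).
    import random

    def mutate(bits, rate):
        return [b ^ (random.random() < rate) for b in bits]

    def next_generation(pop, fitness, imm_ratio=0.2, imm_mut=0.1):
        pop = sorted(pop, key=fitness, reverse=True)
        elite = pop[0]
        n_imm = int(len(pop) * imm_ratio)
        immigrants = [mutate(elite, imm_mut) for _ in range(n_imm)]
        survivors = pop[:len(pop) - n_imm]
        return survivors + immigrants  # selection/crossover of survivors omitted

    # Usage with a toy OneMax fitness in a (possibly changing) environment:
    # pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
    # for gen in range(100):
    #     pop = next_generation(pop, fitness=sum)
    ```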

  13. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to their projected values on the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector from each group. The LBG algorithm then refines a codebook initialised with the vectors supplied by the PCA step. The PCA performs an orthogonal transformation that converts a set of potentially correlated variables into a set of linearly uncorrelated variables. Because this orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms obtain better results than existing methods reported in the literature.
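
    A compact sketch of the PCA-LBG-Centroid variant, assuming quantile-based grouping along the first principal component and a fixed number of LBG refinement passes; both choices are illustrative rather than taken from the paper:

    ```python
    # PCA-seeded LBG codebook design (illustrative sketch).
    import numpy as np

    def pca_lbg_centroid(X, codebook_size, iters=50):
        # PCA step: project onto the leading principal component
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        proj = Xc @ vt[0]
        # Group vectors by projection quantiles; one centroid per group
        edges = np.quantile(proj, np.linspace(0, 1, codebook_size + 1))
        groups = np.clip(np.searchsorted(edges, proj) - 1, 0, codebook_size - 1)
        codebook = np.array([X[groups == g].mean(axis=0)
                             for g in range(codebook_size)])
        # LBG refinement: assign each vector to its nearest codeword, recompute
        for _ in range(iters):
            d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            nearest = d.argmin(axis=1)
            for g in range(codebook_size):
                if np.any(nearest == g):
                    codebook[g] = X[nearest == g].mean(axis=0)
        return codebook

    # codebook = pca_lbg_centroid(np.random.rand(1000, 16), codebook_size=64)
    ```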

  14. Optimizing Controlling-Value-Based Power Gating with Gate Count and Switching Activity

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kimura, Shinji

    In this paper, a new heuristic algorithm is proposed to optimize power-domain clustering in controlling-value-based (CV-based) power gating. The algorithm considers both the switching activity of the sleep signals (p) and the overall number of sleep gates (gate count, N), and optimizes the sum of the products of p and N, thereby effectively exploiting the total power reduction available from CV-based power gating. Even when the maximum depth is kept the same, the proposed algorithm still achieves approximately 10% more power reduction than prior algorithms. Furthermore, a detailed comparison between the proposed heuristic and other possible heuristics is presented. HSPICE simulation results show that a total power reduction of over 26% can be obtained with the new heuristic algorithm. In addition, the dynamic power reduction achieved by the CV-based power gating method and the delay overhead caused by the switching of sleep transistors are also examined.

  15. Hybrid employment recommendation algorithm based on Spark

    NASA Astrophysics Data System (ADS)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at real-time application of the collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of the content-based recommendation algorithm (CB), a content-based algorithm with users' information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) combining CCF and CBUI is proposed and implemented on the Spark platform. The experimental results show that HRA overcomes the problems of cold start and data sparsity while achieving good recommendation accuracy and scalability for employment recommendation.
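
    The hybrid step can be illustrated by a toy score blend; the linear weighting and the cold-start fallback below are assumptions standing in for the paper's Spark implementation:

    ```python
    # Blend a collaborative-filtering score with a content-based score
    # (illustrative sketch; scoring stubs assumed, not the paper's code).
    from typing import Dict

    def hybrid_scores(cf: Dict[str, float], cb: Dict[str, float], alpha=0.6):
        """alpha weights CF; (1 - alpha) weights content-based. Jobs with no
        CF score (cold start) fall back to the content-based score alone."""
        jobs = set(cf) | set(cb)
        return {j: alpha * cf[j] + (1 - alpha) * cb.get(j, 0.0) if j in cf
                   else cb[j]
                for j in jobs}

    # ranked = sorted(hybrid_scores({"job1": 0.9},
    #                               {"job1": 0.4, "job2": 0.7}).items(),
    #                 key=lambda kv: kv[1], reverse=True)
    ```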

  16. About development of automation control systems

    NASA Astrophysics Data System (ADS)

    Myshlyaev, L. P.; Wenger, K. G.; Ivushkin, K. A.; Makarov, V. N.

    2018-05-01

    Shortcomings of current approaches to the development of control automation systems are reviewed, along with ways to improve them: correct formation of the objects of study and optimization; joint synthesis of control objects and control systems; and increased structural diversity of control-system elements. Diagrams of control systems whose elements have purposefully variable structure are presented, together with the structures of control algorithms for an object with purposefully variable structure.

  17. Quantum Approaches to Logic Circuit Synthesis and Testing

    DTIC Science & Technology

    2006-06-01

    [Extraction residue; only fragments are recoverable: table captions comparing simulation of Grover's algorithm with n qubits using Octave (Oct), MATLAB (MAT), Blitz++ (B++) and QuIDDPro (QP) with Oracle Designs 1 and 2, a table of the number of Grover iterations, and a note on accurately characterizing the effects of gate and systematic error in a quantum circuit that generates remotely entangled EPR pairs.]

  18. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, the synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller subsystems, a set of distributed MPC controllers, instead of a centralised controller, is designed. To ensure robust stability of the closed-loop system with respect to model uncertainties, distributed state-feedback laws are obtained by solving a min-max optimisation problem. The design of the robust distributed MPC is then transformed into a minimisation problem with linear matrix inequality constraints. An iterative online algorithm with an adjustable maximum number of iterations is proposed to coordinate the distributed controllers towards a global performance objective. Simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
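
    The LMI machinery underlying such designs can be illustrated with the simplest robust-stability certificate, a common Lyapunov matrix over the uncertainty vertices, solved here with cvxpy; the vertex matrices are hypothetical and the program is not the paper's min-max MPC formulation:

    ```python
    # Find P > 0 with A_i^T P + P A_i < 0 for every vertex A_i of the
    # uncertainty set (generic LMI feasibility sketch).
    import cvxpy as cp
    import numpy as np

    A_vertices = [np.array([[0.0, 1.0], [-2.0, -1.0]]),   # hypothetical
                  np.array([[0.0, 1.0], [-3.0, -1.5]])]   # uncertainty vertices

    n = 2
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n)]
    constraints += [A.T @ P + P @ A << -eps * np.eye(n) for A in A_vertices]

    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    print(prob.status, P.value)  # "optimal" => a common Lyapunov matrix exists
    ```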

  19. An integrated analysis-synthesis array system for spatial sound fields.

    PubMed

    Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao

    2015-03-01

    An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Directions of arrival of the plane-wave components that make up the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction, which suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
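
    The truncated-SVD deconvolution used in the analysis stage has a generic form worth spelling out: solve A s = p for the source signals while discarding small singular values to regularise the inversion; the matrix shapes and truncation rule below are illustrative:

    ```python
    # Truncated-SVD solve of A s = p (illustrative sketch).
    import numpy as np

    def tsvd_solve(A, p, rel_tol=1e-2):
        """Least-squares solution keeping only singular values above
        rel_tol * sigma_max."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rel_tol * s[0]
        s_inv = np.zeros_like(s)
        s_inv[keep] = 1.0 / s[keep]    # zero out the truncated modes
        return Vt.T @ (s_inv * (U.T @ p))

    # Example with a steering-matrix-like system:
    # A = np.random.rand(8, 4); p = A @ np.array([1.0, 0.5, 0.0, -0.3])
    # s_hat = tsvd_solve(A, p)
    ```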

  20. The TosMIC approach to 3-(oxazol-5-yl) indoles: application to the synthesis of indole-based IMPDH inhibitors.

    PubMed

    Dhar, T G Murali; Shen, Zhongqi; Fleener, Catherine A; Rouleau, Katherine A; Barrish, Joel C; Hollenbaugh, Diane L; Iwanowicz, Edwin J

    2002-11-18

    A modified approach to the synthesis of 3-(oxazol-5-yl) indoles is reported. The method was applied to the synthesis of a series of novel indole-based inhibitors of inosine monophosphate dehydrogenase (IMPDH). The synthesis and the structure-activity relationships (SARs) derived from in vitro studies for this new series of inhibitors are given.
