Energy minimization on manifolds for docking flexible molecules
Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima
2015-01-01
In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722
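As a rough illustration of the search space this abstract describes (a rigid-body pose combined with internal torsions), the sketch below minimizes a placeholder interaction energy over a rotation vector, a translation, and a set of torsion angles with SciPy. The energy function, the identity stand-in for torsion rotation, and all names are hypothetical; the paper's actual method optimizes directly on the manifold rather than over this flat parameterization.

# Hedged sketch: minimize a toy interaction energy over a ligand pose
# (rotation + translation) and internal torsion angles. The real method in
# the paper works directly on the product manifold of rigid motions and
# torsions; here a plain Euclidean parameterization with scipy.optimize is
# used only for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def toy_energy(receptor_xyz, ligand_xyz):
    """Placeholder pairwise 12-6 style energy (not a real force field)."""
    d = np.linalg.norm(receptor_xyz[:, None, :] - ligand_xyz[None, :, :], axis=-1)
    return np.sum(1.0 / d**12 - 1.0 / d**6)

def apply_pose(ligand_xyz, rotvec, trans):
    """Rigidly rotate the ligand about its centroid, then translate it."""
    center = ligand_xyz.mean(axis=0)
    R = Rotation.from_rotvec(rotvec).as_matrix()
    return (ligand_xyz - center) @ R.T + center + trans

def objective(x, receptor_xyz, ligand_xyz, rotate_torsion):
    rotvec, trans, torsions = x[:3], x[3:6], x[6:]
    conf = rotate_torsion(ligand_xyz, torsions)   # apply internal rotations
    return toy_energy(receptor_xyz, apply_pose(conf, rotvec, trans))

# Hypothetical data; rotate_torsion would rotate atoms downstream of each
# rotatable bond, but an identity stand-in is used here for brevity.
receptor = np.random.default_rng(0).normal(size=(50, 3)) * 5.0
ligand = np.random.default_rng(1).normal(size=(10, 3))
identity_torsion = lambda xyz, angles: xyz
x0 = np.zeros(6 + 1)                              # one rotatable bond
res = minimize(objective, x0, args=(receptor, ligand, identity_torsion),
               method="L-BFGS-B")
print(res.fun)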
Knaup, Petra; Schöpe, Lothar
2012-01-01
The authors see the major potential of systematically processing data from AAL technology in higher sustainability, higher technology acceptance, higher security, higher robustness, higher flexibility and better integration into existing structures and processes. This potential is currently underexploited and not yet systematically promoted. The authors have written a position paper on the potential and necessity of substantial IT research to enhance Ambient Assisted Living (AAL) applications. This paper summarizes the most important challenges in the fields of health care, data protection, operation and user interfaces. Research in medical informatics is necessary, among others, in the fields of flexible authorization concepts, medical information needs, algorithms to evaluate user profiles and visualization of aggregated data.
Active control of flexible structures using a fuzzy logic algorithm
NASA Astrophysics Data System (ADS)
Cohen, Kelly; Weller, Tanchum; Ben-Asher, Joseph Z.
2002-08-01
This study deals with the development and application of an active control law for the vibration suppression of beam-like flexible structures experiencing transient disturbances. Collocated pairs of sensors/actuators provide active control of the structure. A design methodology for the closed-loop control algorithm based on fuzzy logic is proposed. First, the behavior of the open-loop system is observed. Then, the number and locations of collocated actuator/sensor pairs are selected. The proposed control law, which is based on the principles of passivity, commands the actuator to emulate the behavior of a dynamic vibration absorber. The absorber is tuned to a targeted frequency, whereas the damping coefficient of the dashpot is varied in a closed loop using a fuzzy logic based algorithm. This approach not only ensures inherent stability associated with passive absorbers, but also circumvents the phenomenon of modal spillover. The developed controller is applied to the AFWAL/FIB 10 bar truss. Simulated results using MATLAB© show that the closed-loop system exhibits fairly quick settling times and desirable performance, as well as robustness characteristics. To demonstrate the robustness of the control system to changes in the temporal dynamics of the flexible structure, the transient response to a considerably perturbed plant is simulated. The modal frequencies of the 10 bar truss were raised as well as lowered substantially, thereby significantly perturbing the natural frequencies of vibration. For these cases, too, the developed control law provides adequate settling times and rates of vibrational energy dissipation.
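As a loose illustration of the fuzzy-tuned absorber idea described above, the sketch below maps a measured velocity magnitude to a dashpot damping coefficient using triangular membership functions and weighted-average defuzzification. The rule base, membership ranges, and output levels are invented for illustration and are not the paper's.

# Hedged sketch of a fuzzy gain schedule for the absorber's dashpot:
# triangular memberships over |velocity| and a weighted-average
# defuzzification to pick a damping coefficient. Ranges and levels are
# illustrative only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_damping(abs_velocity, c_levels=(50.0, 200.0, 600.0)):
    """Map |tip velocity| (m/s) to a dashpot coefficient (N*s/m)."""
    # Memberships for SMALL, MEDIUM, LARGE velocity (hypothetical ranges).
    mu = np.array([
        tri(abs_velocity, -0.01, 0.0, 0.05),   # SMALL
        tri(abs_velocity, 0.0, 0.05, 0.15),    # MEDIUM
        tri(abs_velocity, 0.05, 0.15, 0.30),   # LARGE
    ])
    if mu.sum() == 0.0:                         # saturate above the last peak
        return c_levels[-1]
    return float(np.dot(mu, c_levels) / mu.sum())

for v in (0.01, 0.08, 0.2):
    print(v, fuzzy_damping(v))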
Scanning of wind turbine upwind conditions: numerical algorithm and first applications
NASA Astrophysics Data System (ADS)
Calaf, Marc; Cortina, Gerard; Sharma, Varun; Parlange, Marc B.
2014-11-01
Wind turbines still obtain in-situ meteorological information by means of traditional wind vane and cup anemometers installed at the turbine's nacelle, right behind the blades. This has two important drawbacks: (1) turbine misalignment with the mean wind direction is common and causes energy losses; (2) near-blade monitoring does not provide any lead time to readjust the wind turbine to incoming turbulent gusts. A solution is to install wind Lidar devices on the turbine's nacelle. This technique is currently under development as an alternative to traditional in-situ wind anemometry because it can measure the wind vector at substantial distances upwind. However, at what upwind distance should these devices interrogate the atmosphere? A new flexible wind turbine algorithm for large eddy simulations of wind farms that allows answering this question will be presented. The new wind turbine algorithm corrects the turbines' yaw misalignment with the changing wind in a timely manner. The upwind scanning flexibility of the algorithm also allows tracking of the wind vector and turbulent kinetic energy as they approach the wind turbine's rotor blades. Results will illustrate the spatiotemporal evolution of the wind vector and the turbulent kinetic energy as the incoming flow approaches the wind turbine under different atmospheric stability conditions. Results will also show that the available atmospheric wind power is larger during daytime periods, at the cost of an increased variance.
Recursive dynamics for flexible multibody systems using spatial operators
NASA Technical Reports Server (NTRS)
Jain, A.; Rodriguez, G.
1990-01-01
Due to their structural flexibility, spacecraft and space manipulators are multibody systems with complex dynamics and possess a large number of degrees of freedom. Here the spatial operator algebra methodology is used to develop a new dynamics formulation and spatially recursive algorithms for such flexible multibody systems. A key feature of the formulation is that the operator description of the flexible system dynamics is identical in form to the corresponding operator description of the dynamics of rigid multibody systems. A significant advantage of this unifying approach is that it allows ideas and techniques for rigid multibody systems to be easily applied to flexible multibody systems. The algorithms use standard finite-element and assumed modes models for the individual body deformation. A Newton-Euler Operator Factorization of the mass matrix of the multibody system is first developed. It forms the basis for recursive algorithms such as for the inverse dynamics, the computation of the mass matrix, and the composite body forward dynamics for the system. Subsequently, an alternative Innovations Operator Factorization of the mass matrix, each of whose factors is invertible, is developed. It leads to an operator expression for the inverse of the mass matrix, and forms the basis for the recursive articulated body forward dynamics algorithm for the flexible multibody system. For simplicity, most of the development here focuses on serial chain multibody systems. However, extensions of the algorithms to general topology flexible multibody systems are described. While the computational cost of the algorithms depends on factors such as the topology and the amount of flexibility in the multibody system, in general, it appears that in contrast to the rigid multibody case, the articulated body forward dynamics algorithm is the more efficient algorithm for flexible multibody systems containing even a small number of flexible bodies. The variety of algorithms described here permits a user to choose the algorithm which is optimal for the multibody system at hand. The availability of a number of algorithms is even more important for real-time applications, where implementation on parallel processors or custom computing hardware is often necessary to maximize speed.
Adaptive Control Strategies for Flexible Robotic Arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1996-01-01
The control problem of a flexible robotic arm has been investigated. The control strategies that have been developed have a wide application in approaching the general control problem of flexible space structures. The following control strategies have been developed and evaluated: neural self-tuning control algorithm, neural-network-based fuzzy logic control algorithm, and adaptive pole assignment algorithm. All of the above algorithms have been tested through computer simulation. In addition, the hardware implementation of a computer control system that controls the tip position of a flexible arm clamped on a rigid hub mounted directly on the vertical shaft of a dc motor, has been developed. An adaptive pole assignment algorithm has been applied to suppress vibrations of the described physical model of flexible robotic arm and has been successfully tested using this testbed.
Contextual classification on a CDC Flexible Processor system. [for photomapped remote sensing data]
NASA Technical Reports Server (NTRS)
Smith, B. W.; Siegel, H. J.; Swain, P. H.
1981-01-01
A potential hardware organization for the Flexible Processor Array is presented. An algorithm that implements a contextual classifier for remote sensing data analysis is given, along with uniprocessor classification algorithms. The Flexible Processor algorithm is provided, as are simulated timings for contextual classifiers run on the Flexible Processor Array and another system. The timings are analyzed for context neighborhoods of sizes three and nine.
Brian hears: online auditory processing using vectorization over channels.
Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
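The vectorization idea in this abstract can be illustrated with a toy filterbank: all frequency channels are advanced in a single array operation per sample, so the interpreted loop runs over time only. The one-pole lowpass filters below are a stand-in for the cochlear filters actually implemented in Brian Hears.

# Hedged sketch of vectorization over channels: a bank of one-pole lowpass
# filters, one per frequency channel, advanced sample-by-sample with a single
# vectorized update across all channels.
import numpy as np

fs = 44100.0
cutoffs = np.logspace(np.log10(20.0), np.log10(20000.0), 3000)   # ~3000 channels
alpha = np.exp(-2.0 * np.pi * cutoffs / fs)                      # per-channel pole

rng = np.random.default_rng(0)
sound = rng.standard_normal(4410)                                 # 0.1 s of noise

state = np.zeros_like(cutoffs)
output = np.empty((sound.size, cutoffs.size))
for n, x in enumerate(sound):                   # loop over time only ...
    state = alpha * state + (1.0 - alpha) * x   # ... all channels updated at once
    output[n] = state

print(output.shape)                             # (samples, channels)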
NASA Astrophysics Data System (ADS)
Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin
2017-08-01
Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2 series: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of FY-4 science algorithms, two science product algorithm testbeds for imagers and sounders have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully by using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully by using a proxy imager, Himawari-8/Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.
A Comparative Study of Optimization Algorithms for Engineering Synthesis.
1983-03-01
The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first...
Path connectivity based spectral defragmentation in flexible bandwidth networks.
Wang, Ying; Zhang, Jie; Zhao, Yongli; Zhang, Jiawei; Zhao, Jie; Wang, Xinbo; Gu, Wanyi
2013-01-28
Optical networks with flexible bandwidth provisioning have become a very promising networking architecture. Such networks enable efficient resource utilization and support heterogeneous bandwidth demands. In this paper, two novel spectrum defragmentation approaches, i.e. the Maximum Path Connectivity (MPC) algorithm and the Path Connectivity Triggering (PCT) algorithm, are proposed based on the notion of Path Connectivity, which is defined to represent the maximum variation of node switching ability along the path in flexible bandwidth networks. A cost-performance-ratio based profitability model is given to denote the pros and cons of spectrum defragmentation. We compare these two proposed algorithms with a non-defragmentation algorithm in terms of blocking probability. Then we analyze the differences of defragmentation profitability between the MPC and PCT algorithms.
A comparison of force control algorithms for robots in contact with flexible environments
NASA Technical Reports Server (NTRS)
Wilfinger, Lee S.
1992-01-01
In order to perform useful tasks, the robot end-effector must come into contact with its environment. For such tasks, force feedback is frequently used to control the interaction forces. Control of these forces is complicated by the fact that the flexibility of the environment affects the stability of the force control algorithm. Because of the wide variety of different materials present in everyday environments, it is necessary to gain an understanding of how environmental flexibility affects the stability of force control algorithms. This report presents the theory and experimental results of two force control algorithms: Position Accommodation Control and Direct Force Servoing. The implementation of each of these algorithms on a two-arm robotic test bed located in the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) is discussed in detail. The behavior of each algorithm when contacting materials of different flexibility is experimentally determined. In addition, several robustness improvements to the Direct Force Servoing algorithm are suggested and experimentally verified. Finally, a qualitative comparison of the force control algorithms is provided, along with a description of a general tuning process for each control method.
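As a loose illustration of the interaction studied in this report, the sketch below closes a simple force loop (a PI law on force error producing a position command) against a linear-spring environment; the stability of such a loop depends strongly on the environment stiffness. Gains, the stiffness value, and the control structure are illustrative and are not taken from the report.

# Hedged sketch of a force servo: PI control on force error produces a
# position command against a linear-spring environment model. The report's
# point is that stability margins depend strongly on the environment's
# flexibility (k_env); the numbers below are toys.
import numpy as np

def simulate_force_servo(f_des=5.0, k_env=2000.0, kp=2e-4, ki=4e-3,
                         dt=0.002, steps=2000):
    x_cmd, integ, history = 0.0, 0.0, []
    for _ in range(steps):
        f_meas = k_env * max(x_cmd, 0.0)     # contact force from spring model
        err = f_des - f_meas
        integ += err * dt
        x_cmd = kp * err + ki * integ        # position command from PI law
        history.append(f_meas)
    return np.array(history)

forces = simulate_force_servo()
print(forces[-1])   # settles near the 5 N setpoint for this stiffness and gains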
Fast and flexible 3D object recognition solutions for machine vision applications
NASA Astrophysics Data System (ADS)
Effenberger, Ira; Kühnle, Jens; Verl, Alexander
2013-03-01
In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
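One ingredient of the 3D best-fitting step described above can be illustrated with a total-least-squares plane fit via SVD; the actual recognizer also fits cylinders and cones and embeds such fits in a full detection and localization pipeline. The point cloud and names below are illustrative.

# Hedged sketch of one primitive-fitting ingredient: a total-least-squares
# plane fit to a 3D point cloud via SVD.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane through points (N,3)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                      # direction of smallest variance
    return centroid, normal

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 0.01 * rng.standard_normal(500)
cloud = np.column_stack([xy, z])
c, n = fit_plane(cloud)
print(n / n[2])                          # approximately (-0.2, 0.1, 1)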
Ogorzalek, Tadeusz L; Hura, Greg L; Belsom, Adam; Burnett, Kathryn H; Kryshtafovych, Andriy; Tainer, John A; Rappsilber, Juri; Tsutakawa, Susan E; Fidelis, Krzysztof
2018-03-01
Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. © 2018 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek
2009-01-01
This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.
Recursive flexible multibody system dynamics using spatial operators
NASA Technical Reports Server (NTRS)
Jain, A.; Rodriguez, G.
1992-01-01
This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.
Brian Hears: Online Auditory Processing Using Vectorization Over Channels
Fontaine, Bertrand; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations. PMID:21811453
Exploratory High-Fidelity Aerostructural Optimization Using an Efficient Monolithic Solution Method
NASA Astrophysics Data System (ADS)
Zhang, Jenmy Zimi
This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An optimization methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural optimization methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape optimization. This integrated approach provides the optimizer with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural optimization, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to translate consistently the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for optimization are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural optimization involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms is crucial to the success of an exploratory optimization. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov method for the aerostructural analysis and a preconditioned Krylov subspace method for the coupled adjoint solution. Several aspects of the monolithic solution method have been investigated. These include appropriate strategies for scaling and matrix-vector product evaluation, as well as block preconditioning techniques that preserve the modularity between subproblems. The monolithic solution method is applied to problems with varying degrees of fluid-structural coupling, as well as a wing span optimization study. The monolithic solution algorithm typically requires 20%-70% less computing time than its partitioned counterpart. This advantage increases with increasing wing flexibility. The performance of the monolithic solution method is also much less sensitive to the choice of the solution parameter.
NASA Astrophysics Data System (ADS)
Hanson, Jeffrey A.; McLaughlin, Keith L.; Sereno, Thomas J.
2011-06-01
We have developed a flexible, target-driven, multi-modal, physics-based fusion architecture that efficiently searches sensor detections for targets and rejects clutter while controlling the combinatoric problems that commonly arise in data-driven fusion systems. The informational constraints imposed by long lifetime requirements make systems vulnerable to false alarms. We demonstrate that our data fusion system significantly reduces false alarms while maintaining high sensitivity to threats. In addition, mission goals can vary substantially in terms of targets-of-interest, required characterization, acceptable latency, and false alarm rates. Our fusion architecture provides the flexibility to match these trade-offs with mission requirements, unlike many conventional systems that require significant modifications for each new mission. We illustrate our data fusion performance with case studies that span many of the potential mission scenarios including border surveillance, base security, and infrastructure protection. In these studies, we deployed multi-modal sensor nodes - including geophones, magnetometers, accelerometers and PIR sensors - with low-power processing algorithms and low-bandwidth wireless mesh networking to create networks capable of multi-year operation. The results show our data fusion architecture maintains high sensitivities while suppressing most false alarms for a variety of environments and targets.
Alpert, Abby; Morganti, Kristy G; Margolis, Gregg S; Wasserman, Jeffrey; Kellermann, Arthur L
2013-12-01
Some Medicare beneficiaries who place 911 calls to request an ambulance might safely be cared for in settings other than the emergency department (ED) at lower cost. Using 2005-09 Medicare claims data and a validated algorithm, we estimated that 12.9-16.2 percent of Medicare-covered 911 emergency medical services (EMS) transports involved conditions that were probably nonemergent or primary care treatable. Among beneficiaries not admitted to the hospital, about 34.5 percent had a low-acuity diagnosis that might have been managed outside the ED. Annual Medicare EMS and ED payments for these patients were approximately $1 billion per year. If Medicare had the flexibility to reimburse EMS for managing selected 911 calls in ways other than transport to an ED, we estimate that the federal government could save $283-$560 million or more per year, while improving the continuity of patient care. If private insurance companies followed suit, overall societal savings could be twice as large.
A decoupled recursive approach for constrained flexible multibody system dynamics
NASA Technical Reports Server (NTRS)
Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung
1989-01-01
A variational-vector calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned for all body and joint reference frames to explicitly formulate deformation kinematics under the small-deformation assumption, and an efficient recursive flexible-dynamics algorithm is developed. Dynamic analysis of a closed-loop robot is performed to illustrate the efficiency of the algorithm.
A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times
NASA Astrophysics Data System (ADS)
Li, Xin; Fung, Richard Y. K.
2018-02-01
This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers should be within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in a series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.
Flexible multiply towpreg and method of production therefor
NASA Technical Reports Server (NTRS)
Muzzy, John D. (Inventor); Varughese, Babu (Inventor)
1992-01-01
This invention relates to an improved flexible towpreg and a method of production therefor. The improved flexible towpreg comprises a plurality of towpreg plies which comprise reinforcing filaments and matrix forming material; the reinforcing filaments being substantially wetout by the matrix forming material such that the towpreg plies are substantially void-free composite articles, and the towpreg plies having an average thickness less than about 100 microns. The method of production for the improved flexible towpreg comprises the steps of spreading the reinforcing filaments to expose individually substantially all of the reinforcing filaments; coating the reinforcing filaments with the matrix forming material in a manner causing interfacial adhesion of the matrix forming material to the reinforcing filaments; forming the towpreg plies by heating the matrix forming material contacting the reinforcing filaments until the matrix forming material liquefies and coats the reinforcing filaments; and cooling the towpreg plies in a manner such that substantial cohesion between neighboring towpreg plies is prevented until the matrix forming material solidifies.
FPGA implementation of sparse matrix algorithm for information retrieval
NASA Astrophysics Data System (ADS)
Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio
2005-06-01
Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing may adopt frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned for the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirements and query-processing time than the other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves a substantial efficiency gain over the sequential inverted index. The parallel implementations of the information retrieval kernel are presented in this work targeting the Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
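The query-processing kernel described in this abstract is a sparse matrix-vector product over a CSR-stored term-document matrix. The NumPy sketch below shows that kernel in software form; the paper's contribution is a parallel FPGA implementation of the same loop, which this does not attempt to reproduce.

# Hedged sketch of the query-processing kernel: a Compressed Sparse Row (CSR)
# matrix-vector product, where rows index documents, columns index terms, and
# the vector encodes the query.
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in CSR arrays (data, indices, indptr)."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# Toy term-document matrix (3 documents x 4 terms) in CSR form.
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 3, 0])
indptr  = np.array([0, 2, 4, 5])
query   = np.array([1.0, 0.0, 1.0, 1.0])         # terms present in the query
print(csr_matvec(data, indices, indptr, query))  # document scores: [3. 4. 5.]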
Efficient hiding of confidential high-utility itemsets with minimal side effects
NASA Astrophysics Data System (ADS)
Lin, Jerry Chun-Wei; Hong, Tzung-Pei; Fournier-Viger, Philippe; Liu, Qiankun; Wong, Jia-Wei; Zhan, Justin
2017-11-01
Privacy preserving data mining (PPDM) is an emerging research problem that has become critical in the last decades. PPDM consists of hiding sensitive information to ensure that it cannot be discovered by data mining algorithms. Several PPDM algorithms have been developed. Most of them are designed for hiding sensitive frequent itemsets or association rules. Hiding sensitive information in a database can have several side effects such as hiding other non-sensitive information and introducing redundant information. Finding the set of itemsets or transactions to be sanitised that minimises side effects is an NP-hard problem. In this paper, a genetic algorithm (GA) using transaction deletion is designed to hide sensitive high-utility itemsets for privacy preserving utility mining (PPUM). A flexible fitness function with three adjustable weights is used to evaluate the goodness of each chromosome for hiding sensitive high-utility itemsets. To speed up the evolution process, the pre-large concept is adopted in the designed algorithm. It reduces the number of database scans required for verifying the goodness of an evaluated chromosome. Substantial experiments are conducted to compare the performance of the designed GA approach (with/without the pre-large concept), with a GA-based approach relying on transaction insertion and a non-evolutionary algorithm, in terms of execution time, side effects, database integrity and utility integrity. Results demonstrate that the proposed algorithm hides sensitive high-utility itemsets with fewer side effects than previous studies, while preserving high database and utility integrity.
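The three-weight fitness idea can be sketched as follows: a candidate set of transactions to delete is scored by the sensitive itemsets it fails to hide, the non-sensitive itemsets it loses, and the artificial itemsets it creates. The term definitions, weights, and the toy miner are illustrative and do not reproduce the paper's exact formulation or the pre-large speedup.

# Hedged sketch of a three-weight fitness for a chromosome listing the
# transactions to delete: it penalizes hiding failures, missing non-sensitive
# itemsets, and artificial itemsets. Weights and definitions are illustrative.
def fitness(chromosome, database, sensitive, nonsensitive, mine_high_utility,
            w1=0.5, w2=0.3, w3=0.2):
    """chromosome: indices of transactions to delete from `database`."""
    deleted = set(chromosome)
    sanitized = [t for i, t in enumerate(database) if i not in deleted]
    mined = mine_high_utility(sanitized)           # itemsets still above threshold
    hiding_failures  = len(sensitive & mined)      # sensitive itemsets not hidden
    missing_itemsets = len(nonsensitive - mined)   # legitimate itemsets lost
    artificial       = len(mined - sensitive - nonsensitive)  # spurious itemsets
    return w1 * hiding_failures + w2 * missing_itemsets + w3 * artificial

# Tiny demonstration with a toy "miner" that keeps items appearing in >= 2
# transactions; lower fitness is better, so the GA favors deletions that hide
# the sensitive items with the fewest side effects.
db = [{"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
toy_miner = lambda txns: {i for i in "abc" if sum(i in t for t in txns) >= 2}
print(fitness([3], db, sensitive={"a"}, nonsensitive={"b", "c"},
              mine_high_utility=toy_miner))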
Algorithm For Solution Of Subset-Regression Problems
NASA Technical Reports Server (NTRS)
Verhaegen, Michel
1991-01-01
Reliable and flexible algorithm for solution of subset-regression problem performs QR decomposition with new column-pivoting strategy, enables selection of subset directly from originally defined regression parameters. This feature, in combination with number of extensions, makes algorithm very flexible for use in analysis of subset-regression problems in which parameters have physical meanings. Also extended to enable joint processing of columns contaminated by noise with those free of noise, without using scaling techniques.
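A minimal sketch of the underlying idea, using SciPy's pivoted QR rather than the article's own column-pivoting strategy: the pivot order ranks the original regression parameters, and a subset of them is then fit directly.

# Hedged sketch: select a subset of regression parameters with a pivoted QR
# decomposition, then solve the reduced least-squares problem. SciPy's default
# pivoting rule stands in for the article's strategy.
import numpy as np
from scipy.linalg import qr, lstsq

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 6))
A[:, 3] = A[:, 0] + 1e-8 * rng.standard_normal(100)   # nearly redundant column
y = A @ np.array([1.0, 0.0, 2.0, 0.0, 0.0, -1.0]) + 0.01 * rng.standard_normal(100)

k = 3                                   # desired subset size
_, _, piv = qr(A, pivoting=True)        # pivot order ranks the columns
subset = np.sort(piv[:k])               # indices of the selected original parameters
coef, *_ = lstsq(A[:, subset], y)
print(subset, coef)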
Dereplication, Aggregation and Scoring Tool (DAS Tool) v1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
SIEBER, CHRISTIAN
Communities of uncultivated microbes are critical to ecosystem function and microorganism health, and a key objective of metagenomic studies is to analyze organism-specific metabolic pathways and reconstruct community interaction networks. This requires accurate assignment of genes to genomes, yet existing binning methods often fail to predict a reasonable number of genomes and report many bins of low quality and completeness. Furthermore, the performance of existing algorithms varies between samples and biotypes. Here, we present a dereplication, aggregation and scoring strategy, DAS Tool, that combines the strengths of a flexible set of established binning algorithms. DAS Tool applied to a constructed community generated more accurate bins than any automated method. Further, when applied to samples of different complexity, including soil, natural oil seeps, and the human gut, DAS Tool recovered substantially more near-complete genomes than any single binning method alone. Included were three genomes from a novel lineage. The ability to reconstruct many near-complete genomes from metagenomics data will greatly advance genome-centric analyses of ecosystems.
NASA Technical Reports Server (NTRS)
Nechyba, Michael C.; Ettinger, Scott M.; Ifju, Peter G.; Wazak, Martin
2002-01-01
Recently substantial progress has been made towards designing, building and test-flying remotely piloted Micro Air Vehicles (MAVs). This progress in overcoming the aerodynamic obstacles to flight at very small scales has, unfortunately, not been matched by similar progress in autonomous MAV flight. Thus, we propose a robust, vision-based horizon detection algorithm as the first step towards autonomous MAVs. In this paper, we first motivate the use of computer vision for the horizon detection task by examining the flight of birds (biological MAVs) and considering other practical factors. We then describe our vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification, over terrain that includes roads, buildings large and small, meadows, wooded areas, and a lake. We conclude with some sample horizon detection results and preview a companion paper, where the work discussed here forms the core of a complete autonomous flight stability system.
Flexible ligand docking using a genetic algorithm
NASA Astrophysics Data System (ADS)
Oshiro, C. M.; Kuntz, I. D.; Dixon, J. Scott
1995-04-01
Two computational techniques have been developed to explore the orientational and conformational space of a flexible ligand within an enzyme. Both methods use the Genetic Algorithm (GA) to generate conformationally flexible ligands in conjunction with algorithms from the DOCK suite of programs to characterize the receptor site. The methods are applied to three enzyme-ligand complexes: dihydrofolate reductase-methotrexate, thymidylate synthase-phenolphthalein and HIV protease-thioketal haloperidol. Conformations and orientations close to the crystallographically determined structures are obtained, as well as alternative structures with low energy. The potential for the GA method to screen a database of compounds is also examined. A collection of ligands is evaluated simultaneously, rather than docking the ligands individually into the enzyme.
Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; ...
2016-03-17
Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
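The ACE idea can be sketched generically: split the parameters into mutually exclusive blocks and alternately optimize each block conditional on the current value of the other until the objective stabilizes. The toy objective below stands in for the flexible survival-model criteria used in the article.

# Hedged sketch of the alternating-conditional-estimation (ACE) idea: two
# parameter blocks are optimized in turn, each conditional on the other,
# until the objective stops improving. The objective is a toy.
import numpy as np
from scipy.optimize import minimize

def toy_objective(alpha, beta):
    # Nonlinear coupling between blocks makes one-shot joint optimization awkward.
    return np.sum((alpha[:, None] * beta[None, :] - 1.0) ** 2) + 0.1 * np.sum(alpha**2)

alpha = np.ones(3)
beta = np.ones(4)
prev = np.inf
for it in range(50):
    alpha = minimize(lambda a: toy_objective(a, beta), alpha).x    # step 1: block A
    beta  = minimize(lambda b: toy_objective(alpha, b), beta).x    # step 2: block B
    cur = toy_objective(alpha, beta)
    if prev - cur < 1e-8:          # convergence of the alternating scheme
        break
    prev = cur
print(it, cur)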
Spatial operator approach to flexible multibody system dynamics and control
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1991-01-01
The inverse and forward dynamics problems for flexible multibody systems were solved using the techniques of spatially recursive Kalman filtering and smoothing. These algorithms are easily developed using a set of identities associated with mass matrix factorization and inversion. These identities are easily derived using the spatial operator algebra developed by the author. Current work is aimed at computational experiments with the described algorithms and at modelling for control design of limber manipulator systems. It is also aimed at handling and manipulation of flexible objects.
Dynamic Appliances Scheduling in Collaborative MicroGrids System
Bilil, Hasnae; Aniba, Ghassane; Gharavi, Hamid
2017-01-01
In this paper a new approach which is based on a collaborative system of MicroGrids (MG’s), is proposed to enable household appliance scheduling. To achieve this, appliances are categorized into flexible and non-flexible Deferrable Loads (DL’s), according to their electrical components. We propose a dynamic scheduling algorithm where users can systematically manage the operation of their electric appliances. The main challenge is to develop a flattening function calculus (reshaping) for both flexible and non-flexible DL’s. In addition, implementation of the proposed algorithm would require dynamically analyzing two successive multi-objective optimization (MOO) problems. The first targets the activation schedule of non-flexible DL’s and the second deals with the power profiles of flexible DL’s. The MOO problems are resolved by using a fast and elitist multi-objective genetic algorithm (NSGA-II). Finally, in order to show the efficiency of the proposed approach, a case study of a collaborative system that consists of 40 MG’s registered in the load curve for the flattening program has been developed. The results verify that the load curve can indeed become very flat by applying the proposed scheduling approach. PMID:28824226
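As a much-simplified illustration of the flattening objective, the sketch below places a single non-flexible deferrable load at the start slot that minimizes the variance of the aggregate load curve. The paper itself solves two successive multi-objective problems with NSGA-II; this greedy one-appliance step is only meant to show what "flattening" optimizes.

# Hedged sketch of the flattening idea for one non-flexible deferrable load:
# slide its fixed power profile across candidate start slots and keep the
# start that minimizes the variance of the aggregate load curve.
import numpy as np

def best_start(base_load, appliance_profile):
    """Return the start slot that flattens base_load + shifted profile most."""
    horizon, dur = len(base_load), len(appliance_profile)
    best, best_var = 0, np.inf
    for start in range(horizon - dur + 1):
        total = base_load.copy()
        total[start:start + dur] += appliance_profile
        if total.var() < best_var:
            best, best_var = start, total.var()
    return best

base = np.array([3.0, 3.2, 2.5, 1.0, 0.8, 0.9, 2.8, 3.5])   # kW per slot (toy)
washer = np.array([1.5, 1.5])                                # 2-slot cycle (toy)
print(best_start(base, washer))   # lands in the low-demand slots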
NASA Astrophysics Data System (ADS)
Zhao, Jijun; Zhang, Nawa; Ren, Danping; Hu, Jinhua
2017-12-01
The recently proposed flexible optical network can provide more efficient accommodation of multiple data rates than the current wavelength-routed optical networks. Meanwhile, the energy efficiency has also been a hot topic because of the serious energy consumption problem. In this paper, the energy efficiency problem of flexible optical networks with physical-layer impairments constraint is studied. We propose a combined impairment-aware and energy-efficient routing and spectrum assignment (RSA) algorithm based on the link availability, in which the impact of power consumption minimization on signal quality is considered. By applying the proposed algorithm, the connection requests are established on a subset of network topology, reducing the number of transitions from sleep to active state. The simulation results demonstrate that our proposed algorithm can improve the energy efficiency and spectrum resources utilization with the acceptable blocking probability and average delay.
Launch flexibility using NLP guidance and remote wind sensing
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
This paper examines the use of lidar wind measurements in the implementation of a guidance strategy for a nonlinear programming (NLP) launch guidance algorithm. The NLP algorithm uses B-spline command function representation for flexibility in the design of the guidance steering commands. Using this algorithm, the guidance system solves a two-point boundary value problem at each guidance update. The specification of different boundary value problems at each guidance update provides flexibility that can be used in the design of the guidance strategy. The algorithm can use lidar wind measurements for on pad guidance retargeting and for load limiting guidance steering commands. Examples presented in the paper use simulated wind updates to correct wind induced final orbit errors and to adjust the guidance steering commands to limit the product of the dynamic pressure and angle-of-attack for launch vehicle load alleviation.
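A small sketch of the B-spline command representation mentioned above: the steering command history is a spline whose coefficients would serve as the NLP decision variables. The knots, degree, and coefficient values below are placeholders, not values from the paper.

# Hedged sketch of a B-spline command function: a steering command is a cubic
# B-spline whose coefficients would be the NLP decision variables.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
knots = np.concatenate(([0.0] * degree, np.linspace(0.0, 120.0, 8), [120.0] * degree))
coeffs = np.array([0.0, 2.0, 5.0, 9.0, 12.0, 10.0, 6.0, 3.0, 1.0, 0.0])  # placeholder values
command = BSpline(knots, coeffs, degree)

t = np.linspace(0.0, 120.0, 5)
print(command(t))         # steering command sampled along a 120 s interval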
Towards automated visual flexible endoscope navigation.
van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J
2013-10-01
The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomical steering mechanism now forms a barrier in the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
Transform methods for precision continuum and control models of flexible space structures
NASA Technical Reports Server (NTRS)
Lupi, Victor D.; Turner, James D.; Chun, Hon M.
1991-01-01
An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.
CT brush and CancerZap!: two video games for computed tomography dose minimization.
Alvare, Graham; Gordon, Richard
2015-05-12
X-ray dose from computed tomography (CT) scanners has become a significant public health concern. All CT scanners spray x-ray photons across a patient, including those using compressive sensing algorithms. New technologies make it possible to aim x-ray beams where they are most needed to form a diagnostic or screening image. We have designed a computer game, CT Brush, that takes advantage of this new flexibility. It uses a standard MART algorithm (Multiplicative Algebraic Reconstruction Technique), but with a user defined dynamically selected subset of the rays. The image appears as the player moves the CT brush over an initially blank scene, with dose accumulating with every "mouse down" move. The goal is to find the "tumor" with as few moves (least dose) as possible. We have successfully implemented CT Brush in Java and made it available publicly, requesting crowdsourced feedback on improving the open source code. With this experience, we also outline a "shoot 'em up game" CancerZap! for photon limited CT. We anticipate that human computing games like these, analyzed by methods similar to those used to understand eye tracking, will lead to new object dependent CT algorithms that will require significantly less dose than object independent nonlinear and compressive sensing algorithms that depend on sprayed photons. Preliminary results suggest substantial dose reduction is achievable.
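The MART update restricted to a user-selected subset of rays, which is the mechanism the CT Brush game exposes, can be sketched as follows. The system matrix, measurements, and relaxation factor are toy values; dose accounting and the game interface are not modeled.

# Hedged sketch of MART (Multiplicative Algebraic Reconstruction Technique)
# restricted to a user-chosen subset of rays, mirroring the "only dose where
# the brush paints" idea.
import numpy as np

def mart_update(x, A, b, ray_ids, relaxation=1.0, eps=1e-12):
    """One MART sweep over the selected rays; x and b are positive arrays."""
    for i in ray_ids:
        forward = A[i] @ x                      # projection along ray i
        if forward < eps:
            continue
        ratio = b[i] / forward
        x = x * ratio ** (relaxation * A[i])    # multiplicative, element-wise
    return x

rng = np.random.default_rng(0)
n_rays, n_pixels = 40, 25
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pixels))
x_true = rng.uniform(0.5, 2.0, size=n_pixels)
b = A @ x_true
x = np.ones(n_pixels)
sel = list(range(0, n_rays, 2))                 # only half the rays are "painted"
for sweep in range(50):
    x = mart_update(x, A, b, ray_ids=sel)
print(np.linalg.norm(A[sel] @ x - b[sel]) / np.linalg.norm(b[sel]))  # residual on selected rays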
Cloud computing for comparative genomics
2010-01-01
Background Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786
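The reciprocal step at the heart of the RSD algorithm can be sketched in a few lines: a gene pair is retained only if each member is the other's smallest-distance match. The toy distance below is a placeholder for RSD's alignment followed by maximum-likelihood estimation of evolutionary distance, and nothing about the EC2 deployment is modeled.

# Hedged, highly simplified sketch of the reciprocal step in the reciprocal
# smallest distance (RSD) algorithm: keep a pair only if each gene is the
# other's closest match under some distance.
def reciprocal_smallest_distance(genome_a, genome_b, distance):
    pairs = []
    for name_a, seq_a in genome_a.items():
        best_b = min(genome_b, key=lambda n: distance(seq_a, genome_b[n]))
        best_back = min(genome_a, key=lambda n: distance(genome_b[best_b], genome_a[n]))
        if best_back == name_a:                   # reciprocal condition
            pairs.append((name_a, best_b))
    return pairs

def toy_distance(s1, s2):
    """Placeholder: normalized mismatches plus a length penalty (not RSD's distance)."""
    n = min(len(s1), len(s2))
    return sum(a != b for a, b in zip(s1[:n], s2[:n])) / n + abs(len(s1) - len(s2))

genome_a = {"a1": "MKTLLVA", "a2": "MSSQQRT"}
genome_b = {"b1": "MKTLLVS", "b2": "MSSQKRT"}
print(reciprocal_smallest_distance(genome_a, genome_b, toy_distance))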
Cloud computing for comparative genomics.
Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J
2010-05-18
Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.
Learning multimodal dictionaries.
Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi
2007-09-01
Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrate at each instant perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can be also extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted at all positions in the signal is proposed, as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and it is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips allows to effectively localize the sound source on the video in presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
Fast Transformation of Temporal Plans for Efficient Execution
NASA Technical Reports Server (NTRS)
Tsamardinos, Ioannis; Muscettola, Nicola; Morris, Paul
1998-01-01
Temporal plans permit significant flexibility in specifying the occurrence time of events. Plan execution can make good use of that flexibility. However, the advantage of execution flexibility is counterbalanced by the cost during execution of propagating the time of occurrence of events throughout the flexible plan. To minimize execution latency, this propagation needs to be very efficient. Previous work showed that every temporal plan can be reformulated as a dispatchable plan, i.e., one for which propagation to immediate neighbors is sufficient. A simple algorithm was given that finds a dispatchable plan with a minimum number of edges in cubic time and quadratic space. In this paper, we focus on the efficiency of the reformulation process, and improve on that result. A new algorithm is presented that uses linear space and has time complexity equivalent to Johnson's algorithm for all-pairs shortest-path problems. Experimental evidence confirms the practical effectiveness of the new algorithm. For example, on a large commercial application, the performance is improved by at least two orders of magnitude. We further show that the dispatchable plan, already minimal in the total number of edges, can also be made minimal in the maximum number of edges incoming or outgoing at any node.
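A hedged sketch of the all-pairs shortest-path computation that underlies dispatchability reformulation of a Simple Temporal Network. The three events and interval constraints below are invented for illustration; the paper's reformulation algorithm itself is more involved, but its stated complexity matches Johnson's algorithm, shown here via SciPy.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import johnson

# Constraint l <= t_j - t_i <= u becomes edge i->j with weight u and j->i with weight -l.
edges = [
    (0, 1, 10.0), (1, 0, -5.0),   # t1 - t0 in [5, 10]
    (1, 2,  4.0), (2, 1, -2.0),   # t2 - t1 in [2, 4]
    (0, 2, 12.0), (2, 0, -1.0),   # t2 - t0 in [1, 12]
]
rows = [e[0] for e in edges]
cols = [e[1] for e in edges]
data = [e[2] for e in edges]
graph = csr_matrix((data, (rows, cols)), shape=(3, 3))

# Johnson's algorithm handles the negative edge weights of an STN distance graph.
dist = johnson(graph, directed=True)
print(dist)   # dist[i, j] is the tightest implied upper bound on t_j - t_i
```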
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed to process a job on the production floor. However, most classical scheduling problems ignore the possible constraint caused by the availability of workers and consider only machines as a limited resource. In addition, along with developments in production technology, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from machines that offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is an NP-hard problem that requires long computational times. A meta-heuristic approach based on a Genetic Algorithm is used because of its practical applicability in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used to minimize tardiness as the objective function. The algorithm achieved a 25.6% reduction in tardiness, equal to 43.5 hours.
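A minimal GA sketch in the spirit of the operators the abstract lists (selection, elitism, crossover, mutation, tardiness objective). The toy problem is a single-machine schedule with made-up processing times and due dates, not the authors' dual-resource constrained shop; the chromosome is simply a job permutation.

```python
import random

proc = [4, 3, 7, 2, 5, 6]          # hypothetical processing times
due  = [6, 5, 20, 9, 16, 18]       # hypothetical due dates

def tardiness(order):
    t, total = 0, 0
    for j in order:
        t += proc[j]
        total += max(0, t - due[j])
    return total

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(order, rate=0.2):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

random.seed(1)
pop = [random.sample(range(len(proc)), len(proc)) for _ in range(30)]
for _ in range(200):                                    # generations
    pop.sort(key=tardiness)
    elite = pop[:2]                                     # elitism
    children = []
    while len(children) < len(pop) - len(elite):
        p1, p2 = random.sample(pop[:10], 2)             # truncation-style selection
        children.append(mutate(order_crossover(p1, p2)))
    pop = elite + children

best = min(pop, key=tardiness)
print(best, tardiness(best))
```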
Vectorized algorithms for spiking neural network simulation.
Brette, Romain; Goodman, Dan F M
2011-06-01
High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
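A sketch of the vectorized state update at the heart of this approach: every neuron is advanced with array operations instead of a Python loop per neuron. Parameters and connectivity are made up, and this is not Brian's implementation, only an illustration of the vector-based style.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 1000, 1e-4                      # neurons, time step (s)
tau, v_rest, v_thresh, v_reset = 20e-3, 0.0, 1.0, 0.0
W = rng.random((N, N)) * (rng.random((N, N)) < 0.02) * 0.05   # sparse random weights (to, from)

v = rng.random(N)                       # membrane potentials
I_ext = 1.2                             # constant external drive

for step in range(1000):
    # Leaky integration for all neurons at once (vectorized over N).
    v += dt / tau * (v_rest - v + I_ext)
    spiked = v >= v_thresh              # boolean spike vector
    if spiked.any():
        v[spiked] = v_reset
        v += W[:, spiked].sum(axis=1)   # propagate spikes to postsynaptic targets
```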
Erickson, Jon A; Jalaie, Mehran; Robertson, Daniel H; Lewis, Richard A; Vieth, Michal
2004-01-01
The key to success for computational tools used in structure-based drug design is the ability to accurately place or "dock" a ligand in the binding pocket of the target of interest. In this report we examine the effect of several factors on docking accuracy, including ligand and protein flexibility. To examine ligand flexibility in an unbiased fashion, a test set of 41 ligand-protein cocomplex X-ray structures was assembled that represents a diversity of size, flexibility, and polarity with respect to the ligands. Four docking algorithms, DOCK, FlexX, GOLD, and CDOCKER, were applied to the test set, and the results were examined in terms of the ability to reproduce X-ray ligand positions within 2.0 Å heavy-atom root-mean-square deviation. Overall, each method performed well (>50% accuracy) but for all methods it was found that docking accuracy decreased substantially for ligands with eight or more rotatable bonds. Only CDOCKER was able to accurately dock most of those ligands with eight or more rotatable bonds (71% accuracy rate). A second test set of structures was gathered to examine how protein flexibility influences docking accuracy. CDOCKER was applied to X-ray structures of trypsin, thrombin, and HIV-1-protease, using protein structures bound to several ligands and also the unbound (apo) form. Docking experiments of each ligand to one "average" structure and to the apo form were carried out, and the results were compared to docking each ligand back to its originating structure. The results show that docking accuracy falls off dramatically if one uses an average or apo structure. In fact, it is shown that the drop in docking accuracy mirrors the degree to which the protein moves upon ligand binding.
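A sketch of the success criterion used throughout such studies: heavy-atom RMSD between a docked pose and the X-ray pose, counted as a success when it is at most 2.0 Å. The coordinates below are placeholders, atoms are assumed to be in matching order, and ligand symmetry is ignored for simplicity.

```python
import numpy as np

def heavy_atom_rmsd(pose, reference):
    diff = np.asarray(pose) - np.asarray(reference)
    return np.sqrt((diff ** 2).sum(axis=1).mean())

xray   = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.1, 0.3]])   # made-up coordinates
docked = np.array([[0.3, -0.2, 0.1], [1.7, 0.2, -0.1], [2.5, 1.3, 0.2]])

rmsd = heavy_atom_rmsd(docked, xray)
print(f"RMSD = {rmsd:.2f} A -> {'success' if rmsd <= 2.0 else 'failure'}")
```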
Highly porous ceramic oxide aerogels having improved flexibility
NASA Technical Reports Server (NTRS)
Guo, Haiquan (Inventor); Meador, Mary Ann B. (Inventor); Nguyen, Baochau N. (Inventor)
2012-01-01
Ceramic oxide aerogels having improved flexibility are disclosed. Preferred embodiments exhibit high modulus and other strength properties despite their improved flexibility. The gels may be polymer cross-linked via organic polymer chains to further improve strength properties, without substantially detracting from the improved flexibility. Methods of making such aerogels are also disclosed.
The GOES-R Product Generation Architecture - Post CDR Update
NASA Astrophysics Data System (ADS)
Dittberner, G.; Kalluri, S.; Weiner, A.
2012-12-01
The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution, and much shorter relook times. Considerably greater compute and memory resources are necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
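A toy illustration, not GOES-R code, of the division of roles described above: a Strategy decides when an algorithm has the inputs it needs, a Dispatcher hands the data over, and an Executive wraps the science algorithm as a service. All class names, the product algorithm, and the data store are hypothetical.

```python
class Strategy:
    def __init__(self, required):
        self.required = set(required)
    def ready(self, available):
        return self.required <= set(available)

class Dispatcher:
    def __init__(self, store):
        self.store = store                       # stands in for the Data Fabric
    def gather(self, names):
        return {n: self.store[n] for n in names}

class Executive:
    def __init__(self, algorithm, strategy, dispatcher):
        self.algorithm, self.strategy, self.dispatcher = algorithm, strategy, dispatcher
    def step(self):
        if self.strategy.ready(self.dispatcher.store):
            inputs = self.dispatcher.gather(self.strategy.required)
            return self.algorithm(**inputs)
        return None                              # wait for more data events

# Hypothetical product algorithm and data availability:
def cloud_mask(band_visible, band_infrared):
    return [v - i for v, i in zip(band_visible, band_infrared)]

store = {"band_visible": [0.8, 0.6], "band_infrared": [0.3, 0.2]}
svc = Executive(cloud_mask, Strategy(["band_visible", "band_infrared"]), Dispatcher(store))
print(svc.step())
```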
The GOES-R Product Generation Architecture
NASA Astrophysics Data System (ADS)
Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.
2011-12-01
The GOES-R system will substantially improve users' ability to succeed in their work by providing data with significantly enhanced instruments, higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation architecture is designed to provide the computer and memory resources necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
Vibration suppression in flexible structures via the sliding-mode control approach
NASA Technical Reports Server (NTRS)
Drakunov, S.; Oezguener, Uemit
1994-01-01
Sliding mode control has become very popular recently because it makes the closed-loop system highly insensitive to external disturbances and parameter variations. Sliding algorithms for flexible structures have been used previously, but these were based on finite-dimensional models. An extension of this approach to differential-difference systems is obtained. That makes it possible to apply sliding-mode control algorithms to the variety of nondispersive flexible structures which can be described as differential-difference systems. The main idea of using this technique for dispersive structures is to reduce the order of the controlled part of the system by applying an integral transformation. We can say that the transformation 'absorbs' the dispersive properties of the flexible structure as the controlled part becomes dispersive.
NASA Technical Reports Server (NTRS)
Jain, A.; Man, G. K.
1993-01-01
This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements from the spacecraft control system require the use of a high fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
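A compact sketch of the key idea as described above: approximate an expensive log-density with a surrogate built on random bases (random cosine features here), then use the surrogate's cheap gradient inside standard leapfrog steps. The target, bases, and step sizes are toy choices, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):                     # stand-in for an expensive posterior
    return -0.5 * np.sum(x ** 2, axis=-1)

# Fit surrogate log_target(x) ~ phi(x) @ w on a handful of design points.
d, m = 2, 50
Omega = rng.standard_normal((d, m))
b = rng.uniform(0, 2 * np.pi, m)
phi = lambda X: np.cos(np.atleast_2d(X) @ Omega + b)        # random cosine bases

X_design = rng.standard_normal((200, d))
w, *_ = np.linalg.lstsq(phi(X_design), log_target(X_design), rcond=None)

def grad_surrogate(x):                 # analytic gradient of the fitted surrogate
    return -(np.sin(x @ Omega + b) * w) @ Omega.T

# One HMC proposal using the surrogate gradient in the leapfrog integrator.
x, eps, L = np.zeros(d), 0.1, 20
p = rng.standard_normal(d)
x_new, p_new = x.copy(), p.copy()
for _ in range(L):
    p_new += 0.5 * eps * grad_surrogate(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_surrogate(x_new)
# The accept/reject step would use the exact log_target so the chain still targets it.
```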
NASA Technical Reports Server (NTRS)
Ettinger, Scott M.; Nechyba, Michael C.; Ifju, Peter G.; Wazak, Martin
2002-01-01
Substantial progress has been made recently towards designing, building, and test-flying remotely piloted Micro Air Vehicles (MAVs). We seek to complement this progress in overcoming the aerodynamic obstacles to flight at very small scales with a vision-based stability and autonomy system. The developed system is based on a robust horizon detection algorithm, which we discuss in greater detail in a companion paper. In this paper, we first motivate the use of computer vision for MAV autonomy, arguing that given current sensor technology, vision may be the only practical approach to the problem. We then briefly review our statistical vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification. Next we develop robust schemes for the detection of extreme MAV attitudes, where no horizon is visible, and for the detection of horizon estimation errors due to external factors such as video transmission noise. Finally, we discuss our feedback controller for self-stabilized flight, and report results on vision-based autonomous flights of duration exceeding ten minutes.
Systems-on-chip approach for real-time simulation of wheel-rail contact laws
NASA Astrophysics Data System (ADS)
Mei, T. X.; Zhou, Y. J.
2013-04-01
This paper presents the development of a systems-on-chip approach to speed up the simulation of wheel-rail contact laws, which can be used to reduce the requirement for high-performance computers and enable simulation in real time for the use of hardware-in-loop for experimental studies of the latest vehicle dynamic and control technologies. The wheel-rail contact laws are implemented using a field programmable gate array (FPGA) device with a design that substantially outperforms modern general-purpose PC platforms or fixed architecture digital signal processor devices in terms of processing time, configuration flexibility and cost. In order to utilise the FPGA's parallel-processing capability, the operations in the contact laws algorithms are arranged in a parallel manner and multi-contact patches are tackled simultaneously in the design. The interface between the FPGA device and the host PC is achieved by using a high-throughput and low-latency Ethernet link. The development is based on FASTSIM algorithms, although the design can be adapted and expanded for even more computationally demanding tasks.
Apparatus for integrating a rigid structure into a flexible wall of an inflatable structure
NASA Technical Reports Server (NTRS)
Johnson, Christopher J. (Inventor); Patterson, Ross M. (Inventor); Spexarth, Gary R. (Inventor)
2009-01-01
For an inflatable structure having a flexible outer shell or wall structure having a flexible restraint layer comprising interwoven, load-bearing straps, apparatus for integrating one or more substantially rigid members into the flexible shell. For each rigid member, a corresponding opening is formed through the flexible shell for receiving the rigid member. A plurality of connection devices are mounted on the rigid member for receiving respective ones of the load-bearing straps. In one embodiment, the connection devices comprise inner connecting mechanisms and outer connecting mechanisms, the inner and outer connecting mechanisms being mounted on the substantially rigid structure and spaced along a peripheral edge portion of the structure in an interleafed array in which respective outer connecting mechanisms are interposed between adjacent pairs of inner connecting mechanisms, the outer connecting mechanisms projecting outwardly from the peripheral edge portion of the substantially rigid structure beyond the adjacent inner connecting mechanisms to form a staggered array of connecting mechanisms extending along the panel structure edge portion. In one embodiment, the inner and outer connecting mechanisms form part of an integrated structure rotatably mounted on the rigid member peripheral edge portion.
Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker
2017-01-01
In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense; 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scan with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, and perform 2×10^5 permutations for a 2D QTL problem in 15 hours, using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.
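A small sketch of the permutation-testing pattern such a framework parallelizes, here with Python multiprocessing standing in for the paper's R/Hadoop workflow. The "scan statistic" is a toy single-marker regression maximum on simulated data, not PruneDIRECT.

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
n, markers = 200, 500
genotypes = rng.integers(0, 3, size=(n, markers)).astype(float)   # simulated genotypes
phenotype = rng.standard_normal(n)                                 # simulated phenotype

def max_scan_statistic(pheno):
    # Maximum squared correlation between the phenotype and each marker.
    g = (genotypes - genotypes.mean(0)) / genotypes.std(0)
    p = (pheno - pheno.mean()) / pheno.std()
    return float(np.max((g.T @ p / len(p)) ** 2))

def one_permutation(seed):
    perm = np.random.default_rng(seed).permutation(phenotype)
    return max_scan_statistic(perm)

if __name__ == "__main__":
    observed = max_scan_statistic(phenotype)
    with Pool() as pool:                                   # one permutation per worker task
        null = pool.map(one_permutation, range(1000))
    p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
    print(observed, p_value)
```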
On the reliable and flexible solution of practical subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm for solving subset regression problems is described. The algorithm performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters. This, in combination with a number of extensions of the new technique, makes the method a very flexible tool for analyzing subset regression problems in which the parameters have a physical meaning.
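A hedged sketch of subset selection via pivoted QR. SciPy's standard column pivoting is used here as a stand-in for the paper's new pivoting strategy; the pivot order ranks the original regression parameters, and the data are synthetic.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 6))                 # regressors (columns = physical parameters)
y = A[:, 0] * 2.0 - A[:, 3] * 1.5 + 0.05 * rng.standard_normal(100)

Q, R, piv = qr(A, mode="economic", pivoting=True)
k = 2                                             # size of the desired subset
subset = piv[:k]                                  # most informative original columns
print("selected parameters:", subset)

# Least-squares fit restricted to the selected subset, reusing the QR factors.
coef = solve_triangular(R[:k, :k], (Q.T @ y)[:k])
print("subset coefficients:", coef)
```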
NASA Astrophysics Data System (ADS)
Qiu, Zhi-cheng; Shi, Ming-li; Wang, Bin; Xie, Zhuo-wei
2012-05-01
A rod-cylinder-based pneumatic driving scheme is proposed to suppress the vibration of a flexible smart beam. A pulse code modulation (PCM) method is employed to control the motion of the cylinder's piston rod for simultaneous positioning and vibration suppression. Firstly, the system dynamics model is derived using Hamilton's principle. Its standard state-space representation is obtained for characteristic analysis, controller design, and simulation. Secondly, a genetic algorithm (GA) is applied to optimize and tune the control gain parameters adaptively based on a specific performance index. Numerical simulations are performed on the pneumatically driven elastic beam system, using the established model and a controller whose gains are tuned by the GA optimization process. Finally, an experimental setup for the flexible beam driven by a pneumatic rod cylinder is constructed. Experiments for suppressing vibrations of the flexible beam are conducted. Theoretical analysis, numerical simulation and experimental results demonstrate that the proposed pneumatic drive scheme and the adopted control algorithms are feasible. The large-amplitude vibration of the first bending mode can be suppressed effectively.
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen
1990-01-01
One well known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.
Distance Travelled: Outcomes and Evidence in Flexible Learning Options
ERIC Educational Resources Information Center
Thomas, Joseph; McGinty, Sue; te Riele, Kitty; Wilson, Kimberley
2017-01-01
Flexible learning options (FLOs) provide individualised learning pathways for disengaged young people with strong emphasis on inclusivity and wellbeing support. Amidst a rapid expansion of Australia's flexible learning sector, service providers are under increasing pressure to substantiate participant outcomes. This paper stems from a national…
Core disruptive accident margin seal
Garin, John
1978-01-01
An apparatus for sealing the annulus defined between a substantially cylindrical rotatable first riser assembly and plug combination disposed in a substantially cylindrical second riser assembly and plug combination of a nuclear reactor system. The apparatus comprises a flexible metal member having a first side attached to one of the riser components and a second side extending toward the other riser component and an actuating mechanism attached to the flexible metal member while extending to an accessible location. When the actuating mechanism is not activated, the flexible metal member does not contact the other riser component thus allowing the free rotation of the riser assembly and plug combination. When desired, the actuating mechanism causes the second side of the flexible metal member to contact the other riser component thereby sealing the annulus between the components.
Torsional anharmonicity in the conformational thermodynamics of flexible molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F., III; Clary, David C.
We present an algorithm for calculating the conformational thermodynamics of large, flexible molecules that combines ab initio electronic structure theory calculations with a torsional path integral Monte Carlo (TPIMC) simulation. The new algorithm overcomes the previous limitations of the TPIMC method by including the thermodynamic contributions of non-torsional vibrational modes and by affordably incorporating the ab initio calculation of conformer electronic energies, and it improves the conventional ab initio treatment of conformational thermodynamics by accounting for the anharmonicity of the torsional modes. Using previously published ab initio results and new TPIMC calculations, we apply the algorithm to the conformers of the adrenaline molecule.
Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers
NASA Technical Reports Server (NTRS)
Lind, Rick; Balas, Gary J.
1995-01-01
This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems. This algorithm represents a process that is impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
Basic Research in Digital Stochastic Model Algorithmic Control.
1980-11-01
Contents fragments: IDCOM Description; 8.2 Basic Control Computation; 8.3 Gradient Algorithm; 8.4 Simulation Model; 8.5 Model Modifications; 8.6 Summary. ...constraints, and 3) control trajectory computation. 2.1.1 Internal Model of the System: The multivariable system to be controlled is represented by a... more flexible and adaptive, since the model, criteria, and sampling rates can be adjusted on-line. This flexibility comes from the use of the impulse
Space station dynamic modeling, disturbance accommodation, and adaptive control
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.; Lin, Y. H.; Metter, E.
1985-01-01
Dynamic models for two space station configurations were derived. Space shuttle docking disturbances and their effects on the station and solar panels are quantified. It is shown that hard shuttle docking can cause solar panel buckling. Soft docking and berthing can substantially reduce structural loads at the expense of large shuttle and station attitude excursions. It is found that predocking shuttle momentum reduction is necessary to achieve safe and routine operations. A direct model reference adaptive control scheme is synthesized and evaluated for station model parameter errors and plant dynamics truncations. Both the rigid-body and the flexible modes are treated. It is shown that convergence of the adaptive algorithm can be achieved in 100 seconds with reasonable performance even during shuttle hard docking operations in which station mass and inertia are instantaneously changed by more than 100%.
Model-based gene set analysis for Bioconductor.
Bauer, Sebastian; Robinson, Peter N; Gagneur, Julien
2011-07-01
Gene Ontology and other forms of gene-category analysis play a major role in the evaluation of high-throughput experiments in molecular biology. Single-category enrichment analysis procedures such as Fisher's exact test tend to flag large numbers of redundant categories as significant, which can complicate interpretation. We have recently developed an approach called model-based gene set analysis (MGSA) that substantially reduces the number of redundant categories returned by the gene-category analysis. In this work, we present the Bioconductor package mgsa, which makes the MGSA algorithm available to users of the R language. Our package provides a simple and flexible application programming interface for applying the approach. The mgsa package has been made available as part of Bioconductor 2.8. It is released under the conditions of the Artistic license 2.0. peter.robinson@charite.de; julien.gagneur@embl.de.
Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A
2013-02-01
Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight parameters were determined for the weighted S-TPS-RPM. The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.
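A sketch of the anatomical validation step described above: a deformation estimated from corresponding surface points is applied to marker positions, and the residual distance error (RDE) to the markers' true positions is reported. A generic thin-plate-spline interpolator from SciPy stands in for the weighted S-TPS-RPM algorithm, and all coordinates are made up.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
src_surface = rng.uniform(0, 50, size=(200, 3))                          # bladder surface, planning CT (mm)
true_shift = np.array([3.0, -2.0, 1.0])                                  # hypothetical deformation
dst_surface = src_surface + true_shift + rng.normal(0, 0.3, (200, 3))    # repeat CT surface

# Fit a thin-plate-spline mapping from planning-CT surface points to repeat-CT surface points.
tps = RBFInterpolator(src_surface, dst_surface, kernel="thin_plate_spline")

markers_src = rng.uniform(10, 40, size=(5, 3))          # lipiodol markers, planning CT
markers_true = markers_src + true_shift                  # their actual repeat-CT positions

rde = np.linalg.norm(tps(markers_src) - markers_true, axis=1)   # residual distance errors (mm)
print("RDE per marker (mm):", np.round(rde, 2))
```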
NASA Astrophysics Data System (ADS)
TayyebTaher, M.; Esmaeilzadeh, S. Majid
2017-07-01
This article presents an application of a Model Predictive Controller (MPC) to the attitude control of a geostationary flexible satellite. A SIMO model of the geostationary satellite has been derived using the Lagrange equations, with flexibility included in the modelling equations. The state-space equations are expressed in order to simplify the controller. Naturally, there is no specific tuning rule for finding the MPC parameters that yield the desired controller behaviour. As an intelligent optimization method, a Genetic Algorithm has been used to optimize the performance of the MPC controller by tuning its parameters with respect to the rise time, settling time, and overshoot of the target point of the flexible structure and its mode-shape amplitudes, so as to make large attitude maneuvers possible. The model includes the geosynchronous orbit environment and geostationary satellite parameters. The simulation results for the flexible satellite with an attitude maneuver show the efficiency of the proposed optimization method in comparison with an LQR optimal controller.
Optimization of the double dosimetry algorithm for interventional cardiologists
NASA Astrophysics Data System (ADS)
Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena
2014-11-01
A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure; yet currently there is no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed, and a specific algorithm applicable to the typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative than other known algorithms.
Mechanically flexible organic electroluminescent device with directional light emission
Duggal, Anil Raj; Shiang, Joseph John; Schaepkens, Marc
2005-05-10
A mechanically flexible and environmentally stable organic electroluminescent ("EL") device with directional light emission comprises an organic EL member disposed on a flexible substrate, a surface of which is coated with a multilayer barrier coating which includes at least one sublayer of a substantially transparent organic polymer and at least one sublayer of a substantially transparent inorganic material. The device includes a reflective metal layer disposed on the organic EL member opposite to the substrate. The reflective metal layer provides an increased external quantum efficiency of the device. The reflective metal layer and the multilayer barrier coating form a seal around the organic EL member to reduce the degradation of the device due to environmental elements.
Development and Evaluation of an Order-N Formulation for Multi-Flexible Body Space Systems
NASA Technical Reports Server (NTRS)
Ghosh, Tushar K.; Quiocho, Leslie J.
2013-01-01
This paper presents development of a generic recursive Order-N algorithm for systems with rigid and flexible bodies, in tree or closed-loop topology, with N being the number of bodies of the system. Simulation results are presented for several test cases to verify and evaluate the performance of the code compared to an existing efficient dense mass matrix-based code. The comparison brought out situations where Order-N or mass matrix-based algorithms could be useful.
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, W.; Prieto, F.J.
1993-05-01
We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
Computing Bounds on Resource Levels for Flexible Plans
NASA Technical Reports Server (NTRS)
Muscvettola, Nicola; Rijsman, David
2009-01-01
A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan. Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute it for the looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N × maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable and value-ordering heuristics that exploit the properties of resource envelopes more directly.
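A hedged sketch of the computational kernel the envelope algorithm relies on: a maximum flow on an auxiliary network built from the temporal and resource constraints. The tiny graph below is invented for illustration; constructing the real auxiliary network from a flexible plan follows the paper, not this snippet.

```python
import networkx as nx

G = nx.DiGraph()
# Source -> producer events, consumer events -> sink, with capacities equal to
# the absolute resource change of each event (hypothetical numbers).
G.add_edge("s", "produce_A", capacity=3)
G.add_edge("s", "produce_B", capacity=2)
G.add_edge("consume_C", "t", capacity=4)
# Precedence between events (derived from shortest paths in the temporal network)
# becomes an uncapacitated edge; a large capacity stands in for "infinite".
G.add_edge("produce_A", "consume_C", capacity=10**6)
G.add_edge("produce_B", "consume_C", capacity=10**6)

flow_value, flow = nx.maximum_flow(G, "s", "t")
print(flow_value)   # this value feeds into the envelope bound at the corresponding time point
```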
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-26
..., of reducing costs, of harmonizing rules, and of promoting flexibility. This is a significant.... Regulatory Flexibility Act DoD, GSA, and NASA do not expect this interim rule to have a significant economic impact on a substantial number of small entities within the meaning of the Regulatory Flexibility Act, 5...
OSPREY: protein design with ensembles, flexibility, and provable algorithms.
Gainza, Pablo; Roberts, Kyle E; Georgiev, Ivelin; Lilien, Ryan H; Keedy, Daniel A; Chen, Cheng-Yu; Reza, Faisal; Anderson, Amy C; Richardson, David C; Richardson, Jane S; Donald, Bruce R
2013-01-01
We have developed a suite of protein redesign algorithms that improves realistic in silico modeling of proteins. These algorithms are based on three characteristics that make them unique: (1) improved flexibility of the protein backbone, protein side-chains, and ligand to accurately capture the conformational changes that are induced by mutations to the protein sequence; (2) modeling of proteins and ligands as ensembles of low-energy structures to better approximate binding affinity; and (3) a globally optimal protein design search, guaranteeing that the computational predictions are optimal with respect to the input model. Here, we illustrate the importance of these three characteristics. We then describe OSPREY, a protein redesign suite that implements our protein design algorithms. OSPREY has been used prospectively, with experimental validation, in several biomedically relevant settings. We show in detail how OSPREY has been used to predict resistance mutations and explain why improved flexibility, ensembles, and provability are essential for this application. OSPREY is free and open source under a Lesser GPL license. The latest version is OSPREY 2.0. The program, user manual, and source code are available at www.cs.duke.edu/donaldlab/software.php. osprey@cs.duke.edu. Copyright © 2013 Elsevier Inc. All rights reserved.
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.
Toward improved peptide feature detection in quantitative proteomics using stable isotope labeling.
Nilse, Lars; Sigloch, Florian Christoph; Biniossek, Martin L; Schilling, Oliver
2015-08-01
Reliable detection of peptides in LC-MS data is a key algorithmic step in the analysis of quantitative proteomics experiments. While highly abundant peptides can be detected reliably by most modern software tools, there is much less agreement on medium and low-intensity peptides in a sample. The choice of software tools can have a big impact on the quantification of proteins, especially for proteins that appear in lower concentrations. However, in many experiments, it is precisely this region of less abundant but substantially regulated proteins that holds the biggest potential for discoveries. This is particularly true for discovery proteomics in the pharmacological sector with a specific interest in key regulatory proteins. In this viewpoint article, we discuss how the development of novel software algorithms allows us to study this region of the proteome with increased confidence. Reliable results are one of many aspects to be considered when deciding on a bioinformatics software platform. Deployment into existing IT infrastructures, compatibility with other software packages, scalability, automation, flexibility, and support need to be considered and are briefly addressed in this viewpoint article. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An informatics approach to analyzing the incidentalome.
Berg, Jonathan S; Adams, Michael; Nassar, Nassib; Bizon, Chris; Lee, Kristy; Schmitt, Charles P; Wilhelmsen, Kirk C; Evans, James P
2013-01-01
Next-generation sequencing has transformed genetic research and is poised to revolutionize clinical diagnosis. However, the vast amount of data and inevitable discovery of incidental findings require novel analytic approaches. We therefore implemented for the first time a strategy that utilizes an a priori structured framework and a conservative threshold for selecting clinically relevant incidental findings. We categorized 2,016 genes linked with Mendelian diseases into "bins" based on clinical utility and validity, and used a computational algorithm to analyze 80 whole-genome sequences in order to explore the use of such an approach in a simulated real-world setting. The algorithm effectively reduced the number of variants requiring human review and identified incidental variants with likely clinical relevance. Incorporation of the Human Gene Mutation Database improved the yield for missense mutations but also revealed that a substantial proportion of purported disease-causing mutations were misleading. This approach is adaptable to any clinically relevant bin structure, scalable to the demands of a clinical laboratory workflow, and flexible with respect to advances in genomics. We anticipate that application of this strategy will facilitate pretest informed consent, laboratory analysis, and posttest return of results in a clinical context.
FUX-Sim: Implementation of a fast universal simulation/reconstruction framework for X-ray systems.
Abella, Monica; Serrano, Estefania; Garcia-Blas, Javier; García, Ines; de Molina, Claudia; Carretero, Jesus; Desco, Manuel
2017-01-01
The availability of digital X-ray detectors, together with advances in reconstruction algorithms, creates an opportunity for bringing 3D capabilities to conventional radiology systems. The downside is that reconstruction algorithms for non-standard acquisition protocols are generally based on iterative approaches that involve a high computational burden. The development of new flexible X-ray systems could benefit from computer simulations, which may enable performance to be checked before expensive real systems are implemented. The development of simulation/reconstruction algorithms in this context poses three main difficulties. First, the algorithms deal with large data volumes and are computationally expensive, thus leading to the need for hardware and software optimizations. Second, these optimizations are limited by the high flexibility required to explore new scanning geometries, including fully configurable positioning of source and detector elements. And third, the evolution of the various hardware setups increases the effort required for maintaining and adapting the implementations to current and future programming models. Previous works lack support for completely flexible geometries and/or compatibility with multiple programming models and platforms. In this paper, we present FUX-Sim, a novel X-ray simulation/reconstruction framework that was designed to be flexible and fast. Optimized implementation for different families of GPUs (CUDA and OpenCL) and multi-core CPUs was achieved thanks to a modularized approach based on a layered architecture and parallel implementation of the algorithms for both architectures. A detailed performance evaluation demonstrates that for different system configurations and hardware platforms, FUX-Sim maximizes performance with the CUDA programming model (5 times faster than other state-of-the-art implementations). Furthermore, the CPU and OpenCL programming models allow FUX-Sim to be executed over a wide range of hardware platforms.
Adaptive control strategies for flexible robotic arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1993-01-01
The motivation for this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum-phase characteristics of the flexible robotic arm tip. Most existing neural network control algorithms are based on the direct method and exhibit very high sensitivity, if not unstable closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm was developed and applied to this problem, showing promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady-state response.
The regulation of patient-reported outcome claims: need for a flexible standard.
Morris, Louis A; Miller, David W
2002-01-01
We review the FDA's policies for the regulation of patient-reported outcome (PRO) claims such as quality of life, productivity, satisfaction and symptom reports and suggest alternative standards for substantiation. We base our review on FDA regulatory activities and public statements in the field of advertising substantiation. We compare these activities to the FDA's label substantiation policies and policies for health-economic (HE) claim substantiation. There is an overt inconsistency between the FDA's policies for substantiation of PRO claims in product labels and substantiation for such claims in advertising materials. This results in a higher standard for PRO claims in promotional vehicles than in product labels. Rather than relying on a "substantial evidence" standard, the FDA should consider a more flexible standard, such as the one currently applied to information included in the Clinical Trials section of product labels, or adopting a "competent and reliable scientific evidence" standard as set forth in Section 114 of the Food and Drug Administration Modernization Act (FDAMA) for HE data. We conclude that there needs to be greater consistency for substantiation in product labels and promotional materials. Furthermore, reconceptualizing most PRO claims as benefit extrapolations as opposed to efficacy information suggests a less rigorous standard is necessary.
Explicit-Duration Hidden Markov Model Inference of UP-DOWN States from Continuous Signals
McFarland, James M.; Hahn, Thomas T. G.; Mehta, Mayank R.
2011-01-01
Neocortical neurons show UP-DOWN state (UDS) oscillations under a variety of conditions. These UDS have been extensively studied because of the insight they can yield into the functioning of cortical networks, and their proposed role in putative memory formation. A key element in these studies is determining the precise duration and timing of the UDS. These states are typically determined from the membrane potential of one or a small number of cells, which is often not sufficient to reliably estimate the state of an ensemble of neocortical neurons. The local field potential (LFP) provides an attractive method for determining the state of a patch of cortex with high spatio-temporal resolution; however current methods for inferring UDS from LFP signals lack the robustness and flexibility to be applicable when UDS properties may vary substantially within and across experiments. Here we present an explicit-duration hidden Markov model (EDHMM) framework that is sufficiently general to allow statistically principled inference of UDS from different types of signals (membrane potential, LFP, EEG), combinations of signals (e.g., multichannel LFP recordings) and signal features over long recordings where substantial non-stationarities are present. Using cortical LFPs recorded from urethane-anesthetized mice, we demonstrate that the proposed method allows robust inference of UDS. To illustrate the flexibility of the algorithm we show that it performs well on EEG recordings as well. We then validate these results using simultaneous recordings of the LFP and membrane potential (MP) of nearby cortical neurons, showing that our method offers significant improvements over standard methods. These results could be useful for determining functional connectivity of different brain regions, as well as understanding network dynamics. PMID:21738730
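A simple baseline for the segmentation problem described above: a two-state Gaussian HMM fit to a smoothed LFP-derived feature. This is an ordinary HMM with geometric state durations, not the explicit-duration HMM of the paper, and the synthetic signal is only a stand-in for real recordings (hmmlearn is assumed to be available).

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic feature: alternating low (DOWN) and high (UP) levels plus noise.
states_true = np.repeat(rng.integers(0, 2, size=60), 50)
feature = states_true * 2.0 + rng.normal(0, 0.4, size=states_true.size)

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=50)
model.fit(feature.reshape(-1, 1))
states_hat = model.predict(feature.reshape(-1, 1))   # 0/1 state label per time bin

# Durations of inferred states (in samples), the quantity of interest for UDS analyses.
change = np.flatnonzero(np.diff(states_hat)) + 1
durations = np.diff(np.concatenate(([0], change, [states_hat.size])))
print(durations[:10])
```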
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs the TMS320C6712, a DSP with a high performance-price ratio, together with a large FLASH, the system permits loading and executing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially on the input channel, and allows a convenient interface between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key techniques that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a versatile platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that this hardware is easy to interface with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.
Core disruptive accident margin seal
Garin, John; Belsick, James C.
1978-01-01
An apparatus for sealing the annulus defined between a substantially cylindrical rotatable first riser assembly and plug combination disposed in a substantially cylindrical second riser assembly and plug combination of a nuclear reactor system. The apparatus comprises a flexible member disposed between the first and second riser components and attached to a metal member which is attached to an actuating mechanism. When the actuating mechanism is not actuated, the flexible member does not contact the riser components thus allowing the free rotation of the riser components. When desired, the actuating mechanism causes the flexible member to contact the first and second riser components in a manner to block the annulus defined between the riser components, thereby sealing the annulus between the riser components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wognum, S.; Chai, X.; Hulshof, M. C. C. M.
2013-02-15
Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.
MILSTAR's flexible substrate solar array: Lessons learned, addendum
NASA Technical Reports Server (NTRS)
Gibb, John
1990-01-01
MILSTAR's Flexible Substrate Solar Array (FSSA) is an evolutionary development of the lightweight, flexible substrate design pioneered at Lockheed during the seventies. Many of the features of the design are related to the Solar Array Flight Experiment (SAFE), flown on STS-41D in 1984. FSSA development has created a substantial technology base for future flexible substrate solar arrays such as the array for the Space Station Freedom. Lessons learned during the development of the FSSA can and should be applied to the Freedom array and other future flexible substrate designs.
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of Pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
How does symmetry impact the flexibility of proteins?
Schulze, Bernd; Sljoka, Adnan; Whiteley, Walter
2014-02-13
It is well known that (i) the flexibility and rigidity of proteins are central to their function, (ii) a number of oligomers with several copies of individual protein chains assemble with symmetry in the native state and (iii) added symmetry sometimes leads to added flexibility in structures. We observe that the most common symmetry classes of protein oligomers are also the symmetry classes that lead to increased flexibility in certain three-dimensional structures-and investigate the possible significance of this coincidence. This builds on the well-developed theory of generic rigidity of body-bar frameworks, which permits an analysis of the rigidity and flexibility of molecular structures such as proteins via fast combinatorial algorithms. In particular, we outline some very simple counting rules and possible algorithmic extensions that allow us to predict continuous symmetry-preserving motions in body-bar frameworks that possess non-trivial point-group symmetry. For simplicity, we focus on dimers, which typically assemble with twofold rotational axes, and often have allosteric function that requires motions to link distant sites on the two protein chains.
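The counting rules mentioned above build on the generic rigidity theory of body-bar frameworks. As a rough, hedged illustration only (it ignores the symmetry-extended refinements that are the paper's actual contribution), a Maxwell-style count gives a quick lower bound on the internal degrees of freedom of a body-bar model; the function below is a hypothetical sketch.

```python
def body_bar_dof_lower_bound(num_bodies: int, num_bars: int) -> int:
    """Maxwell-style count for a 3D body-bar framework.

    Each rigid body has 6 degrees of freedom; subtracting the 6 trivial
    rigid-body motions of the whole assembly and one constraint per bar
    gives a lower bound on the number of internal (flexing) degrees of
    freedom. Tay's theorem sharpens this generically, and symmetry-extended
    rules refine it further for symmetric frameworks.
    """
    return max(0, 6 * num_bodies - 6 - num_bars)

# Hypothetical dimer modeled as two rigid bodies joined by k bars
for k in (4, 5, 6):
    print(k, body_bar_dof_lower_bound(2, k))  # 6 independent bars generically lock two bodies together
```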
Approach to recognition of flexible form for credit card expiration date recognition as example
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Nikolaev, Dmitry P.; Ingacheva, Anastasia; Skoryukina, Natalya
2015-12-01
In this paper we consider the task of finding information fields within documents that have a flexible form, using the credit card expiration date field as an example. We discuss the main difficulties and suggest possible solutions. In our case this task has to be solved on mobile devices, so the computational complexity must be as low as possible. We also provide results of an analysis of the suggested algorithm. The error distribution of the recognition system shows that the suggested algorithm solves the task with the required accuracy.
Object-Oriented Design for Sparse Direct Solvers
NASA Technical Reports Server (NTRS)
Dobrian, Florin; Kumfert, Gary; Pothen, Alex
1999-01-01
We discuss the object-oriented design of a software package for solving sparse, symmetric systems of equations (positive definite and indefinite) by direct methods. At the highest layers, we decouple data structure classes from algorithmic classes for flexibility. We describe the important structural and algorithmic classes in our design, and discuss the trade-offs we made for high performance. The kernels at the lower layers were optimized by hand. Our results show no performance loss from our object-oriented design, while providing flexibility, ease of use, and extensibility over solvers using procedural design.
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds—the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^{-1/d+ρ}), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. PMID:27003958
Integrated geometry and grid generation system for complex configurations
NASA Technical Reports Server (NTRS)
Akdag, Vedat; Wulf, Armin
1992-01-01
A grid generation system was developed that enables grid generation for complex configurations. The system called ICEM/CFD is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, interfacing easily with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block to block interface requirement, and generating grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements which is required in an environment where CFD is integrated into a product design cycle.
Scanning holographic optical tweezers.
Shaw, L A; Panas, Robert M; Spadaccini, C M; Hopkins, J B
2017-08-01
The aim of this Letter is to introduce a new optical tweezers approach, called scanning holographic optical tweezers (SHOT), which drastically increases the working area (WA) of the holographic-optical tweezers (HOT) approach, while maintaining tightly focused laser traps. A 12-fold increase in the WA is demonstrated. The SHOT approach achieves its utility by combining the large WA of the scanning optical tweezers (SOT) approach with the flexibility of the HOT approach for simultaneously moving differently structured optical traps in and out of the focal plane. This Letter also demonstrates a new heuristic control algorithm for combining the functionality of the SOT and HOT approaches to efficiently allocate the available laser power among a large number of traps. The proposed approach shows promise for substantially increasing the number of particles that can be handled simultaneously, which would enable optical tweezers additive fabrication technologies to rapidly assemble microgranular materials and structures in reasonable build times.
An Efficient and Accurate Genetic Algorithm for Backcalculation of Flexible Pavement Layer Moduli
DOT National Transportation Integrated Search
2012-12-01
The importance of a backcalculation method in the analysis of elastic modulus in pavement engineering has been : known for decades. Despite many backcalculation programs employing different backcalculation procedures and : algorithms, accurate invers...
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.
Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
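As a hedged illustration of the fixed-K and spherical-cluster assumptions criticized above, the sketch below implements plain Lloyd's K-means in NumPy and applies it to two elongated clusters; it does not reproduce MAP-DP itself, and all data and parameter values are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: K must be fixed a priori, unlike MAP-DP,
    which infers the number of clusters from the data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Two elongated (non-spherical) clusters: the implicit spherical-cluster
# assumption of K-means can split them incorrectly, one of the failure
# modes the paper discusses.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], [5, 0.3], (200, 2)),
               rng.normal([0, 3], [5, 0.3], (200, 2))])
centers, labels = kmeans(X, k=2)
print(centers)
```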
Spatial operator algebra for flexible multibody dynamics
NASA Technical Reports Server (NTRS)
Jain, A.; Rodriguez, G.
1993-01-01
This paper presents an approach to modeling the dynamics of flexible multibody systems such as flexible spacecraft and limber space robotic systems. A large number of degrees of freedom and complex dynamic interactions are typical in these systems. This paper uses spatial operators to develop efficient recursive algorithms for the dynamics of these systems. This approach very efficiently manages complexity by means of a hierarchy of mathematical operations.
Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm
ERIC Educational Resources Information Center
Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.
2009-01-01
Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…
Glaser, Johann; Beisteiner, Roland; Bauer, Herbert; Fischmeister, Florian Ph S
2013-11-09
In concurrent EEG/fMRI recordings, EEG data are impaired by the fMRI gradient artifacts which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, these algorithms lack the flexibility to either leave out or add new steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts from concurrently recorded EEG datasets. It consists of an Analysis, a Correction and an Evaluation framework allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared to different settings. FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS, Allen et al., NeuroImage 12(2):230-239, 2000) and the FMRI Artifact Slice Template Removal (FASTR, Niazy et al., NeuroImage 28(3):720-737, 2005). The obtained results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient artifact relevant performance indices. The FACET toolbox not only provides facilities for all three modalities (data analysis, artifact correction, and evaluation and documentation of the results), but it also offers an easily extendable framework for development and evaluation of new approaches.
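For orientation, the sketch below shows the basic idea of one of the correction approaches named above, Averaged Artifact Subtraction: average the gradient-artifact epochs time-locked to each fMRI volume and subtract the resulting template. This is a hedged NumPy illustration, not FACET's MATLAB implementation, and the synthetic signal is hypothetical.

```python
import numpy as np

def averaged_artifact_subtraction(eeg, triggers, epoch_len):
    """Rough sketch of Averaged Artifact Subtraction (AAS).

    eeg       : 1-D EEG samples from one channel
    triggers  : sample indices where each gradient-artifact epoch starts
    epoch_len : number of samples per artifact epoch
    Assumes equal-length, well-aligned epochs; returns the corrected signal.
    """
    eeg = eeg.astype(float).copy()
    epochs = np.array([eeg[t:t + epoch_len] for t in triggers
                       if t + epoch_len <= len(eeg)])
    template = epochs.mean(axis=0)            # averaged gradient artifact
    for t in triggers:
        if t + epoch_len <= len(eeg):
            eeg[t:t + epoch_len] -= template  # subtract the template at each occurrence
    return eeg

# Hypothetical usage with synthetic data
rng = np.random.default_rng(0)
n_vol, epoch_len = 20, 500
triggers = np.arange(n_vol) * epoch_len
artifact = 50.0 * np.sin(np.linspace(0, 40 * np.pi, epoch_len))
eeg = rng.normal(0, 1, n_vol * epoch_len)
for t in triggers:
    eeg[t:t + epoch_len] += artifact
clean = averaged_artifact_subtraction(eeg, triggers, epoch_len)
```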
Applications of colored petri net and genetic algorithms to cluster tool scheduling
NASA Astrophysics Data System (ADS)
Liu, Tung-Kuan; Kuo, Chih-Jen; Hsiao, Yung-Chin; Tsai, Jinn-Tsong; Chou, Jyh-Horng
2005-12-01
In this paper, we propose a method that uses Coloured Petri Nets (CPN) and a genetic algorithm (GA) to obtain an optimal deadlock-free schedule and to solve the re-entrant problem for the flexible process of the cluster tool. The process of the cluster tool for producing a wafer can usually be classified into three types: 1) sequential process, 2) parallel process, and 3) sequential-parallel process. However, these processes are not economical enough to produce a variety of wafers in small volumes. Therefore, this paper proposes a flexible process in which the operations of fabricating wafers are randomly arranged to achieve the best utilization of the cluster tool. The flexible process may, however, have deadlock and re-entrant problems, which can be detected by CPN. GAs, on the other hand, have been applied to find optimal schedules for many types of manufacturing processes. We therefore integrate CPN and GAs to obtain an optimal schedule that resolves the deadlock and re-entrant problems for the flexible process of the cluster tool.
A Petri net synthesis theory for modeling flexible manufacturing systems.
Jeng, M D
1997-01-01
A theory that synthesizes Petri nets for modeling flexible manufacturing systems is presented. The theory adopts a bottom-up or modular-composition approach to construct net models. Each module is modeled as a resource control net (RCN), which represents a subsystem that controls a resource type in a flexible manufacturing system. Interactions among the modules are described as the common transition and transition subnets. The net obtained by merging the modules with two minimal restrictions is shown to be conservative and thus bounded. An algorithm is developed to detect two sufficient conditions for structural liveness of the net. The algorithm examines only the net's structure and the initial marking, and appears to be more efficient than state enumeration techniques such as the reachability tree method. In this paper, the sufficient conditions for liveness are shown to be related to some structural objects called siphons. To demonstrate the applicability of the theory, a flexible manufacturing system of a moderate size is modeled and analyzed using the proposed theory.
Improved ocean-color remote sensing in the Arctic using the POLYMER algorithm
NASA Astrophysics Data System (ADS)
Frouin, Robert; Deschamps, Pierre-Yves; Ramon, Didier; Steinmetz, François
2012-10-01
Atmospheric correction of ocean-color imagery in the Arctic brings some specific challenges that the standard atmospheric correction algorithm does not address, namely low solar elevation, high cloud frequency, multi-layered polar clouds, presence of ice in the field-of-view, and adjacency effects from highly reflecting surfaces covered by snow and ice and from clouds. The challenges may be addressed using a flexible atmospheric correction algorithm, referred to as POLYMER (Steinmetz et al., 2011). This algorithm does not use a specific aerosol model, but fits the atmospheric reflectance by a polynomial with a non spectral term that accounts for any non spectral scattering (clouds, coarse aerosol mode) or reflection (glitter, whitecaps, small ice surfaces within the instrument field of view), a spectral term with a law in wavelength to the power -1 (fine aerosol mode), and a spectral term with a law in wavelength to the power -4 (molecular scattering, adjacency effects from clouds and white surfaces). Tests are performed on selected MERIS imagery acquired over Arctic Seas. The derived ocean properties, i.e., marine reflectance and chlorophyll concentration, are compared with those obtained with the standard MEGS algorithm. The POLYMER estimates are more realistic in regions affected by the ice environment, e.g., chlorophyll concentration is higher near the ice edge, and spatial coverage is substantially increased. Good retrievals are obtained in the presence of thin clouds, with ocean-color features exhibiting spatial continuity from clear to cloudy regions. The POLYMER estimates of marine reflectance agree better with in situ measurements than the MEGS estimates. Biases are 0.001 or less in magnitude, except at 412 and 443 nm, where they reach 0.005 and 0.002, respectively, and root-mean-squared difference decreases from 0.006 at 412 nm to less than 0.001 at 620 and 665 nm. A first application to MODIS imagery is presented, revealing that the POLYMER algorithm is robust when pixels are contaminated by sea ice.
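The polynomial model described above (a non-spectral term plus terms in wavelength to the powers -1 and -4) can be fitted by ordinary least squares. The sketch below illustrates that three-term fit in NumPy; the band set and reflectance values are hypothetical and this is not the actual POLYMER code.

```python
import numpy as np

# MERIS-like band centres in nm (illustrative subset)
wavelengths = np.array([412., 443., 490., 510., 560., 620., 665.])
rho_atm = np.array([0.052, 0.045, 0.036, 0.033, 0.027, 0.022, 0.020])  # hypothetical reflectances

# Design matrix for rho(lambda) = c0 + c1*lambda**-1 + c2*lambda**-4
A = np.column_stack([np.ones_like(wavelengths),
                     wavelengths ** -1,
                     wavelengths ** -4])
coeffs, *_ = np.linalg.lstsq(A, rho_atm, rcond=None)
print(coeffs)                     # c0: non-spectral term, c1: fine aerosol, c2: molecular term
residual = rho_atm - A @ coeffs   # residual attributed to the water-leaving signal
```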
BIBLIO: A Reprint File Management Algorithm
ERIC Educational Resources Information Center
Zelnio, Robert N.; And Others
1977-01-01
The development of a simple computer algorithm designed for use by the individual educator or researcher in maintaining and searching reprint files is reported. Called BIBLIO, the system is inexpensive and easy to operate and maintain without sacrificing flexibility and utility. (LBH)
DOT National Transportation Integrated Search
2012-12-01
Backcalculation of pavement moduli has been an intensively researched subject for more than four decades. Despite the existence of many backcalculation programs employing different backcalculation procedures and algorithms, accurate inverse of the la...
High-power fused assemblies enabled by advances in fiber-processing technologies
NASA Astrophysics Data System (ADS)
Wiley, Robert; Clark, Brett
2011-02-01
The power handling capabilities of fiber lasers are limited by the technologies available to fabricate and assemble the key optical system components. Previous tools for the assembly, tapering, and fusion of fiber laser elements have had drawbacks with regard to temperature range, alignment capability, assembly flexibility and surface contamination. To provide expanded capabilities for fiber laser assembly, a wide-area electrical plasma heat source was used in conjunction with an optimized image analysis method and a flexible alignment system, integrated according to mechatronic principles. High-resolution imaging and vision-based measurement provided feedback to adjust assembly, fusion, and tapering process parameters. The system was used to perform assembly steps including dissimilar-fiber splicing, tapering, bundling, capillary bundling, and fusion of fibers to bulk optic devices up to several mm in diameter. A wide range of fiber types and diameters were tested, including extremely large diameters and photonic crystal fibers. The assemblies were evaluated for conformation to optical and mechanical design criteria, such as taper geometry and splice loss. The completed assemblies met the performance targets and exhibited reduced surface contamination compared to assemblies prepared on previously existing equipment. The imaging system and image analysis algorithms provided in situ fiber geometry measurement data that agreed well with external measurement. The ability to adjust operating parameters dynamically based on imaging was shown to provide substantial performance benefits, particularly in the tapering of fibers and bundles. The integrated design approach was shown to provide sufficient flexibility to perform all required operations with a minimum of reconfiguration.
NASA Astrophysics Data System (ADS)
Wang, Chun; Ji, Zhicheng; Wang, Yan
2017-07-01
In this paper, the multi-objective flexible job shop scheduling problem (MOFJSP) was studied with the objectives of minimizing makespan, total workload and critical workload. A variable neighborhood evolutionary algorithm (VNEA) was proposed to obtain a set of Pareto optimal solutions. First, two novel crowding operators, defined in the decision space and the objective space, were proposed and used in mating selection and environmental selection, respectively. Then, two well-designed neighborhood structures were used in local search; these take the problem characteristics into account and support fast convergence. Finally, an extensive comparison was carried out with state-of-the-art methods specifically presented for solving MOFJSP on well-known benchmark instances. The results show that the proposed VNEA is more effective than other algorithms in solving MOFJSP.
NASA Astrophysics Data System (ADS)
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
Flexible flow shop (or a hybrid flow shop) scheduling problem is an extension of classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages) with each stage having only one machine. If any stage contains more than one machine for providing alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). FFSP which contains all the complexities involved in a simple flow shop and parallel machine scheduling problems is a well-known NP-hard (Non-deterministic polynomial time) problem. Owing to high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and JAYA algorithm are chosen for the study because these are not only recent meta-heuristics but they do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after few iterations and get trapped at the local optima. To alleviate such drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, mutation strategy (inspired from genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
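The JAYA update referred to above moves each candidate toward the current best solution and away from the worst, with no algorithm-specific tuning parameters. The sketch below shows one such update step on a continuous test objective; the paper applies the idea to a discrete scheduling encoding with an added local search and mutation, which are not reproduced here, and the objective used below is a hypothetical stand-in for a makespan objective.

```python
import numpy as np

def objective(pop):
    """Hypothetical continuous surrogate for a makespan objective."""
    return (pop ** 2).sum(axis=1)

def jaya_step(pop, fitness, rng):
    """One JAYA iteration: move each solution toward the best and away from
    the worst; only population size and iteration count need to be chosen."""
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    candidate = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    cand_fit = objective(candidate)
    improved = cand_fit < fitness
    pop[improved] = candidate[improved]        # greedy acceptance of improvements
    fitness[improved] = cand_fit[improved]
    return pop, fitness

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (20, 4))
fit = objective(pop)
for _ in range(50):
    pop, fit = jaya_step(pop, fit, rng)
print(fit.min())
```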
SPS flexible system control assessment analysis
NASA Technical Reports Server (NTRS)
Balas, M. J.
1981-01-01
Active control of the Satellite Power System (SPS), a large mechanically flexible aerospace structure, is addressed. The control algorithm is the principal component in the feedback link from sensors to actuators. An analysis of the interaction of the SPS structure and its active control system is presented.
Vlsi implementation of flexible architecture for decision tree classification in data mining
NASA Astrophysics Data System (ADS)
Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak
2017-07-01
Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous increase in the size of the data being collected and analyzed. Classification is a central difficulty in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for decision tree classification in data mining using the C4.5 algorithm.
ERIC Educational Resources Information Center
Rehfuss, John
1995-01-01
Privatization calls for substantially trimming the scope and breadth of government services, replacing them with private or other nongovernmental operators. The attraction of privatization is reduced costs and increased management flexibility. To date, the arrangement has received substantial support from students and parents in situations that…
Flexible Space-Filling Designs for Complex System Simulations
2013-06-01
interior of the experimental region and cannot fit higher-order models. We present a genetic algorithm that constructs space-filling designs with...Computer Experiments, Design of Experiments, Genetic Algorithm , Latin Hypercube, Response Surface Methodology, Nearly Orthogonal 15. NUMBER OF PAGES 147...experimental region and cannot fit higher-order models. We present a genetic algorithm that constructs space-filling designs with minimal correlations
NASA Astrophysics Data System (ADS)
Zhu, Ruijie; Zhao, Yongli; Yang, Hui; Tan, Yuanlong; Chen, Haoran; Zhang, Jie; Jue, Jason P.
2016-08-01
Network virtualization can eradicate the ossification of the infrastructure and stimulate innovation in new network architectures and applications. Elastic optical networks (EONs) are ideal substrate networks for provisioning flexible virtual optical network (VON) services. However, as network traffic continues to increase exponentially, the capacity of EONs will soon reach its physical limit. To further increase network flexibility and capacity, the concept of EONs is extended into the spatial domain. How to map a VON onto the substrate network while fully using the spectral and spatial resources is therefore extremely important; this process is called VON embedding (VONE). Considering the two kinds of resources at the same time during the embedding process, we propose two VONE algorithms, the adjacent link embedding algorithm (ALEA) and the remote link embedding algorithm (RLEA). First, we introduce a model of the VONE problem. Then we design an embedding ability measurement for network elements. Based on the network elements' embedding ability, the two VONE algorithms are constructed. Simulation results show that the proposed VONE algorithms achieve better performance than the baseline algorithm in terms of blocking probability and revenue-to-cost ratio.
Biclustering Protein Complex Interactions with a Biclique FindingAlgorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen
2006-12-01
Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from an L1 constraint to an Lp constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|) where |E| is the number of edges. It relies on a matrix vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at protein domain level can reveal many underlying linkages. We show several new biologically significant results.
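For context, the classical (L1) Motzkin-Straus formalism maximizes x^T A x over the probability simplex, and a multiplicative (replicator) update that uses one matrix-vector product per iteration increases this objective. The sketch below shows only that baseline case, not the paper's Lp generalization or the biclique formulation; the example graph is hypothetical.

```python
import numpy as np

def motzkin_straus_replicator(A, iters=500, tol=1e-9):
    """Maximize x^T A x over the simplex with multiplicative (replicator) updates.

    For a graph adjacency matrix, the classical Motzkin-Straus theorem relates
    this maximum to the clique number. Each step costs one matrix-vector product.
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        Ax = A @ x
        denom = x @ Ax
        if denom < tol:
            break
        x_new = x * Ax / denom
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return x

# Hypothetical 4-node graph containing a triangle {0, 1, 2}
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
print(np.round(motzkin_straus_replicator(A), 3))  # support concentrates on the maximum clique
```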
A novel imaging technique for measuring kinematics of light-weight flexible structures.
Zakaria, Mohamed Y; Eliethy, Ahmed S; Canfield, Robert A; Hajj, Muhammad R
2016-07-01
A new imaging algorithm is proposed to capture the kinematics of flexible, thin, light structures, including frequencies and motion amplitudes, for real-time analysis. The studied case is a thin flexible beam that is preset at different angles of attack in a wind tunnel. As the angle of attack is increased beyond a critical value, the beam was observed to undergo a static deflection that is followed by limit cycle oscillations. Imaging analysis of the beam vibrations shows that the motion consists of a superposition of the bending and torsion modes. The proposed algorithm was able to capture the oscillation amplitudes as well as the frequencies of both bending and torsion modes. The analysis results are validated through comparison with measurements from a piezoelectric sensor that is attached to the beam at its root.
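The frequency-extraction step reported above can be illustrated with a simple spectral analysis of a tracked-point displacement signal; the sketch below shows that step only (not the imaging or tracking algorithm), and the sampling rate and mode frequencies are hypothetical.

```python
import numpy as np

def dominant_frequencies(displacement, fs, n_peaks=2):
    """Return the n_peaks strongest frequencies (Hz) in a displacement signal
    sampled at fs Hz, e.g. the bending and torsion components of a beam tip."""
    disp = displacement - np.mean(displacement)
    spectrum = np.abs(np.fft.rfft(disp))
    freqs = np.fft.rfftfreq(len(disp), d=1.0 / fs)
    order = np.argsort(spectrum)[::-1]
    peaks = [freqs[i] for i in order if freqs[i] > 0][:n_peaks]
    return sorted(peaks)

# Hypothetical signal with 4 Hz bending and 27 Hz torsion content
fs = 200.0
t = np.arange(0, 5, 1 / fs)
signal = 1.0 * np.sin(2 * np.pi * 4 * t) + 0.3 * np.sin(2 * np.pi * 27 * t)
print(dominant_frequencies(signal, fs))  # approximately [4.0, 27.0]
```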
Thin film with oriented cracks on a flexible substrate
Feng, Bao; McGilvray, Andrew; Shi, Bo
2010-07-27
A thermoelectric film is disclosed. The thermoelectric film includes a substrate that is substantially electrically non-conductive and flexible and a thermoelectric material that is deposited on at least one surface of the substrate. The thermoelectric film also includes multiple cracks oriented in a predetermined direction.
A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem
NASA Astrophysics Data System (ADS)
Pourrahimian, Parinaz
2017-11-01
An Automated Guided Vehicle System (AGVS) provides the flexibility and automation demanded by a Flexible Manufacturing System (FMS). However, with the growing concern for responsible management of resource use, it is crucial to manage these vehicles efficiently in order to reduce travel time and control conflicts and congestion. This paper presents the development of a new Memetic Algorithm (MA) for optimizing the partitioning problem of tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search and apply a local search to bring the solutions to a local optimum. A new Tabu Search (TS) has been developed and combined with the GA to refine the individuals newly generated by the GA. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using MATLAB. This study also compares the objective function of the proposed MA with that of the GA. The results show that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with both large and small numbers of zones, by 1.26 on average.
Flexibility of Bricard's linkages and other structures via resultants and computer algebra.
Lewis, Robert H; Coutsias, Evangelos A
2016-07-01
Flexibility of structures is extremely important for chemistry and robotics. Following our earlier work, we study flexibility using polynomial equations, resultants, and a symbolic algorithm of our creation that analyzes the resultant. We show that the software solves a classic arrangement of quadrilaterals in the plane due to Bricard. We fill in several gaps in Bricard's work and discover new flexible arrangements that he was apparently unaware of. This provides strong evidence for the maturity of the software, and is a wonderful example of mathematical discovery via computer assisted experiment.
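The elimination step at the heart of the analysis above is the polynomial resultant. A tiny SymPy example of that operation is given below; the two closure equations are hypothetical and the authors' specialized symbolic algorithm for analyzing the resultant is not reproduced.

```python
from sympy import symbols, resultant, expand

x, y, a = symbols('x y a')

# Two hypothetical closure equations of a planar linkage in variables x, y
f = x**2 + y**2 - 1            # point constrained to a unit circle
g = (x - a)**2 + y**2 - 4      # and to distance 2 from a point at (a, 0)

# Eliminate y: the resultant is a polynomial in x (and the parameter a) whose
# roots are the x-coordinates of the assembly configurations. Flexibility of a
# parameter family shows up when such resultants vanish identically.
print(expand(resultant(f, g, y)))
```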
Knowledge-Guided Docking of WW Domain Proteins and Flexible Ligands
NASA Astrophysics Data System (ADS)
Lu, Haiyun; Li, Hao; Banu Bte Sm Rashid, Shamima; Leow, Wee Kheng; Liou, Yih-Cherng
Studies of interactions between protein domains and ligands are important in many aspects such as cellular signaling. We present a knowledge-guided approach for docking protein domains and flexible ligands. The approach is applied to the WW domain, a small protein module mediating signaling complexes which have been implicated in diseases such as muscular dystrophy and Liddle’s syndrome. The first stage of the approach employs a substring search for two binding grooves of WW domains and possible binding motifs of peptide ligands based on known features. The second stage aligns the ligand’s peptide backbone to the two binding grooves using a quasi-Newton constrained optimization algorithm. The backbone-aligned ligands produced serve as good starting points to the third stage which uses any flexible docking algorithm to perform the docking. The experimental results demonstrate that the backbone alignment method in the second stage performs better than conventional rigid superposition given two binding constraints. It is also shown that using the backbone-aligned ligands as initial configurations improves the flexible docking in the third stage. The presented approach can also be applied to other protein domains that involve binding of flexible ligand to two or more binding sites.
Fast and anisotropic flexibility-rigidity index for protein flexibility and fluctuation analysis
NASA Astrophysics Data System (ADS)
Opron, Kristopher; Xia, Kelin; Wei, Guo-Wei
2014-06-01
Protein structural fluctuation, typically measured by Debye-Waller factors, or B-factors, is a manifestation of protein flexibility, which strongly correlates to protein function. The flexibility-rigidity index (FRI) is a newly proposed method for the construction of atomic rigidity functions required in the theory of continuum elasticity with atomic rigidity, which is a new multiscale formalism for describing excessively large biomolecular systems. The FRI method analyzes protein rigidity and flexibility and is capable of predicting protein B-factors without resorting to matrix diagonalization. A fundamental assumption used in the FRI is that protein structures are uniquely determined by various internal and external interactions, while the protein functions, such as stability and flexibility, are solely determined by the structure. As such, one can predict protein flexibility without resorting to the protein interaction Hamiltonian. Consequently, bypassing the matrix diagonalization, the original FRI has a computational complexity of O(N^2). This work introduces a fast FRI (fFRI) algorithm for the flexibility analysis of large macromolecules. The proposed fFRI further reduces the computational complexity to O(N). Additionally, we propose anisotropic FRI (aFRI) algorithms for the analysis of protein collective dynamics. The aFRI algorithms permit adaptive Hessian matrices, from a completely global 3N × 3N matrix to completely local 3 × 3 matrices. These 3 × 3 matrices, despite being calculated locally, also contain non-local correlation information. Eigenvectors obtained from the proposed aFRI algorithms are able to demonstrate collective motions. Moreover, we investigate the performance of FRI by employing four families of radial basis correlation functions. Both parameter optimized and parameter-free FRI methods are explored. Furthermore, we compare the accuracy and efficiency of FRI with some established approaches to flexibility analysis, namely, normal mode analysis and Gaussian network model (GNM). The accuracy of the FRI method is tested using four sets of proteins, three sets of relatively small-, medium-, and large-sized structures and an extended set of 365 proteins. A fifth set of proteins is used to compare the efficiency of the FRI, fFRI, aFRI, and GNM methods. Intensive validation and comparison indicate that the FRI, particularly the fFRI, is orders of magnitude more efficient and about 10% more accurate overall than some of the most popular methods in the field. The proposed fFRI is able to predict B-factors for α-carbons of the HIV virus capsid (313 236 residues) in less than 30 seconds on a single processor using only one core. Finally, we demonstrate the application of FRI and aFRI to protein domain analysis.
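As a hedged sketch of the original O(N^2) FRI described above: each residue's rigidity index sums a decaying correlation function over all other residues, the flexibility index is its reciprocal, and B-factors are predicted by a linear fit. The kernel form and parameter values below are illustrative, not the paper's optimized choices.

```python
import numpy as np

def fri_flexibility(coords, eta=3.0, kappa=2.0):
    """Original (O(N^2)) flexibility-rigidity index with an exponential kernel.

    coords : (N, 3) C-alpha coordinates in Angstroms
    eta    : kernel length scale; kappa : kernel order (illustrative values)
    Returns per-residue flexibility indices f_i = 1 / mu_i, where the rigidity
    index mu_i sums the correlation function over all other residues.
    """
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    corr = np.exp(-(dist / eta) ** kappa)
    np.fill_diagonal(corr, 0.0)        # exclude self-correlation
    mu = corr.sum(axis=1)              # rigidity index
    return 1.0 / mu                    # flexibility index

def predict_bfactors(flex, b_exp):
    """Least-squares fit B_i ~ a * f_i + b against experimental B-factors."""
    A = np.column_stack([flex, np.ones_like(flex)])
    (a, b), *_ = np.linalg.lstsq(A, b_exp, rcond=None)
    return a * flex + b
```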
Accelerated probabilistic inference of RNA structure evolution
Holmes, Ian
2005-01-01
Background Pairwise stochastic context-free grammars (Pair SCFGs) are powerful tools for evolutionary analysis of RNA, including simultaneous RNA sequence alignment and secondary structure prediction, but the associated algorithms are intensive in both CPU and memory usage. The same problem is faced by other RNA alignment-and-folding algorithms based on Sankoff's 1985 algorithm. It is therefore desirable to constrain such algorithms, by pre-processing the sequences and using this first pass to limit the range of structures and/or alignments that can be considered. Results We demonstrate how flexible classes of constraint can be imposed, greatly reducing the computational costs while maintaining a high quality of structural homology prediction. Any score-attributed context-free grammar (e.g. energy-based scoring schemes, or conditionally normalized Pair SCFGs) is amenable to this treatment. It is now possible to combine independent structural and alignment constraints of unprecedented general flexibility in Pair SCFG alignment algorithms. We outline several applications to the bioinformatics of RNA sequence and structure, including Waterman-Eggert N-best alignments and progressive multiple alignment. We evaluate the performance of the algorithm on test examples from the RFAM database. Conclusion A program, Stemloc, that implements these algorithms for efficient RNA sequence alignment and structure prediction is available under the GNU General Public License. PMID:15790387
Implementation of input command shaping to reduce vibration in flexible space structures
NASA Technical Reports Server (NTRS)
Chang, Kenneth W.; Seering, Warren P.; Rappole, B. Whitney
1992-01-01
Viewgraphs on implementation of input command shaping to reduce vibration in flexible space structures are presented. Goals of the research are to explore theory of input command shaping to find an efficient algorithm for flexible space structures; to characterize Middeck Active Control Experiment (MACE) test article; and to implement input shaper on the MACE structure and interpret results. Background on input shaping, simulation results, experimental results, and future work are included.
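One common form of input command shaping is the two-impulse Zero-Vibration (ZV) shaper, in which the command is convolved with two impulses timed and scaled so that the residual vibration of a lightly damped mode cancels. The sketch below is a generic illustration of that idea, not the MACE implementation; the mode frequency, damping and time step are hypothetical.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse Zero-Vibration (ZV) shaper for a mode with natural
    frequency wn (rad/s) and damping ratio zeta, discretized at step dt (s)."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)        # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    amps = np.array([1.0, K]) / (1.0 + K)     # impulse amplitudes sum to 1
    times = np.array([0.0, np.pi / wd])       # second impulse at half the damped period
    shaper = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, t in zip(amps, times):
        shaper[int(round(t / dt))] += a
    return shaper

# Shape a step command for a hypothetical 2 Hz, 2% damped structural mode
dt = 0.001
shaper = zv_shaper(2 * 2 * np.pi, 0.02, dt)
step = np.ones(2000)
shaped_command = np.convolve(step, shaper)[:len(step)]  # vibration-cancelling command
```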
Creating Connections: College Innovations in Flexibility, Access and Participation.
ERIC Educational Resources Information Center
Further Education Development Agency, London (England).
This document contains 14 papers explaining how 12 further education colleges in the United Kingdom used fellowship funds to maximize their use of current information and learning technologies and make other substantial innovations to improve their flexibility, accessibility, and rates of participation. The following papers are included:…
An Online Prediction Platform to Support the Environmental ...
Historical QSAR models are currently utilized across a broad range of applications within the U.S. Environmental Protection Agency (EPA). These models predict basic physicochemical properties (e.g., logP, aqueous solubility, vapor pressure), which are then incorporated into exposure, fate and transport models. Whereas the classical manner of publishing results in peer-reviewed journals remains appropriate, there are substantial benefits to be gained by providing enhanced, open access to the training data sets and resulting models. Benefits include improved transparency, more flexibility to expand training sets and improve model algorithms, and greater ability to independently characterize model performance both globally and in local areas of chemistry. We have developed a web-based prediction platform that uses open-source descriptors and modeling algorithms, employs modern cheminformatics technologies, and is tailored for ease of use by the toxicology and environmental regulatory community. This tool also provides web-services to meet both EPA’s projects and the modeling community at-large. The platform hosts models developed within EPA’s National Center for Computational Toxicology, as well as those developed by other EPA scientists and the outside scientific community. Recognizing that there are other on-line QSAR model platforms currently available which have additional capabilities, we connect to such services, where possible, to produce an integrated
Epigraph: A Vaccine Design Tool Applied to an HIV Therapeutic Vaccine and a Pan-Filovirus Vaccine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theiler, James; Yoon, Hyejin; Yusim, Karina
Epigraph is an efficient graph-based algorithm for designing vaccine antigens to optimize potential T-cell epitope (PTE) coverage. Functionally, epigraph vaccine antigens are similar to Mosaic vaccines, which have demonstrated effectiveness in preliminary HIV non-human primate studies. In contrast to the Mosaic algorithm, Epigraph is substantially faster, and in restricted cases, provides a mathematically optimal solution. Furthermore, epigraph has new features that enable enhanced vaccine design flexibility. These features include the ability to exclude rare epitopes from a design, to optimize population coverage based on inexact epitope matches, and to apply the code to both aligned and unaligned input sequences. Epigraph was developed to provide practical design solutions for two outstanding vaccine problems. The first of these is a personalized approach to a therapeutic T-cell HIV vaccine that would provide antigens with an excellent match to an individual’s infecting strain, intended to contain or clear a chronic infection. The second is a pan-filovirus vaccine, with the potential to protect against all known viruses in the Filoviridae family, including ebolaviruses. A web-based interface to run the Epigraph tool suite is available (http://www.hiv.lanl.gov/content/sequence/EPIGRAPH/epigraph.html).
Numerical algorithms for computations of feedback laws arising in control of flexible systems
NASA Technical Reports Server (NTRS)
Lasiecka, Irena
1989-01-01
Several continuous models will be examined, which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws (particularly stabilizing feedbacks) are examined, with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty of the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite dimensional) systems with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.
NASA Astrophysics Data System (ADS)
Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.
2018-03-01
The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). A respective improvement in RMS capacity prediction error from 11% to 5% is found by increasing the model accuracy. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that the revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.
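With the simplest "bucket" style battery model, price arbitrage over a horizon reduces to a linear program; the degradation-aware electrochemical models the paper actually uses require the nonlinear optimal control formulation and are not shown here. The sketch below poses that simple arbitrage LP with SciPy; prices, capacity, power limit and efficiency are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly prices (EUR/MWh) and a 1 MWh / 0.5 MW battery, 90% round-trip efficiency
price = np.array([30., 25., 20., 22., 40., 55., 60., 45.])
T, e_max, p_max, eff = len(price), 1.0, 0.5, 0.9

# Decision variables: charge c_t and discharge d_t (MW) for each hour.
# Maximize revenue = sum(price*d) - sum(price*c)  ->  minimize the negative.
c_obj = np.concatenate([price, -price])

# State of charge: cumulative(eff*c - d) must stay within [0, e_max].
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([np.hstack([L * eff, -L]),      # state of charge <= e_max
                  np.hstack([-L * eff, L])])     # state of charge >= 0
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T))
charge, discharge = res.x[:T], res.x[T:]
print(round(-res.fun, 2), "EUR revenue over the horizon (simple bucket model)")
```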
A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Obergfell, Klaus
1991-01-01
The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson, et al 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
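The static positioning algorithm described above maps a measured tip-position error through the manipulator Jacobian to a joint reference update, and iterates with fresh vision measurements. The sketch below illustrates that outer loop for a planar two-link arm with a rigid-link kinematic model standing in for the vision measurement; link lengths, gain and target are hypothetical.

```python
import numpy as np

def forward_kinematics(q, l1=1.0, l2=0.8):
    """Tip position of a planar two-link arm (rigid-link approximation)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=0.8):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [l1 * c1 + l2 * c12, l2 * c12]])

def static_positioning(q, target, measure_tip, gain=0.5, tol=1e-3, max_iter=50):
    """Look-and-move outer loop: convert the measured tip error into a joint
    reference update via the Jacobian (J dq = dx), then repeat."""
    for _ in range(max_iter):
        error = target - measure_tip(q)            # measurement from the vision system
        if np.linalg.norm(error) < tol:
            break
        dq = np.linalg.solve(jacobian(q), error)   # solve J dq = dx
        q = q + gain * dq                          # new reference for the joint controller
    return q

q = static_positioning(np.array([0.3, 0.6]), np.array([1.2, 0.8]), forward_kinematics)
print(np.round(forward_kinematics(q), 3))
```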
Computing the Envelope for Stepwise-Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2002-01-01
Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network whose nodes are the events and whose edges are the necessary predecessor links between events. A staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
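One way to read the flow-network ingredient above is as a maximum-weight closure problem over the pending events, solvable with a single min-cut/max-flow computation. The sketch below illustrates that step only, with made-up events, weights, and predecessor links rather than the paper's full staged construction.

```python
# Heavily hedged sketch: choose the best consistent subset of pending events
# (max-weight closure) via a min-cut; not the paper's construction verbatim.
import networkx as nx

# Pending events: positive = resource production, negative = consumption.
events = {"p1": +3, "p2": +2, "c1": -2, "c2": -1}
# Links meaning "counting this event forces also counting that one".
forces = [("p1", "c1"), ("p2", "c1"), ("p2", "c2")]

G = nx.DiGraph()
for e, w in events.items():
    if w > 0:
        G.add_edge("s", e, capacity=w)        # producers hang off the source
    else:
        G.add_edge(e, "t", capacity=-w)       # consumers feed the sink
for u, v in forces:
    G.add_edge(u, v, capacity=float("inf"))   # closure edges: unbounded capacity

cut_value, _ = nx.minimum_cut(G, "s", "t")
best_increment = sum(w for w in events.values() if w > 0) - cut_value
print("max achievable net production from pending events:", best_increment)
```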
Fast algorithm for computing complex number-theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix FFT algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
NASA Astrophysics Data System (ADS)
Ignatyev, A. V.; Ignatyev, V. A.; Onischenko, E. V.
2017-11-01
This article continues the authors' work on the development of algorithms that implement the finite element method in the form of a classical mixed method for the analysis of geometrically nonlinear bar systems [1-3]. The paper describes an improved algorithm for forming the system of nonlinear governing equations for flexible plane frames and bars with large nodal displacements, based on the finite element method in the classical mixed form and a step-by-step loading procedure. An example of the analysis is given.
Least-squares sequential parameter and state estimation for large space structures
NASA Technical Reports Server (NTRS)
Thau, F. E.; Eliazov, T.; Montgomery, R. C.
1982-01-01
This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. The approach combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allow sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
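A minimal sketch of the sequential accumulation idea: each new batch of measurement rows is folded into a compact triangular factor via Householder-based QR (numpy's qr), so parameter estimates can be refreshed without storing the full history. The simple linear regression model standing in for the modal estimation problem is an illustrative assumption.

```python
# Minimal sketch of sequential least squares via repeated Householder QR.
import numpy as np

rng = np.random.default_rng(0)
n_par = 3
theta_true = np.array([1.0, -0.5, 0.25])

R = np.zeros((0, n_par))      # triangular factor accumulated so far
z = np.zeros(0)               # corresponding transformed right-hand side

for batch in range(50):                       # stream of measurement batches
    A = rng.normal(size=(5, n_par))           # regressors (e.g., modal functions)
    y = A @ theta_true + 0.01 * rng.normal(size=5)
    # Stack the old triangle on top of the new rows and re-triangularize.
    Q, R_new = np.linalg.qr(np.vstack([R, A]))
    z = Q.T @ np.concatenate([z, y])
    R, z = R_new[:n_par], z[:n_par]           # keep only the compact part

theta_hat = np.linalg.solve(R, z)
print("estimate:", np.round(theta_hat, 4))
```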
Adaptive optics based non-null interferometry for optical free form surfaces test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Zhou, Sheng; Li, Jingsong; Yu, Benli
2018-03-01
An adaptive optics based non-null interferometry (ANI) is proposed for optical free-form surface testing, in which an open-loop deformable mirror (DM) is employed as a reflective compensator to flexibly compensate various low-order aberrations. The residual wavefront aberration is handled by the multi-configuration ray tracing (MCRT) algorithm, which is based on simultaneous ray tracing through multiple system models, each with a different DM surface deformation. With the MCRT algorithm, the final figure error can be extracted together with the surface misalignment aberration correction after the initial system calibration. Flexible, high-accuracy testing of free-form surfaces is achieved without an auxiliary device for monitoring the DM deformation. Experiments proving the feasibility, repeatability and high accuracy of the ANI were carried out on a bi-conic surface and a paraboloidal surface, with a highly stable ALPAO DM88. The accuracy of the final test result for the paraboloidal surface was better than λ/20 (PV). It is a successful attempt in flexible optical free-form surface metrology and has considerable potential for future applications as DM technology develops.
Agile Multi-Scale Decompositions for Automatic Image Registration
NASA Technical Reports Server (NTRS)
Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline
2016-01-01
In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
The SAPHIRE server: a new algorithm and implementation.
Hersh, W.; Leone, T. J.
1995-01-01
SAPHIRE is an experimental information retrieval system implemented to test new approaches to automated indexing and retrieval of medical documents. Due to limitations in its original concept-matching algorithm, a modified algorithm has been implemented which allows greater flexibility in partial matching and different word order within concepts. With the concomitant growth in client-server applications and the Internet in general, the new algorithm has been implemented as a server that can be accessed via other applications on the Internet. PMID:8563413
A reconsideration of negative ratings for network-based recommendation
NASA Astrophysics Data System (ADS)
Hu, Liang; Ren, Liang; Lin, Wenbin
2018-01-01
Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
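A rough sketch of bipartite network-based inference (mass diffusion) with a naive treatment of negative ratings is shown below. The choice to inject negative initial resource for disliked items, and the toy rating matrix, are assumptions for illustration, not the exact scheme proposed in the paper.

```python
# Minimal sketch: two-step mass diffusion on a user-item bipartite network,
# with negatively rated items contributing negative initial resource (assumption).
import numpy as np

# rows = users, cols = items; +1 like, -1 dislike, 0 unrated
R = np.array([[ 1,  1,  0, -1, 0],
              [ 1,  0,  1,  0, 0],
              [ 0,  1,  1,  0, 1],
              [-1,  0,  1,  1, 0]])
A = (R != 0).astype(float)               # bipartite adjacency (any rating = a link)
k_item = A.sum(axis=0)                   # item degrees
k_user = A.sum(axis=1)                   # user degrees

target = 0
f0 = R[target].astype(float)             # initial resource: +1 / -1 / 0
# Step 1: items -> users (each item spreads its resource evenly over its users)
user_res = A @ (f0 / np.maximum(k_item, 1))
# Step 2: users -> items
item_score = A.T @ (user_res / np.maximum(k_user, 1))
item_score[R[target] != 0] = -np.inf     # do not re-recommend already-rated items
print("recommended item:", int(np.argmax(item_score)))
```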
Revised motion estimation algorithm for PROPELLER MRI.
Pipe, James G; Gibbs, Wende N; Li, Zhiqiang; Karis, John P; Schar, Michael; Zwart, Nicholas R
2014-08-01
To introduce a new algorithm for estimating data shifts (used for both rotation and translation estimates) for motion-corrected PROPELLER MRI. The method estimates shifts for all blades jointly, emphasizing blade-pair correlations that are both strong and more robust to noise. The heads of three volunteers were scanned using a PROPELLER acquisition while they exhibited various amounts of motion. All data were reconstructed twice, using motion estimates from the original and new algorithm. Two radiologists independently and blindly compared 216 image pairs from these scans, ranking the left image as substantially better or worse than, slightly better or worse than, or equivalent to the right image. In the aggregate of 432 scores, the new method was judged substantially better than the old method 11 times, and was never judged substantially worse. The new algorithm compared favorably with the old in its ability to estimate bulk motion in a limited study of volunteer motion. A larger study of patients is planned for future work. Copyright © 2013 Wiley Periodicals, Inc.
Richter, H.G.; Gillespie, A.S. Jr.
1963-11-12
A flexible Geiger counter constructed from materials composed of vinyl chloride polymerized with plasticizers or co-polymers is presented. The counter can be made either by attaching short segments of corrugated plastic sleeving together, or by starting with a length of vacuum cleaner hose composed of the above materials. The anode is maintained substantially axial Within the sleeving or hose during tube flexing by means of polystyrene spacer disks or an easily assembled polyethylene flexible cage assembly. The cathode is a wire spiraled on the outside of the counter. The sleeving or hose is fitted with glass end-pieces or any other good insulator to maintain the anode wire taut and to admit a counting gas mixture into the counter. Having the cathode wire on the outside of the counter substantially eliminates the objectional sheath effect of prior counters and permits counting rates up to 300,000 counts per minute. (AEC)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-22
... appropriate? Procedural Matters Initial Regulatory Flexibility Analysis As required by the Regulatory... Flexibility Analysis (IRFA) of the possible significant economic impact on a substantial number of small... Reconfiguration Agreement with Sprint Nextel would enter TA-sponsored mediation. The reconfiguration of the 800...
Bi-dimensional null model analysis of presence-absence binary matrices.
Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J
2018-01-01
Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially in respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
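For concreteness, the sketch below shows the classical 2x2 checkerboard swap that the Tuning Peg generalizes, i.e., the fully constrained end of the continuum in which both row and column totals are preserved exactly; the two discrepancy-tuning parameters of the actual algorithm are not reproduced here.

```python
# Minimal sketch: fixed-fixed randomization of a binary matrix via checkerboard swaps.
import numpy as np

rng = np.random.default_rng(1)

def checkerboard_swap(M, n_swaps=10_000):
    """Randomize a binary matrix while keeping row and column marginals fixed."""
    M = M.copy()
    n_rows, n_cols = M.shape
    for _ in range(n_swaps):
        r = rng.choice(n_rows, size=2, replace=False)
        c = rng.choice(n_cols, size=2, replace=False)
        sub = M[np.ix_(r, c)]
        # swappable only if the 2x2 submatrix is a checkerboard
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            M[np.ix_(r, c)] = 1 - sub
    return M

M = (rng.random((8, 10)) < 0.4).astype(int)
M_null = checkerboard_swap(M)
assert (M.sum(axis=0) == M_null.sum(axis=0)).all()
assert (M.sum(axis=1) == M_null.sum(axis=1)).all()
print("marginals preserved; matrices differ in", int((M != M_null).sum()), "cells")
```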
The Structure-Mapping Engine: Algorithm and Examples.
ERIC Educational Resources Information Center
Falkenhainer, Brian; And Others
This description of the Structure-Mapping Engine (SME), a flexible, cognitive simulation program for studying analogical processing which is based on Gentner's Structure-Mapping theory of analogy, points out that the SME provides a "tool kit" for constructing matching algorithms consistent with this theory. This report provides: (1) a…
Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua
2017-01-01
Flexible job-shop scheduling problem (FJSP) is an NP-hard puzzle which inherits the job-shop scheduling problem (JSP) characteristics. This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives to minimize the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is first used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, and each of them changes with the iteration times. What is more, numerical simulations are carried out based on some published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with the results of some existing well-known algorithms.
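The NSGA-II core used by this method can be illustrated with the standard fast non-dominated sorting routine below (all objectives minimized). The FJSP decoding, the bee-guided initialization, and the two-stage mechanism of BEG-NSGA-II are not reproduced, and the example objective vectors are made up.

```python
# Minimal sketch: fast non-dominated sorting (the NSGA-II ranking step).
import numpy as np

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def fast_nondominated_sort(F):
    """Return a list of fronts (lists of indices) for objective matrix F (rows = solutions)."""
    n = len(F)
    S = [[] for _ in range(n)]        # solutions dominated by i
    n_dom = np.zeros(n, dtype=int)    # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(F[i], F[j]):
                S[i].append(j)
            elif dominates(F[j], F[i]):
                n_dom[i] += 1
        if n_dom[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# e.g. columns: makespan, max machine workload, total workload
F = np.array([[10, 5, 30], [9, 6, 31], [12, 4, 29], [11, 7, 33], [9, 5, 30]])
print(fast_nondominated_sort(F))
```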
Optimal Coordination of Building Loads and Energy Storage for Power Grid and End User Services
Hao, He; Wu, Di; Lian, Jianming; ...
2017-01-18
Demand response and energy storage play a profound role in the smart grid. The focus of this study is to evaluate the benefits of coordinating flexible loads and energy storage to provide power grid and end user services. We present a Generalized Battery Model (GBM) to describe the flexibility of building loads and energy storage. An optimization-based approach is proposed to characterize the parameters (power and energy limits) of the GBM for flexible building loads. We then develop optimal coordination algorithms to provide power grid and end user services such as energy arbitrage, frequency regulation, spinning reserve, as well as energy cost and demand charge reduction. Several case studies have been performed to demonstrate the efficacy of the GBM and coordination algorithms, and to evaluate the benefits of using their flexibility for power grid and end user services. We show that optimal coordination yields significant cost savings and revenue. Moreover, the best option for power grid services is to provide energy arbitrage and frequency regulation. Furthermore, when coordinating flexible loads with energy storage to provide end user services, it is recommended to consider demand charge in addition to time-of-use price in order to flatten the aggregate power profile.
Flexible Residential Smart Grid Simulation Framework
NASA Astrophysics Data System (ADS)
Xiang, Wang
Different scheduling and coordination algorithms controlling household appliances' operations can potentially lead to energy consumption reduction and/or load balancing in conjunction with different electricity pricing methods used in smart grid programs. In order to easily implement different algorithms and evaluate their efficiency against other ideas, a flexible simulation framework is desirable in both research and business fields. However, such a platform is currently lacking or underdeveloped. In this thesis, we provide a simulation framework to focus on demand side residential energy consumption coordination in response to different pricing methods. This simulation framework, equipped with an appliance consumption library using realistic values, aims to closely represent the average usage of different types of appliances. The simulation results of traditional usage yield close matching values compared to surveyed real life consumption records. Several sample coordination algorithms, pricing schemes, and communication scenarios are also implemented to illustrate the use of the simulation framework.
A comparison between IMSC, PI and MIMSC methods in controlling the vibration of flexible systems
NASA Technical Reports Server (NTRS)
Baz, A.; Poh, S.
1987-01-01
A comparative study is presented between three active control algorithms which have proven to be successful in controlling the vibrations of large flexible systems. These algorithms are: the Independent Modal Space Control (IMSC), the Pseudo-inverse (PI), and the Modified Independent Modal Space Control (MIMSC). Emphasis is placed on demonstrating the effectiveness of the MIMSC method in controlling the vibration of large systems with a small number of actuators by using an efficient time sharing strategy. Such a strategy favors the MIMSC over the IMSC method, which requires a large number of actuators to control an equal number of modes, and also over the PI method, which attempts to control a large number of modes with a smaller number of actuators through the use of an inexact statistical realization of a modal controller. Numerical examples are presented to illustrate the main features of the three algorithms and the merits of the MIMSC method.
Implementation of the block-Krylov boundary flexibility method of component synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.
1993-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
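A compact sketch of the recurrence described above: the seed block consists of static solutions for boundary loads, and each subsequent block is K^{-1} M applied to the previous one, orthogonalized against everything retained so far. The spring-chain model, choice of seed loads, and the Rayleigh-Ritz check are illustrative assumptions, not the MSC/NASTRAN implementation.

```python
# Minimal sketch of a block-Krylov Ritz-vector recurrence seeded by boundary loads.
import numpy as np

n, block, n_blocks = 12, 2, 3
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness of a spring chain
M = np.eye(n)                                          # lumped mass (identity for simplicity)
B = np.zeros((n, block)); B[0, 0] = B[-1, 1] = 1.0     # unit loads at the "boundary" DOFs

X = np.linalg.solve(K, B)                              # seed block: static flexibility shapes
basis = [np.linalg.qr(X)[0]]
for _ in range(n_blocks - 1):
    X = np.linalg.solve(K, M @ basis[-1])              # Krylov recurrence
    for Q in basis:                                    # orthogonalize against previous blocks
        X -= Q @ (Q.T @ X)
    basis.append(np.linalg.qr(X)[0])
V = np.hstack(basis)                                   # reduced-order (Ritz) basis

# Rayleigh-Ritz comparison: reduced eigenvalues vs. the lowest full-model eigenvalues
lam_red = np.sort(np.linalg.eigvalsh(V.T @ K @ V))     # V is orthonormal and M = I here
lam_full = np.sort(np.linalg.eigvalsh(K))[:len(lam_red)]
print(np.round(lam_red[:3], 4), np.round(lam_full[:3], 4))
```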
Knowledge of damage identification about tensegrities via flexibility disassembly
NASA Astrophysics Data System (ADS)
Jiang, Ge; Feng, Xiaodong; Du, Shigui
2017-12-01
Tensegrity structures, composed of continuous cables and discrete struts, carry tension and compression, respectively. In order to determine the damage extents of tensegrity structures, a new method for tensegrity structural damage identification is presented based on flexibility disassembly, which decomposes the structural flexibility matrix into a matrix representing the connectivity between degrees of freedom and a diagonal matrix containing the magnitude information. Step 1: calculate the perturbation flexibility; Step 2: compute the flexibility connectivity matrix and the perturbation flexibility parameters; Step 3: calculate the perturbation stiffness parameters. The efficiency of the proposed method is demonstrated by a numerical example of a pretensioned structure comprising 12 cables and 4 struts. Accurate identification of local damage depends on the availability of good measured data and an accurate, reasonable algorithm.
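A heavily simplified sketch of the flexibility-based identification steps, worked on a plain spring chain rather than a tensegrity: element stiffness changes are estimated from the perturbation flexibility using the first-order sensitivity dF/dk_i = -F (dK/dk_i) F. The model, damage pattern, and least-squares step are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch: recover element stiffness changes from the perturbation flexibility.
import numpy as np

n_el = 6                                   # chain of springs, left end fixed
def assemble(k):
    K = np.zeros((n_el, n_el))
    for i, ki in enumerate(k):             # element i connects DOF i-1 (or ground) and DOF i
        K[i, i] += ki
        if i > 0:
            K[i - 1, i - 1] += ki
            K[i - 1, i] -= ki
            K[i, i - 1] -= ki
    return K

k0 = np.ones(n_el)                         # intact stiffnesses
k_dam = k0.copy(); k_dam[2] *= 0.7         # 30% damage in element 3
F0 = np.linalg.inv(assemble(k0))
dF = np.linalg.inv(assemble(k_dam)) - F0   # "measured" perturbation flexibility

# Sensitivity of F to each element stiffness, evaluated at the intact state
S = []
for i in range(n_el):
    e = np.zeros(n_el); e[i] = 1.0
    dK_i = assemble(e)                     # dK/dk_i
    S.append((-F0 @ dK_i @ F0).ravel())
dk, *_ = np.linalg.lstsq(np.array(S).T, dF.ravel(), rcond=None)
print("estimated stiffness changes:", np.round(dk, 3))
```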
Flexible Electronics-Based Transformers for Extreme Environments
NASA Technical Reports Server (NTRS)
Quadrelli, Marco B.; Stoica, Adrian; Ingham, Michel; Thakur, Anubhav
2015-01-01
This paper provides a survey of the use of modular multifunctional systems, called Flexible Transformers, to facilitate the exploration of extreme and previously inaccessible environments. A novel dynamics and control model of a modular algorithm for assembly, folding, and unfolding of these innovative structural systems is also described, together with the control model and the simulation results.
A Program Structure for Event-Based Speech Synthesis by Rules within a Flexible Segmental Framework.
ERIC Educational Resources Information Center
Hill, David R.
1978-01-01
A program structure based on recently developed techniques for operating system simulation has the required flexibility for use as a speech synthesis algorithm research framework. This program makes synthesis possible with less rigid time and frequency-component structure than simpler schemes. It also meets real-time operation and memory-size…
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
Pin fin compliant heat sink with enhanced flexibility
Schultz, Mark D.
2018-04-10
Heat sinks and methods of using the same include a top and bottom plate, at least one of which has a plurality of pin contacts flexibly connected to one another, where the plurality of pin contacts have vertical and lateral flexibility with respect to one another; and pin slice layers, each having multiple pin slices, arranged vertically between the top and bottom plates such that the plurality of pin slices form substantially vertical pins connecting the top and bottom plates.
Toward Generalization of Iterative Small Molecule Synthesis
Lehmann, Jonathan W.; Blair, Daniel J.; Burke, Martin D.
2018-01-01
Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the “building block approach”, i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach. PMID:29696152
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
Xu, Lu-Hai; Ou, Qing-Dong; Li, Yan-Qing; Zhang, Yi-Bo; Zhao, Xin-Dong; Xiang, Heng-Yang; Chen, Jing-De; Zhou, Lei; Lee, Shuit-Tong; Tang, Jian-Xin
2016-01-26
Flexible organic light-emitting diodes (OLEDs) hold great promise for future bendable display and curved lighting applications. One key challenge of high-performance flexible OLEDs is to develop new flexible transparent conductive electrodes with superior mechanical, electrical, and optical properties. Herein, an effective nanostructured metal/dielectric composite electrode on a plastic substrate is reported by combining a quasi-random outcoupling structure for broadband and angle-independent light outcoupling of white emission with an ultrathin metal alloy film for optimum optical transparency, electrical conduction, and mechanical flexibility. The microcavity effect and surface plasmonic loss can be remarkably reduced in white flexible OLEDs, resulting in a substantial increase in the external quantum efficiency and power efficiency to 47.2% and 112.4 lm W(-1).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Im, Piljae; Munk, Jeffrey D; Gehl, Anthony C
2015-06-01
A research project “Evaluation of Variable Refrigerant Flow (VRF) Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory’s (ORNL’s) Flexible Research Platform” was performed to (1) install and validate the performance of Samsung VRF systems compared with the baseline rooftop unit (RTU) variable-air-volume (VAV) system and (2) evaluate the enhanced control algorithm for the VRF system on the two-story flexible research platform (FRP) in Oak Ridge, Tennessee. Based on the VRF system designed by Samsung and ORNL, the system was installed from February 18 through April 15, 2014. The final commissioning and system optimization were completed on June 2, 2014, and the initial test for system operation was started the following day, June 3, 2014. In addition, the enhanced control algorithm was implemented and updated on June 18. After a series of additional commissioning actions, the energy performance data from the RTU and the VRF system were monitored from July 7, 2014, through February 28, 2015. Data monitoring and analysis were performed for the cooling season and heating season separately, and the calibrated simulation model was developed and used to estimate the energy performance of the RTU and VRF systems. This final report includes discussion of the design and installation of the VRF system, the data monitoring and analysis plan, the cooling season and heating season data analysis, and the building energy modeling study.
Evaluation of the novel algorithm of flexible ligand docking with moveable target-protein atoms.
Sulimov, Alexey V; Zheltkov, Dmitry A; Oferkin, Igor V; Kutov, Danil C; Katkova, Ekaterina V; Tyrtyshnikov, Eugene E; Sulimov, Vladimir B
2017-01-01
We present the novel docking algorithm based on the Tensor Train decomposition and the TT-Cross global optimization. The algorithm is applied to the docking problem with flexible ligand and moveable protein atoms. The energy of the protein-ligand complex is calculated in the frame of the MMFF94 force field in vacuum. The grid of precalculated energy potentials of probe ligand atoms in the field of the target protein atoms is not used. The energy of the protein-ligand complex for any given configuration is computed directly with the MMFF94 force field without any fitting parameters. The conformation space of the system coordinates is formed by translations and rotations of the ligand as a whole, by the ligand torsions and also by Cartesian coordinates of the selected target protein atoms. Mobility of protein and ligand atoms is taken into account in the docking process simultaneously and equally. The algorithm is realized in the novel parallel docking SOL-P program and results of its performance for a set of 30 protein-ligand complexes are presented. Dependence of the docking positioning accuracy is investigated as a function of parameters of the docking algorithm and the number of protein moveable atoms. It is shown that mobility of the protein atoms improves docking positioning accuracy. The SOL-P program is able to perform docking of a flexible ligand into the active site of the target protein with several dozens of protein moveable atoms: the native crystallized ligand pose is correctly found as the global energy minimum in the search space with 157 dimensions using 4700 CPU ∗ h at the Lomonosov supercomputer.
An E-Learning Environment for Algorithmic: Toward an Active Construction of Skills
ERIC Educational Resources Information Center
Babori, Abdelghani; Fassi, Hicham Fihri; Hariri, Abdellah; Bideq, Mustapha
2016-01-01
Assimilating an algorithmic course is a persistent problem for many undergraduate students. The major problem faced by students is the lack of problem solving ability and flexibility. Therefore, students are generally passive, unmotivated and unable to mobilize all the acquired knowledge (loops, test, variables, etc.) to deal with new encountered…
Evaluation of on-line pulse control for vibration suppression in flexible spacecraft
NASA Technical Reports Server (NTRS)
Masri, Sami F.
1987-01-01
A numerical simulation was performed, by means of a large-scale finite element code capable of handling large deformations and/or nonlinear behavior, to investigate the suitability of the nonlinear pulse-control algorithm to suppress the vibrations induced in the Spacecraft Control Laboratory Experiment (SCOLE) components under realistic maneuvers. Among the topics investigated were the effects of various control parameters on the efficiency and robustness of the vibration control algorithm. Advanced nonlinear control techniques were applied to an idealized model of some of the SCOLE components to develop an efficient algorithm to determine the optimal locations of point actuators, considering the hardware on the SCOLE project as distributed in nature. The control was obtained from a quadratic optimization criterion, given in terms of the state variables of the distributed system. An experimental investigation was performed on a model flexible structure resembling the essential features of the SCOLE components, and electrodynamic and electrohydraulic actuators were used to investigate the applicability of the control algorithm with such devices in addition to mass-ejection pulse generators using compressed air.
NASA Technical Reports Server (NTRS)
Thau, F. E.; Montgomery, R. C.
1980-01-01
Techniques developed for the control of aircraft under changing operating conditions are used to develop a learning control system structure for a multi-configuration, flexible space vehicle. A configuration identification subsystem that is to be used with a learning algorithm and a memory and control process subsystem is developed. Adaptive gain adjustments can be achieved by this learning approach without prestoring of large blocks of parameter data and without dither signal inputs which will be suppressed during operations for which they are not compatible. The Space Shuttle Solar Electric Propulsion (SEP) experiment is used as a sample problem for the testing of adaptive/learning control system algorithms.
Process Materialization Using Templates and Rules to Design Flexible Process Models
NASA Astrophysics Data System (ADS)
Kumar, Akhil; Yao, Wen
The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data, and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources and data, as well as makes it easier to accommodate changes to business policy.
Ku-band antenna acquisition and tracking performance study, volume 4
NASA Technical Reports Server (NTRS)
Huang, T. C.; Lindsey, W. C.
1977-01-01
The results pertaining to the tradeoff analysis and performance of the Ku-band shuttle antenna pointing and signal acquisition system are presented. The square, hexagonal and spiral antenna trajectories were investigated assuming the TDRS postulated uncertainty region and a flexible statistical model for the location of the TDRS within the uncertainty volume. The scanning trajectories, shuttle/TDRS signal parameters and dynamics, and three signal acquisition algorithms were integrated into a hardware simulation. The hardware simulation is quite flexible in that it allows for the evaluation of signal acquisition performance for an arbitrary (programmable) antenna pattern, a large range of C/N0 values, various TDRS/shuttle a priori uncertainty distributions, and three distinct signal search algorithms.
A new fast algorithm for computing complex number-theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix fast Fourier transformation (FFT) algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
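In the spirit of the two records above, the sketch below computes an exact circular convolution with a (naive, O(n^2)) complex number-theoretic transform over GF(q^2) for the Mersenne prime q = 127. The high-radix FFT structure that gives the actual algorithm its speed is not reproduced, and the toy length n = 8 is an arbitrary choice.

```python
# Minimal sketch: complex number-theoretic transform over GF(q^2), q a Mersenne prime,
# used for an exact circular convolution (naive O(n^2) transform, not a high-radix FFT).
import random

q = 127                                  # Mersenne prime 2^7 - 1; since q = 3 (mod 4),
n = 8                                    # GF(q^2) behaves like "complex numbers mod q"

def cmul(a, b):                          # (a0 + a1*i)(b0 + b1*i) mod q, with i^2 = -1
    return ((a[0]*b[0] - a[1]*b[1]) % q, (a[0]*b[1] + a[1]*b[0]) % q)

def cpow(a, e):
    r = (1, 0)
    while e:
        if e & 1:
            r = cmul(r, a)
        a = cmul(a, a)
        e >>= 1
    return r

# Find a primitive n-th root of unity in GF(q^2); the group order q^2 - 1 has a large
# power-of-two factor, which is exactly why Mersenne primes are convenient here.
random.seed(0)
while True:
    g = (random.randrange(q), random.randrange(q))
    if g == (0, 0):
        continue
    w = cpow(g, (q * q - 1) // n)
    if cpow(w, n // 2) != (1, 0):        # order exactly n
        break

def ntt(x, root):
    out = []
    for k in range(n):
        acc = (0, 0)
        for j, xj in enumerate(x):
            t = cmul(xj, cpow(root, j * k))
            acc = ((acc[0] + t[0]) % q, (acc[1] + t[1]) % q)
        out.append(acc)
    return out

w_inv = cpow(w, n - 1)                   # w^(n-1) = w^{-1}
n_inv = pow(n, -1, q)                    # inverse of n modulo q

a = [(i, 0) for i in range(n)]           # real-valued test sequences embedded in GF(q^2)
b = [(1, 0)] * n
A, B = ntt(a, w), ntt(b, w)
C = [cmul(x, y) for x, y in zip(A, B)]   # pointwise product = transform of the convolution
c = [((v[0] * n_inv) % q, (v[1] * n_inv) % q) for v in ntt(C, w_inv)]

# Check against a directly computed circular convolution (mod q)
direct = [sum(a[j][0] * b[(k - j) % n][0] for j in range(n)) % q for k in range(n)]
assert [v[0] for v in c] == direct and all(v[1] == 0 for v in c)
print("circular convolution via complex NTT:", direct)
```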
Large planar maneuvers for articulated flexible manipulators
NASA Technical Reports Server (NTRS)
Huang, Jen-Kuang; Yang, Li-Farn
1988-01-01
An articulated flexible manipulator carried on a translational cart is maneuvered by an active controller to perform certain position control tasks. The nonlinear dynamics of the articulated flexible manipulator are derived and a transformation matrix is formulated to localize the nonlinearities within the inertia matrix. Then a feedback linearization scheme is introduced to linearize the dynamic equations for controller design. Through a pole placement technique, a robust controller design is obtained by properly assigning a set of closed-loop desired eigenvalues to meet performance requirements. Numerical simulations for the articulated flexible manipulators are given to demonstrate the feasibility and effectiveness of the proposed position control algorithms.
Electrostrictive Graft Elastomers
NASA Technical Reports Server (NTRS)
Su, Ji (Inventor); Harrison, Joycelyn S. (Inventor); St.Clair, Terry L. (Inventor)
2003-01-01
An electrostrictive graft elastomer has a backbone molecule which is a non-crystallizable, flexible macromolecular chain and a grafted polymer forming polar graft moieties with backbone molecules. The polar graft moieties have been rotated by an applied electric field, e.g., into substantial polar alignment. The rotation is sustained until the electric field is removed. In another embodiment, a process for producing strain in an elastomer includes: (a) providing a graft elastomer having a backbone molecule which is a non-crystallizable, flexible macromolecular chain and a grafted polymer forming polar graft moieties with backbone molecules; and (b) applying an electric field to the graft elastomer to rotate the polar graft moieties, e.g., into substantial polar alignment.
Grid flexibility: The quiet revolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsieh, Eric; Anderson, Robert
The concept of flexibility describes the capability of the power system to maintain balance between generation and load under uncertainty. While the grid has historically incorporated flexibility-specific resources such as pumped hydro to complement nuclear generators, modern trends and the increased deployment of variable energy resources (VERs) are increasing the need for a transparent market value of flexibility. A review of analyses, docket filings, tariffs, and business practice manuals from the past several years finds substantial flexibility-related activity. These activities are categorized as market and financial structures; incorporation of new operations or technology; and legal or procedural reforms. The cumulative outcome of these incremental changes will be a major transformation to power systems that can rapidly adapt to new needs, technologies, and conditions.
Flexible configuration-interaction shell-model many-body solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.
BIGSTICK is a flexible, open-source configuration-interaction shell-model code for the many-fermion problem in a shell model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm one can compute transition probability distributions and decompose wave functions into components defined by group theory.
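A minimal sketch of the Lanczos ingredient mentioned above, applied to a small random symmetric matrix standing in for a shell-model Hamiltonian; BIGSTICK's own implementation (huge sparse bases, reorthogonalization, restarts) is of course far more elaborate.

```python
# Minimal sketch: Lanczos tridiagonalization and its lowest Ritz value.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 200)); A = (A + A.T) / 2     # stand-in "Hamiltonian"

m = 30                                                 # Krylov dimension
alpha, beta = np.zeros(m), np.zeros(m)
v_prev, v = np.zeros(len(A)), rng.normal(size=len(A))
v /= np.linalg.norm(v)
for j in range(m):
    w = A @ v
    alpha[j] = v @ w
    w -= alpha[j] * v + (beta[j - 1] * v_prev if j > 0 else 0)
    beta[j] = np.linalg.norm(w)
    v_prev, v = v, w / beta[j]

T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
print("Lanczos estimate of the ground state:", np.linalg.eigvalsh(T)[0])
print("exact lowest eigenvalue:             ", np.linalg.eigvalsh(A)[0])
```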
Accounting for receptor flexibility and enhanced sampling methods in computer-aided drug design.
Sinko, William; Lindert, Steffen; McCammon, J Andrew
2013-01-01
Protein flexibility plays a major role in biomolecular recognition. In many cases, it is not obvious how molecular structure will change upon association with other molecules. In proteins, these changes can be major, with large deviations in overall backbone structure, or they can be more subtle as in a side-chain rotation. Either way the algorithms that predict the favorability of biomolecular association require relatively accurate predictions of the bound structure to give an accurate assessment of the energy involved in association. Here, we review a number of techniques that have been proposed to accommodate receptor flexibility in the simulation of small molecules binding to protein receptors. We investigate modifications to standard rigid receptor docking algorithms and also explore enhanced sampling techniques, and the combination of free energy calculations and enhanced sampling techniques. The understanding and allowance for receptor flexibility are helping to make computer simulations of ligand protein binding more accurate. These developments may help improve the efficiency of drug discovery and development. Efficiency will be essential as we begin to see personalized medicine tailored to individual patients, which means specific drugs are needed for each patient's genetic makeup. © 2012 John Wiley & Sons A/S.
NASA Astrophysics Data System (ADS)
Rahnamay Naeini, M.; Sadegh, M.; AghaKouchak, A.; Hsu, K. L.; Sorooshian, S.; Yang, T.
2017-12-01
Meta-heuristic optimization algorithms have gained a great deal of attention in a wide variety of fields. Simplicity and flexibility of these algorithms, along with their robustness, make them attractive tools for solving optimization problems. Different optimization methods, however, hold algorithm-specific strengths and limitations. The performance of each individual algorithm obeys the "No-Free-Lunch" theorem, which means a single algorithm cannot consistently outperform all others across all possible optimization problems. From the users' perspective, it is a tedious process to compare, validate, and select the best-performing algorithm for a specific problem or a set of test cases. In this study, we introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme, and allows users to select the most suitable algorithm tailored to the problem at hand. The concept of SC-SAHEL is to execute different EAs as separate parallel search cores, and let all participating EAs compete during the course of the search. The newly developed SC-SAHEL algorithm is designed to automatically select the best performing algorithm for the given optimization problem. This algorithm is highly effective in finding the global optimum for several strenuous benchmark test functions, and computationally efficient compared to individual EAs. We benchmark the proposed SC-SAHEL algorithm over 29 conceptual test functions, and two real-world case studies - one hydropower reservoir model and one hydrological model (SAC-SMA). Results show that the proposed framework outperforms individual EAs in an absolute majority of the test problems, and can provide results competitive with the fittest EA while yielding more comprehensive information during the search. The proposed framework is also flexible for merging additional EAs, boundary-handling techniques, and sampling schemes, and has good potential to be used in optimal operation and management of water-energy systems.
Multi-layer service function chaining scheduling based on auxiliary graph in IP over optical network
NASA Astrophysics Data System (ADS)
Li, Yixuan; Li, Hui; Liu, Yuze; Ji, Yuefeng
2017-10-01
Software Defined Optical Network (SDON) can be considered an extension of Software Defined Network (SDN) to optical networks. SDON offers a unified control plane and makes the optical network an intelligent transport network with dynamic flexibility and service adaptability. For this reason, a comprehensive optical transmission service, able to achieve service differentiation all the way down to the optical transport layer, can be provided to service function chaining (SFC). IP over optical network, as a promising networking architecture to interconnect data centers, is the most widely used scenario for SFC. In this paper, we offer a flexible and dynamic resource allocation method for diverse SFC service requests in the IP over optical network. To do so, we first propose the concept of the optical service function (OSF) and a multi-layer SFC model. OSF represents a comprehensive optical transmission service (e.g., multicast, low latency, quality of service, etc.), which can be achieved in the multi-layer SFC model; an OSF can also be considered a special SF. Secondly, we design a resource allocation algorithm, which we call the OSF-oriented optical service scheduling algorithm. It is able to address multi-layer SFC optical service scheduling and provide comprehensive optical transmission service, while meeting multiple optical transmission requirements (e.g., bandwidth, latency, availability). Moreover, the algorithm exploits the concept of an auxiliary graph. Finally, we compare our algorithm with the Baseline algorithm in simulation, and the results show that our algorithm outperforms the Baseline algorithm under low traffic load.
NASA Technical Reports Server (NTRS)
Shareef, N. H.; Amirouche, F. M. L.
1991-01-01
A computational algorithmic procedure is developed and implemented for the dynamic analysis of a multibody system with rigid/flexible interconnected bodies. The algorithm takes into consideration the large rotations/translations associated with the rigid-body degrees of freedom and the small elastic deformations associated with the flexibility of the bodies in the system. Versatile three-dimensional isoparametric brick elements are employed for the modeling of the geometric configurations of the bodies. The formulation of the recursive dynamical equations of motion is based on the recursive Kane's equations, strain energy concepts, and the techniques of component mode synthesis. In order to minimize CPU-intensive matrix multiplication operations and speed up the execution process, the concept of indexed arrays is utilized in the formulation of the equations of motion. A spin-up maneuver of a space robot with three flexible links carrying a solar panel is used as an illustrative example.
A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.
Lee, I; Sikora, R; Shaw, M J
1997-01-01
Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation which GAs use to solve the problem. We present a novel approach for solving two related problems - lot sizing and sequencing - concurrently using GAs. The essence of our approach lies in the concept of using a unified representation for the information about both the lot sizes and the sequence, and enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of applying the above approach to flexible flow line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow line scheduling.
Mechanistic design concepts for conventional flexible pavements
NASA Astrophysics Data System (ADS)
Elliott, R. P.; Thompson, M. R.
1985-02-01
Mechanistic design concepts for conventional flexible pavement (asphalt concrete (AC) surface plus granular base/subbase) for highways are proposed and validated. The procedure is based on ILLI-PAVE, a stress dependent finite element computer program, coupled with appropriate transfer functions. Two design criteria are considered: AC flexural fatigue cracking and subgrade rutting. Algorithms were developed relating pavement response parameters (stresses, strains, deflections) to AC thickness, AC moduli, granular layer thickness, and subgrade moduli. Extensive analyses of the AASHO Road Test flexible pavement data are presented supporting the validity of the proposed concepts.
An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1999-01-01
The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.
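For orientation, the sketch below implements the basic greedy minimum degree idea that MMD and its variants refine: repeatedly eliminate a minimum-degree vertex of the matrix graph and connect its neighbours into a clique, counting the fill edges created. Production implementations use far more sophisticated data structures; this toy version is only illustrative.

```python
# Minimal sketch: greedy minimum degree ordering with explicit fill tracking.
import networkx as nx

def minimum_degree_order(G):
    G = G.copy()
    order, fill = [], 0
    while G.number_of_nodes():
        v = min(G.nodes, key=G.degree)           # pick a minimum-degree vertex
        nbrs = list(G.neighbors(v))
        for i in range(len(nbrs)):               # eliminate v: clique its neighbours
            for j in range(i + 1, len(nbrs)):
                if not G.has_edge(nbrs[i], nbrs[j]):
                    G.add_edge(nbrs[i], nbrs[j])
                    fill += 1
        G.remove_node(v)
        order.append(v)
    return order, fill

# 2D grid graphs are a classic sparse-matrix test case.
G = nx.grid_2d_graph(6, 6)
order, fill = minimum_degree_order(G)
print("fill edges with minimum degree ordering:", fill)
```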
Accurate Ambient Noise Assessment Using Smartphones
Zamora, Willian; Calafate, Carlos T.; Cano, Juan-Carlos; Manzoni, Pietro
2017-01-01
Nowadays, smartphones have become ubiquitous and one of the main communication resources for human beings. Their widespread adoption was due to the huge technological progress and to the development of multiple useful applications. Their characteristics have also experienced a substantial improvement as they now integrate multiple sensors able to convert the smartphone into a flexible and multi-purpose sensing unit. The combined use of multiple smartphones endowed with several types of sensors gives the possibility to monitor a certain area with fine spatial and temporal granularity, a procedure typically known as crowdsensing. In this paper, we propose using smartphones as environmental noise-sensing units. For this purpose, we focus our study on the sound capture and processing procedure, analyzing the impact of different noise calculation algorithms, as well as in determining their accuracy when compared to a professional noise measurement unit. We analyze different candidate algorithms using different types of smartphones, and we study the most adequate time period and sampling strategy to optimize the data-gathering process. In addition, we perform an experimental study comparing our approach with the results obtained using a professional device. Experimental results show that, if the smartphone application is well tuned, it is possible to measure noise levels with an accuracy comparable to professional devices for the entire dynamic range typically supported by microphones embedded in smartphones, i.e., 35–95 dB. PMID:28430126
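A minimal sketch of the noise-calculation step discussed above: a frame of microphone samples is converted to an equivalent level in dB from its RMS value plus a single calibration offset. The synthetic signal, window length, and offset value are illustrative assumptions, not the calibration used in the study.

```python
# Minimal sketch: per-window equivalent sound level from raw samples.
import numpy as np

fs = 48_000                               # sample rate (Hz)
t = np.arange(fs) / fs                    # one second of audio
samples = 0.05 * np.sin(2 * np.pi * 1000 * t) + 0.005 * np.random.default_rng(3).normal(size=fs)

def leq_db(frame, calibration_offset=90.0):
    """Equivalent level over the frame: 20*log10(RMS) plus a calibration offset (dB)."""
    rms = np.sqrt(np.mean(frame.astype(float) ** 2))
    return 20 * np.log10(max(rms, 1e-12)) + calibration_offset

# One value per 125 ms window, as a phone app might report
windows = samples.reshape(-1, fs // 8)
print([round(leq_db(w), 1) for w in windows])
```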
Cobweb/3: A portable implementation
NASA Technical Reports Server (NTRS)
Mckusick, Kathleen; Thompson, Kevin
1990-01-01
An algorithm is examined for data clustering and incremental concept formation. An overview is given of the Cobweb/3 system and the algorithm on which it is based, as well as the practical details of obtaining and running the system code. The implementation features a flexible user interface which includes a graphical display of the concept hierarchies that the system constructs.
Derivative Free Gradient Projection Algorithms for Rotation
ERIC Educational Resources Information Center
Jennrich, Robert I.
2004-01-01
A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…
MIA-Clustering: a novel method for segmentation of paleontological material.
Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M
2018-01-01
Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.
Inverse dynamics of adaptive structures used as space cranes
NASA Technical Reports Server (NTRS)
Das, S. K.; Utku, S.; Wada, B. K.
1990-01-01
As a precursor to the real-time control of fast moving adaptive structures used as space cranes, a formulation is given for the flexibility induced motion relative to the nominal motion (i.e., the motion that assumes no flexibility) and for obtaining the open loop time varying driving forces. An algorithm is proposed for the computation of the relative motion and driving forces. The governing equations are given in matrix form with explicit functional dependencies. A simulator is developed to implement the algorithm on a digital computer. In the formulations, the distributed mass of the crane is lumped by two schemes, viz., 'trapezoidal' lumping and 'Simpson's rule' lumping. The effects of the mass lumping schemes are shown by simulator runs.
Robustness of Flexible Systems With Component-Level Uncertainties
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.
2000-01-01
Robustness of flexible systems in the presence of model uncertainties at the component level is considered. Specifically, an approach for formulating robustness of flexible systems in the presence of frequency and damping uncertainties at the component level is presented. The synthesis of the components is based on a modification of a controls-based algorithm for component mode synthesis. The formulation deals first with robustness of synthesized flexible systems. It is then extended to deal with global (non-synthesized) dynamic models with component-level uncertainties by projecting uncertainties from the component level to the system level. A numerical example involving a two-dimensional simulated docking problem is worked out to demonstrate the feasibility of the proposed approach.
Insertion algorithms for network model database management systems
NASA Astrophysics Data System (ADS)
Mamadolimov, Abdurashid; Khikmat, Saburov
2017-12-01
The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems. We develop a new sequential algorithm for the updating operation, and we also suggest a distributed version of the algorithm.
Implementing a GPU-based numerical algorithm for modelling dynamics of a high-speed train
NASA Astrophysics Data System (ADS)
Sytov, E. S.; Bratus, A. S.; Yurchenko, D.
2018-04-01
This paper discusses the initiative of implementing a GPU-based numerical algorithm for studying various phenomena associated with the dynamics of high-speed railway transport. The proposed numerical algorithm for calculating the critical speed of the bogie is based on the first Lyapunov number. The numerical algorithm is validated against analytical results derived for a simple model. A dynamic model of a carriage connected to a new dual-wheelset flexible bogie is studied for linear and dry friction damping. Numerical results obtained by the CPU, MPU and GPU approaches are compared and the appropriateness of these methods is discussed.
NASA Astrophysics Data System (ADS)
Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit
2018-03-01
A flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the performance of the FMS in its overall operations. To achieve low makespan and high throughput yield in FMS operations, it is imperative to integrate the production work center schedules with the AGV schedules. The production schedule for work centers is generated by applying the Giffler and Thompson algorithm under four kinds of hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling to reduce backtracking as well as the distance traveled by AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings from the investigations clearly indicate that the CSA yields the best results in comparison with the other methods applied from the literature.
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy because, for each problem, an increasing or decreasing inertia weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate.
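The abstract above does not spell out the FEIW formula, so the sketch below only illustrates the idea of a tunable exponential inertia-weight schedule inside a standard global-best PSO loop; the schedule form, the parameter names (w_start, w_end, alpha) and all numeric settings are assumptions for illustration, not the published strategy.

    import numpy as np

    def flexible_exponential_iw(t, t_max, w_start=0.9, w_end=0.4, alpha=2.0):
        # Assumed exponential schedule (illustrative, not the published FEIW formula):
        # equals w_start at t = 0 and w_end at t = t_max; alpha sets the curvature,
        # and swapping w_start/w_end gives an increasing schedule instead.
        frac = t / float(t_max)
        scale = (np.exp(-alpha * frac) - np.exp(-alpha)) / (1.0 - np.exp(-alpha))
        return w_end + (w_start - w_end) * scale

    def pso(f, dim=10, n_particles=30, iters=200, c1=2.0, c2=2.0, seed=0):
        # Plain global-best PSO using the schedule above for the inertia weight.
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_val = np.array([f(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        for t in range(iters):
            w = flexible_exponential_iw(t, iters)
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = np.array([f(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, float(pbest_val.min())

    best_x, best_val = pso(lambda z: float(np.sum(z ** 2)))  # sphere benchmark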
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm
Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy because, for each problem, an increasing or decreasing inertia weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate. PMID:27560945
NASA Astrophysics Data System (ADS)
Lv, Chen; Zhang, Junzhi; Li, Yutong
2014-11-01
Because of the damping and elastic properties of an electrified powertrain, the regenerative brake of an electric vehicle (EV) is very different from a conventional friction brake with respect to the system dynamics. The flexibility of an electric drivetrain would have a negative effect on the blended brake control performance. In this study, models of the powertrain system of an electric car equipped with an axle motor are developed. Based on these models, the transfer characteristics of the motor torque in the driveline and its effect on blended braking control performance are analysed. To further enhance a vehicle's brake performance and energy efficiency, blended braking control algorithms with compensation for the powertrain flexibility are proposed using an extended Kalman filter. These algorithms are simulated under normal deceleration braking. The results show that the brake performance and blended braking control accuracy of the vehicle are significantly enhanced by the newly proposed algorithms.
Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin
1994-01-01
The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in order of numerical importance, and thus a given code contains all lower-rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
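As a minimal illustration of the spectral decorrelation step described above, the sketch below applies a Karhunen-Loeve Transform (PCA over bands) to a multispectral cube before any per-component wavelet coding; it is not the EZW coder itself, and the synthetic 6-band cube and the function name spectral_klt are assumptions for illustration.

    import numpy as np

    def spectral_klt(cube):
        # Decorrelate the spectral dimension of a (bands, rows, cols) cube with the
        # image-dependent KLT: project onto eigenvectors of the inter-band covariance.
        bands, rows, cols = cube.shape
        X = cube.reshape(bands, -1).astype(np.float64)
        mean = X.mean(axis=1, keepdims=True)
        Xc = X - mean
        cov = Xc @ Xc.T / Xc.shape[1]
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]            # strongest components first
        basis = eigvecs[:, order]
        components = (basis.T @ Xc).reshape(bands, rows, cols)
        return components, basis, mean

    # Each decorrelated component image would then be passed to a zerotree wavelet
    # coder; reconstruction applies basis @ components.reshape(bands, -1) + mean.
    cube = np.random.rand(6, 64, 64)
    components, basis, mean = spectral_klt(cube)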
NASA Astrophysics Data System (ADS)
Chen, Chung-Hao; Yao, Yi; Chang, Hong; Koschan, Andreas; Abidi, Mongi
2013-06-01
Due to increasing security concerns, a complete security system should consist of two major components, a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance the recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under the given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used for a new customized sensor design associated with given illuminations for improved face recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high resolution imagery for real-time behavior understanding, research in automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require prior knowledge of the intrinsic parameters of the PTZ camera to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on knowledge of the intrinsic parameters of the PTZ cameras and their relative positions. Experimental results demonstrate that our proposed algorithm offers substantially reduced computational complexity and improved flexibility at the cost of slightly decreased pixel accuracy as compared to Chen and Wang's method [18].
Generation of Referring Expressions: Assessing the Incremental Algorithm
ERIC Educational Resources Information Center
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard
2012-01-01
A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
NASA Astrophysics Data System (ADS)
Qiu, Zhi-cheng; Wang, Bin; Zhang, Xian-min; Han, Jian-da
2013-04-01
This study presents a novel translating piezoelectric flexible manipulator driven by a rodless cylinder. Simultaneous positioning control and vibration suppression of the flexible manipulator are accomplished by using a hybrid driving scheme composed of the pneumatic cylinder and a piezoelectric actuator. A pulse code modulation (PCM) method is utilized for the cylinder. First, the system dynamics model is derived, and its standard multiple input multiple output (MIMO) state-space representation is provided. Second, a composite proportional-derivative (PD) control algorithm and a direct adaptive fuzzy control method are designed for the MIMO system. In addition, a time-delay compensation algorithm and bandstop and low-pass filters are utilized to account for control hysteresis and the high-frequency modal vibration caused by the long stroke of the cylinder, gas compression and nonlinear factors of the pneumatic system. The convergence of the closed-loop system is analyzed. Finally, an experimental apparatus is constructed and experiments are conducted. The effectiveness of the designed controllers and the hybrid driving scheme is verified through simulation and experimental comparison studies. The numerical simulation and experimental results demonstrate that the proposed system scheme of employing the pneumatic drive and piezoelectric actuator can suppress the vibration and achieve the desired positioning location simultaneously. Furthermore, the adopted adaptive fuzzy control algorithms can significantly enhance the control performance.
SOFIA: a flexible source finder for 3D spectral line data
NASA Astrophysics Data System (ADS)
Serra, Paolo; Westmeier, Tobias; Giese, Nadine; Jurek, Russell; Flöer, Lars; Popping, Attila; Winkel, Benjamin; van der Hulst, Thijs; Meyer, Martin; Koribalski, Bärbel S.; Staveley-Smith, Lister; Courtois, Hélène
2015-04-01
We introduce SOFIA, a flexible software application for the detection and parametrization of sources in 3D spectral line data sets. SOFIA combines for the first time in a single piece of software a set of new source-finding and parametrization algorithms developed on the way to future H I surveys with ASKAP (WALLABY, DINGO) and APERTIF. It is designed to enable the general use of these new algorithms by the community on a broad range of data sets. The key advantages of SOFIA are the ability to: search for line emission on multiple scales to detect 3D sources in a complete and reliable way, taking into account noise level variations and the presence of artefacts in a data cube; estimate the reliability of individual detections; look for signal in arbitrarily large data cubes using a catalogue of 3D coordinates as a prior; provide a wide range of source parameters and output products which facilitate further analysis by the user. We highlight the modularity of SOFIA, which makes it a flexible package allowing users to select and apply only the algorithms useful for their data and science questions. This modularity makes it also possible to easily expand SOFIA in order to include additional methods as they become available. The full SOFIA distribution, including a dedicated graphical user interface, is publicly available for download.
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to a poor compute-to-communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.
The InSAR Scientific Computing Environment
NASA Technical Reports Server (NTRS)
Rosen, Paul A.; Gurrola, Eric; Sacco, Gian Franco; Zebker, Howard
2012-01-01
We have developed a flexible and extensible Interferometric SAR (InSAR) Scientific Computing Environment (ISCE) for geodetic image processing. ISCE was designed from the ground up as a geophysics community tool for generating stacks of interferograms that lend themselves to various forms of time-series analysis, with attention paid to accuracy, extensibility, and modularity. The framework is Python-based, with code elements rigorously componentized by separating input/output operations from the processing engines. This allows greater flexibility and extensibility in the data models, and creates algorithmic code that is less susceptible to unnecessary modification when new data types and sensors are available. In addition, the components support provenance and checkpointing to facilitate reprocessing and algorithm exploration. The algorithms, based on legacy processing codes, have been adapted to assume a common reference track approach for all images acquired from nearby orbits, simplifying and systematizing the geometry for time-series analysis. The framework is designed to easily allow user contributions, and is distributed for free use by researchers. ISCE can process data from the ALOS, ERS, EnviSAT, Cosmo-SkyMed, RadarSAT-1, RadarSAT-2, and TerraSAR-X platforms, starting from Level-0 or Level 1 as provided from the data source, and going as far as Level 3 geocoded deformation products. With its flexible design, it can be extended with raw/metadata parsers to enable it to work with radar data from other platforms.
NASA Astrophysics Data System (ADS)
Lee, Kyu J.; Kunii, T. L.; Noma, T.
1993-01-01
In this paper, we propose a syntactic pattern recognition method for non-schematic drawings, based on a new attributed graph grammar with flexible embedding. In our graph grammar, the embedding rule permits the nodes of a guest graph to be arbitrarily connected with the nodes of a host graph. The ambiguity caused by this flexible embedding is controlled with the evaluation of synthesized attributes and the check of context sensitivity. To integrate parsing with the synthesized attribute evaluation and the context sensitivity check, we also develop a bottom up parsing algorithm.
Flexible Modeling of Latent Task Structures in Multitask Learning
2012-06-26
Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks. However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning problem.
NASA Technical Reports Server (NTRS)
Schoppers, Marcel
1994-01-01
The design of a flexible, real-time software architecture for trajectory planning and automatic control of redundant manipulators is described. Emphasis is placed on a technique of designing control systems that are both flexible and robust yet have good real-time performance. The solution presented involves an artificial intelligence algorithm that dynamically reprograms the real-time control system while planning system behavior.
Elschner, Robert; Frey, Felix; Meuer, Christian; Fischer, Johannes Karl; Alreesh, Saleem; Schmidt-Langhorst, Carsten; Molle, Lutz; Tanimura, Takahito; Schubert, Colja
2012-12-17
We experimentally demonstrate the use of data-aided digital signal processing for format-flexible coherent reception of different 28-GBd PDM and 4D modulated signals in WDM transmission experiments over up to 7680 km SSMF by using the same resource-efficient digital signal processing algorithms for the equalization of all formats. Stable and regular performance in the nonlinear transmission regime is confirmed.
Fatigue Crack Length Sizing Using a Novel Flexible Eddy Current Sensor Array.
Xie, Ruifang; Chen, Dixiang; Pan, Mengchun; Tian, Wugang; Wu, Xuezhong; Zhou, Weihong; Tang, Ying
2015-12-21
An eddy current probe that is flexible, array-typed, highly sensitive and capable of quantitative inspection is one practical requirement in nondestructive testing and also a research hotspot. A novel flexible planar eddy current sensor array for the inspection of microcracks in critical parts of airplanes is developed in this paper. Both exciting and sensing coils are etched on polyimide films using a flexible printed circuit board technique, thus conforming the sensor to complex geometric structures. In order to serve the needs of condition-based maintenance (CBM), the proposed sensor array is comprised of 64 elements. Its spatial resolution is only 0.8 mm, and it is not only sensitive to shallow microcracks, but also capable of sizing the length of fatigue cracks. The details and advantages of our sensor design are introduced. The working principle and the crack responses are analyzed by finite element simulation, with which a crack length sizing algorithm is proposed. Experiments based on standard specimens are implemented to verify the validity of our simulation and the efficiency of the crack length sizing algorithm. Experimental results show that the sensor array is sensitive to microcracks, and is capable of crack length sizing with an accuracy within ±0.2 mm.
Fatigue Crack Length Sizing Using a Novel Flexible Eddy Current Sensor Array
Xie, Ruifang; Chen, Dixiang; Pan, Mengchun; Tian, Wugang; Wu, Xuezhong; Zhou, Weihong; Tang, Ying
2015-01-01
An eddy current probe that is flexible, array-typed, highly sensitive and capable of quantitative inspection is one practical requirement in nondestructive testing and also a research hotspot. A novel flexible planar eddy current sensor array for the inspection of microcracks in critical parts of airplanes is developed in this paper. Both exciting and sensing coils are etched on polyimide films using a flexible printed circuit board technique, thus conforming the sensor to complex geometric structures. In order to serve the needs of condition-based maintenance (CBM), the proposed sensor array is comprised of 64 elements. Its spatial resolution is only 0.8 mm, and it is not only sensitive to shallow microcracks, but also capable of sizing the length of fatigue cracks. The details and advantages of our sensor design are introduced. The working principle and the crack responses are analyzed by finite element simulation, with which a crack length sizing algorithm is proposed. Experiments based on standard specimens are implemented to verify the validity of our simulation and the efficiency of the crack length sizing algorithm. Experimental results show that the sensor array is sensitive to microcracks, and is capable of crack length sizing with an accuracy within ±0.2 mm. PMID:26703608
Transient dynamics of a flexible rotor with squeeze film dampers
NASA Technical Reports Server (NTRS)
Buono, D. F.; Schlitzer, L. D.; Hall, R. G., III; Hibner, D. H.
1978-01-01
A series of simulated blade loss tests are reported on a test rotor designed to operate above its second bending critical speed. A series of analyses were performed which predicted the transient behavior of the test rig for each of the blade loss tests. The scope of the program included the investigation of transient rotor dynamics of a flexible rotor system, similar to modern flexible jet engine rotors, both with and without squeeze film dampers. The results substantiate the effectiveness of squeeze film dampers and document the ability of available analytical methods to predict their effectiveness and behavior.
ERIC Educational Resources Information Center
Li, Jennifer
2012-01-01
In 2009-2010, California made substantial education budget cuts at the same time that it removed its spending requirements from $4.5 billion of state money. This gave districts the flexibility to use the funds in any manner approved by the local school board. Researchers found that most of the formerly earmarked money was moved into general funds…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Harding, Lee T.
Isotope identification algorithms that are contained in the Gamma Detector Response and Analysis Software (GADRAS) can be used for real-time stationary measurement and search applications on platforms operating under Linux or Android operating systems. Since the background radiation can vary considerably due to variations in naturally-occurring radioactive materials (NORM), spectral algorithms can be substantially more sensitive to threat materials than search algorithms based strictly on count rate. Specific isotopes of interest can be designated for the search algorithm, which permits suppression of alarms for non-threatening sources, such as medical radionuclides. The same isotope identification algorithms that are used for search applications can also be used to process static measurements. The isotope identification algorithms follow the same protocols as those used by the Windows version of GADRAS, so files that are created under the Windows interface can be copied directly to processors on fielded sensors. The analysis algorithms contain provisions for gain adjustment and energy linearization, which enables direct processing of spectra as they are recorded by multichannel analyzers. Gain compensation is performed by utilizing photopeaks in background spectra. Incorporation of this energy calibration task into the analysis algorithm also eliminates one of the more difficult challenges associated with development of radiation detection equipment.
Neural self-tuning adaptive control of non-minimum phase system
NASA Technical Reports Server (NTRS)
Ho, Long T.; Bialasiewicz, Jan T.; Ho, Hai T.
1993-01-01
The motivation for this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most existing neural network control algorithms are based on the direct method and exhibit highly sensitive, if not unstable, closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm is developed and applied to this problem, showing promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady state response.
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPPs) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgradability attract developers' interest in moving video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods such as lookup tables are adopted to reduce the computational complexity. Simulation results showed that these ideas could not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
Bach, Peter M; McCarthy, David T; Urich, Christian; Sitzenfrei, Robert; Kleidorfer, Manfred; Rauch, Wolfgang; Deletic, Ana
2013-01-01
With global change bringing about greater challenges for the resilient planning and management of urban water infrastructure, research has been invested in the development of a strategic planning tool, DAnCE4Water. The tool models how urban and societal changes impact the development of centralised and decentralised (distributed) water infrastructure. An algorithm for rigorous assessment of suitable decentralised stormwater management options in the model is presented and tested on a local Melbourne catchment. Following detailed spatial representation algorithms (defined by planning rules), the model assesses numerous stormwater options to meet water quality targets at a variety of spatial scales. A multi-criteria assessment algorithm is used to find top-ranking solutions (which meet a specific treatment performance for a user-defined percentage of catchment imperviousness). A toolbox of five stormwater technologies (infiltration systems, surface wetlands, bioretention systems, ponds and swales) is featured. Parameters that set the algorithm's flexibility to develop possible management options are assessed and evaluated. Results are expressed in terms of 'utilisation', which characterises the frequency of use of different technologies across the top-ranking options (bioretention being the most versatile). Initial results highlight the importance of selecting a suitable spatial resolution and providing the model with enough flexibility for coming up with different technology combinations. The generic nature of the model enables its application to other urban areas (e.g. different catchments, local municipal regions or entire cities).
A flexible algorithm for calculating pair interactions on SIMD architectures
NASA Astrophysics Data System (ADS)
Páll, Szilárd; Hess, Berk
2013-12-01
Calculating interactions or correlations between pairs of particles is typically the most time-consuming task in particle simulation or correlation analysis. Straightforward implementations using a double loop over particle pairs have traditionally worked well, especially since compilers usually do a good job of unrolling the inner loop. In order to reach high performance on modern CPU and accelerator architectures, single-instruction multiple-data (SIMD) parallelization has become essential. Avoiding memory bottlenecks is also increasingly important and requires reducing the ratio of memory to arithmetic operations. Moreover, when pairs only interact within a certain cut-off distance, good SIMD utilization can only be achieved by reordering input and output data, which quickly becomes a limiting factor. Here we present an algorithm for SIMD parallelization based on grouping a fixed number of particles, e.g. 2, 4, or 8, into spatial clusters. Calculating all interactions between particles in a pair of such clusters improves data reuse compared to the traditional scheme and results in a more efficient SIMD parallelization. Adjusting the cluster size allows the algorithm to map to SIMD units of various widths. This flexibility not only enables fast and efficient implementation on current CPUs and accelerator architectures like GPUs or Intel MIC, but it also makes the algorithm future-proof. We present the algorithm with an application to molecular dynamics simulations, where we can also make use of the effective buffering the method introduces.
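To make the cluster-pair idea above concrete, here is a schematic NumPy sketch that groups particles into clusters of a fixed size and evaluates all pairs between two clusters at once, with broadcasting standing in for SIMD lanes; the cluster size, cutoff, Lennard-Jones parameters and the crude sort along x are illustrative assumptions, not the implementation described in the paper.

    import numpy as np

    def cluster_pair_lj(positions, cluster_size=4, cutoff=1.0, eps=1.0, sigma=0.3):
        # Schematic cluster-pair energy evaluation: particles are grouped into spatial
        # clusters of fixed size and all cluster_size x cluster_size pairs between two
        # clusters are evaluated together (broadcasting stands in for SIMD lanes).
        n = len(positions)
        order = np.argsort(positions[:, 0])            # crude spatial sort along x
        pos = positions[order]
        pad = (-n) % cluster_size
        if pad:
            filler = np.full((pad, 3), 1.0e6)          # padded lanes fall outside the cutoff
            pos = np.vstack([pos, filler])
        clusters = pos.reshape(-1, cluster_size, 3)
        energy = 0.0
        for i in range(len(clusters)):                 # intra-cluster pairs omitted for brevity
            for j in range(i + 1, len(clusters)):
                d = clusters[i][:, None, :] - clusters[j][None, :, :]
                r2 = np.sum(d * d, axis=-1)
                mask = r2 < cutoff ** 2
                if not mask.any():
                    continue
                inv6 = (sigma ** 2 / r2[mask]) ** 3
                energy += float(np.sum(4.0 * eps * (inv6 ** 2 - inv6)))
        return energy

    energy = cluster_pair_lj(np.random.rand(256, 3) * 3.0)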
Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Sanchez, Javier; Revie, Crawford W.
2013-01-01
Background: Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection; and the development of efficient algorithms that can detect potential outbreak signals. Methods: This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submissions counts. These are: the Shewhart control charts designed to detect sudden spikes in counts; the EWMA control charts developed to detect slow increasing outbreaks; and the Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the data stream monitored. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. Results: The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased in simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals, is described. Conclusion: The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes. PMID:24349216
Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Sanchez, Javier; Revie, Crawford W
2013-01-01
Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection; and the development of efficient algorithms that can detect potential outbreak signals. This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submissions counts. These are: the Shewhart control charts designed to detect sudden spikes in counts; the EWMA control charts developed to detect slow increasing outbreaks; and the Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the data stream monitored. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased in simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals is described. The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes.
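A minimal sketch of the complementary-alarm idea described above follows; it implements only the Shewhart and EWMA detectors on a single count series and sums their alarm flags into a score, so the baseline window, thresholds, scoring scheme and the omission of the Holt-Winters detector are simplifying assumptions rather than the published system.

    import numpy as np

    def shewhart_alarm(series, window=56, k=3.0):
        # Flag a sudden spike: the latest count exceeds the baseline mean + k std devs.
        baseline = np.asarray(series[-window - 1:-1], dtype=float)
        return series[-1] > baseline.mean() + k * baseline.std(ddof=1)

    def ewma_alarm(series, window=56, lam=0.2, k=2.5):
        # Flag a slow increase: the EWMA of recent counts drifts above its control limit.
        baseline = np.asarray(series[-window - 1:-1], dtype=float)
        mu, sigma = baseline.mean(), baseline.std(ddof=1)
        z = mu
        for x in series[-window:]:
            z = lam * x + (1.0 - lam) * z
        limit = mu + k * sigma * np.sqrt(lam / (2.0 - lam))
        return z > limit

    def combined_score(series):
        # Complementary use of the detectors: each contributes one point to the score;
        # a third detector (e.g. Holt-Winters) could be added the same way.
        return int(shewhart_alarm(series)) + int(ewma_alarm(series))

    counts = list(np.random.poisson(5, 120)) + [18]    # synthetic spike on the last day
    raise_alarm = combined_score(counts) >= 1          # alarm threshold is user-tunable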
Flexible Multi-agent Algorithm for Distributed Decision Making
2015-01-01
How, J. P. Consensus-Based Auction Approaches for Decentralized Task Assignment. Proceedings of the AIAA Guidance, Navigation, and Control...G.; Kim, Y. Market-based Decentralized Task Assignment for Cooperative UAV Mission Including Rendezvous. Proceedings of the AIAA Guidance...scalable and adaptable to a variety of specific mission tasks. Additionally, the algorithm could easily be adapted for use on land or sea-based systems
Huang, Ping-Tzan; Jong, Tai-Lang; Li, Chien-Ming; Chen, Wei-Ling; Lin, Chia-Hung
2017-08-01
Blood leakage and blood loss are serious complications during hemodialysis. Hemodialysis survey reports show that these life-threatening events still occur, drawing the attention of nephrology nurses and of patients themselves. When the venous needle and blood line are disconnected, it takes only a few minutes for an adult patient to lose over 40% of his/her blood, an amount of blood loss sufficient to cause the patient to die. Therefore, we propose integrating a flexible sensor and a self-organizing algorithm to design a cloud computing-based warning device for blood leakage detection. The flexible sensor is fabricated via a screen-printing technique using metallic materials on a soft substrate in an array configuration. The self-organizing algorithm constructs a virtual direct current grid-based alarm unit in an embedded system. This warning device is employed to identify blood leakage levels via a wireless network and cloud computing. It has been validated experimentally, and the experimental results suggest specifications for its commercial designs.
Real-time dynamics and control strategies for space operations of flexible structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Alvin, K. F.; Alexander, S.
1993-01-01
This project (NAG9-574) was meant to be a three-year research project. However, due to NASA's reorganizations during 1992, the project was funded only for one year. Accordingly, every effort was made to prepare the present final report as if the project had been planned for a one-year duration. Originally, during the first year we were planning to accomplish the following: we were to start with a three-dimensional flexible manipulator beam with articulated joints and with a linear control-based controller applied at the joints; using this simple example, we were to design the software systems requirements for real-time processing, introduce the streamlining of various computational algorithms, perform the necessary reorganization of the partitioned simulation procedures, and assess the potential speed-up realization of the solution process by parallel computations. The three reports included as part of the final report address: the streamlining of various computational algorithms; the necessary reorganization of the partitioned simulation procedures, in particular the observer models; and an initial attempt at reconfiguring the flexible space structures.
The dynamics and control of large flexible space structures-V
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Reddy, A. S. S. R.; Diarra, C. M.; Kumar, V. K.
1982-01-01
A general survey of the progress made in the areas of mathematical modelling of the system dynamics, structural analysis, development of control algorithms, and simulation of environmental disturbances is presented. Graph theory techniques are employed to examine the effects of inherent damping associated with LSST systems on the number and locations of the required control actuators. A mathematical model of the forces and moments induced on a flexible orbiting beam due to solar radiation pressure is developed, and typical steady state open loop responses are obtained for the case when rotations and vibrations are limited to occur within the orbit plane. A preliminary controls analysis based on a truncated (13-mode) finite element model of the 122 m Hoop/Column antenna indicates that a minimum of six appropriately placed actuators is required for controllability. An algorithm to evaluate the coefficients which describe coupling between the rigid rotational and flexible modes and also intramodal coupling was developed, and numerical evaluation based on the finite element model of the Hoop/Column system is currently in progress.
Attitude tracking control of flexible spacecraft with large amplitude slosh
NASA Astrophysics Data System (ADS)
Deng, Mingle; Yue, Baozeng
2017-12-01
This paper is focused on attitude tracking control of a spacecraft that is equipped with flexible appendage and partially filled liquid propellant tank. The large amplitude liquid slosh is included by using a moving pulsating ball model that is further improved to estimate the settling location of liquid in microgravity or a zero-g environment. The flexible appendage is modelled as a three-dimensional Bernoulli-Euler beam, and the assumed modal method is employed. A hybrid controller that combines sliding mode control with an adaptive algorithm is designed for spacecraft to perform attitude tracking. The proposed controller has proved to be asymptotically stable. A nonlinear model for the overall coupled system including spacecraft attitude dynamics, liquid slosh, structural vibration and control action is established. Numerical simulation results are presented to show the dynamic behaviors of the coupled system and to verify the effectiveness of the control approach when the spacecraft undergoes the disturbance produced by large amplitude slosh and appendage vibration. Lastly, the designed adaptive algorithm is found to be effective to improve the precision of attitude tracking.
The upwind control volume scheme for unstructured triangular grids
NASA Technical Reports Server (NTRS)
Giles, Michael; Anderson, W. Kyle; Roberts, Thomas W.
1989-01-01
A new algorithm for the numerical solution of the Euler equations is presented. This algorithm is particularly suited to the use of unstructured triangular meshes, allowing geometric flexibility. Solutions are second-order accurate in the steady state. Implementation of the algorithm requires minimal grid connectivity information, resulting in modest storage requirements, and should enhance the implementation of the scheme on massively parallel computers. A novel form of upwind differencing is developed, and is shown to yield sharp resolution of shocks. Two new artificial viscosity models are introduced that enhance the performance of the new scheme. Numerical results for transonic airfoil flows are presented, which demonstrate the performance of the algorithm.
Online clustering algorithms for radar emitter classification.
Liu, Jun; Lee, Jim P Y; Senior; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max
2005-08-01
Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
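Neither algorithm is specified in detail in the abstract above, so the sketch below only illustrates the general flavour of online, competitive-learning style clustering of pulse descriptors: a sample updates its nearest centre if it is close enough and otherwise seeds a new cluster. The class name, distance threshold and learning rate are assumptions for illustration, not the MDL-based or competitive-learning algorithms from the paper.

    import numpy as np

    class OnlineLeaderClustering:
        # Schematic online clustering: each new pulse descriptor is assigned to the
        # nearest existing cluster if it lies within `radius`, otherwise it seeds a
        # new cluster; the winning centre then moves toward the sample.
        def __init__(self, radius=1.0, lr=0.1):
            self.radius, self.lr = radius, lr
            self.centers = []

        def update(self, x):
            x = np.asarray(x, dtype=float)
            if not self.centers:
                self.centers.append(x.copy())
                return 0
            dists = [np.linalg.norm(x - c) for c in self.centers]
            k = int(np.argmin(dists))
            if dists[k] <= self.radius:
                self.centers[k] += self.lr * (x - self.centers[k])
                return k
            self.centers.append(x.copy())
            return len(self.centers) - 1

    clusterer = OnlineLeaderClustering(radius=2.0)
    labels = [clusterer.update(p) for p in np.random.randn(100, 5)]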
Modified kernel-based nonlinear feature extraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J.; Perkins, S. J.; Theiler, J. P.
2002-01-01
Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation that is inherent in these algorithms -- the maximal number of features extracted by them is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of those KFE algorithms (MKFE). This algorithm is developed from a special form of scatter matrix, whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation in those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.
Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithms. The simulation results show that both the FBP and BPF algorithms can produce satisfactory reconstructions, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.
Leveraging Python Interoperability Tools to Improve Sapphire's Usability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gezahegne, A; Love, N S
2007-12-10
The Sapphire project at the Center for Applied Scientific Computing (CASC) develops and applies an extensive set of data mining algorithms for the analysis of large data sets. Sapphire's algorithms are currently available as a set of C++ libraries. However many users prefer higher level scripting languages such as Python for their ease of use and flexibility. In this report, we evaluate four interoperability tools for the purpose of wrapping Sapphire's core functionality with Python. Exposing Sapphire's functionality through a Python interface would increase its usability and connect its algorithms to existing Python tools.
An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints
Sung, Jinmo; Jeong, Bongju
2014-01-01
The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158
An adaptive evolutionary algorithm for traveling salesman problem with precedence constraints.
Sung, Jinmo; Jeong, Bongju
2014-01-01
The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments.
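The abstract does not describe the genetic operators themselves, so the sketch below shows only one generic way to keep solutions feasible over generations: a mutation that swaps two cities and keeps the swap only if every precedence constraint still holds. The function names, the retry limit and the tiny example instance are assumptions for illustration, not the operators proposed in the paper.

    import random

    def is_feasible(tour, precedences):
        # precedences: iterable of (a, b) pairs meaning city a must precede city b.
        pos = {city: i for i, city in enumerate(tour)}
        return all(pos[a] < pos[b] for a, b in precedences)

    def precedence_preserving_swap(tour, precedences, attempts=50):
        # Feasibility-preserving mutation: swap two cities only when every precedence
        # constraint remains satisfied; otherwise undo the swap and retry.
        tour = list(tour)
        for _ in range(attempts):
            i, j = random.sample(range(len(tour)), 2)
            tour[i], tour[j] = tour[j], tour[i]
            if is_feasible(tour, precedences):
                return tour
            tour[i], tour[j] = tour[j], tour[i]
        return tour                                    # unchanged if no feasible swap found

    base = [0, 1, 2, 3, 4, 5]
    prec = {(0, 3), (1, 4)}                            # visit 0 before 3 and 1 before 4
    child = precedence_preserving_swap(base, prec)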
Knowledge-based low-level image analysis for computer vision systems
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.
1988-01-01
Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.
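As a rough illustration of the second algorithm's idea of extracting regions adaptively from local histograms, the sketch below thresholds each image window from a statistic of its own histogram; the window size, bin count and the simple histogram-weighted-mean threshold are assumptions for illustration, and the sketch does not reproduce the pyramid-based linking or the rule-based analysis stage.

    import numpy as np

    def local_histogram_segmentation(image, window=32, bins=64):
        # Window-wise adaptive segmentation: each window is thresholded at a statistic
        # computed from its own local histogram, so the decision adapts to local
        # properties of the image.
        h, w = image.shape
        labels = np.zeros((h, w), dtype=np.uint8)
        for r in range(0, h, window):
            for c in range(0, w, window):
                block = image[r:r + window, c:c + window]
                hist, edges = np.histogram(block, bins=bins)
                centers = 0.5 * (edges[:-1] + edges[1:])
                thresh = np.average(centers, weights=hist)   # histogram-weighted mean
                labels[r:r + window, c:c + window] = (block > thresh).astype(np.uint8)
        return labels

    labels = local_histogram_segmentation(np.random.rand(128, 128))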
Next-Generation Single-Use Ureteroscopes: An In Vitro Comparison.
Tom, Westin R; Wollin, Daniel A; Jiang, Ruiyang; Radvak, Daniela; Simmons, Walter Neal; Preminger, Glenn M; Lipkin, Michael E
2017-12-01
Single-use ureteroscopes have been gaining popularity in recent years. We compare the optics, deflection, and irrigation flow of two novel single-use flexible ureteroscopes, the YC-FR-A and the NeoFlex, with contemporary reusable and single-use flexible ureteroscopes. Five flexible ureteroscopes, YC-FR-A (YouCare Tech, China), NeoFlex (Neoscope, Inc., USA), LithoVue (Boston Scientific, USA), Flex-Xc (Karl Storz, Germany), and Cobra (Richard Wolf, Germany), were assessed in vitro for image resolution, distortion, field of view, depth of field, color representation, and grayscale imaging. Ureteroscope deflection and irrigation were also compared. The YC-FR-A showed a resolution of 5.04 lines/mm and 4.3% image distortion. NeoFlex showed a resolution of 17.9 lines/mm and 14.0% image distortion. No substantial difference was demonstrated regarding the other optic characteristics between the two. Across all tested ureteroscopes, single-use or reusable, the digital scopes performed best with regard to optics. The YC-FR-A had the greatest deflection at baseline, but lacks two-way deflection. The NeoFlex had comparable deflection at baseline to reusable devices. Both ureteroscopes had substantial loss of deflection with instruments in the working channel. The YC-FR-A had the greatest irrigation rate. The NeoFlex has comparable irrigation to contemporary ureteroscopes. The YouCare single-use fiberoptic flexible ureteroscope and NeoFlex single-use digital flexible ureteroscope perform comparably to current reusable ureteroscopes, possibly making each a viable alternative in the future. Newer YouCare single-use flexible ureteroscopes with a digital platform and two-way deflection may be more competitive, while the NeoFlex devices are undergoing rapid improvement as well. Further testing is necessary to validate the clinical performance and utility of these ureteroscopes, given the wide variety of single-use devices under development.
Shape accuracy optimization for cable-rib tension deployable antenna structure with tensioned cables
NASA Astrophysics Data System (ADS)
Liu, Ruiwei; Guo, Hongwei; Liu, Rongqiang; Wang, Hongxiang; Tang, Dewei; Song, Xiaoke
2017-11-01
Shape accuracy is of substantial importance in deployable structures as the demand for large-scale deployable structures in various fields, especially in aerospace engineering, increases. The main purpose of this paper is to present a shape accuracy optimization method to find the optimal pretensions for the desired shape of a cable-rib tension deployable antenna structure with tensioned cables. First, an analysis model of the deployable structure is established by using the finite element method. In this model, geometrical nonlinearity is considered for the cable element and beam element. Flexible deformations of the deployable structure under the action of the cable network and tensioned cables are subsequently analyzed separately. Moreover, the influence of the pretension of tensioned cables on natural frequencies is studied. Based on the results, a genetic algorithm is used to find a set of reasonable pretensions and thus minimize structural deformation under the first natural frequency constraint. Finally, numerical simulations are presented to analyze the deployable structure under two kinds of constraints. Results show that the shape accuracy and natural frequencies of the deployable structure can be effectively improved by pretension optimization.
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Smith, Austin; Oliver, T. Emerson
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGAs) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data and an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
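The flight algorithms themselves are not given in the abstract, so the sketch below only illustrates the generic pattern of persistence-based disqualification followed by mid-value down-selection among redundant rate measurements; the residual limit, strike count, use of the median and all variable names are assumptions for illustration, not the SLS SDQ design.

    import numpy as np

    def sdq_step(rates, healthy, strikes, residual_limit=0.02, max_strikes=5):
        # Generic redundancy-management step: flag sensors whose rate persistently
        # deviates from the median of the healthy set, disqualify them after a number
        # of consecutive strikes, then down-select the mid-value of what remains.
        rates = np.asarray(rates, dtype=float)
        reference = np.median(rates[healthy])
        for i in np.where(healthy)[0]:
            if abs(rates[i] - reference) > residual_limit:
                strikes[i] += 1
            else:
                strikes[i] = 0
            if strikes[i] >= max_strikes:
                healthy[i] = False            # remove faulted sensor from forward processing
        selected = np.median(rates[healthy])  # mid-value down-selection
        return selected, healthy, strikes

    healthy = np.array([True, True, True])
    strikes = np.zeros(3, dtype=int)
    selected, healthy, strikes = sdq_step([0.101, 0.099, 0.150], healthy, strikes)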
Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data.
Barros, Rodrigo C; Winck, Ana T; Machado, Karina S; Basgalupp, Márcio P; de Carvalho, André C P L F; Ruiz, Duncan D; de Souza, Osmar Norberto
2012-11-21
This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA associated with Mycobacterium tuberculosis. This problem is found within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, especially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for the prediction of the free energy from the binding of a drug candidate with a flexible receptor.
Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data
2012-01-01
Background: This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA associated with Mycobacterium tuberculosis. This problem is found within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, especially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for the prediction of the free energy from the binding of a drug candidate with a flexible receptor. PMID:23171000
Embedded algorithms within an FPGA-based system to process nonlinear time series data
NASA Astrophysics Data System (ADS)
Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.
2008-03-01
This paper presents preliminary results of an ongoing project in which a pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core. The goal is to enable and optimize onboard processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast-growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better computational and power efficiency.
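The Hilbert-transform feature extraction at the heart of that processing can be prototyped in floating point before committing to a fixed-point FPGA design; a minimal sketch using SciPy and a synthetic chirp signal (both assumptions for illustration, not elements of the authors' system):

# Floating-point prototype of Hilbert-transform feature extraction.
# The chirp test signal is an assumption, not the authors' data.
import numpy as np
from scipy.signal import hilbert, chirp

fs = 1000.0                                   # sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
x = chirp(t, f0=5.0, f1=50.0, t1=2.0)         # nonstationary test signal

analytic = hilbert(x)                         # analytic signal x + j*H{x}
inst_amplitude = np.abs(analytic)             # instantaneous envelope
inst_phase = np.unwrap(np.angle(analytic))
inst_frequency = np.diff(inst_phase) * fs / (2.0 * np.pi)   # instantaneous frequency [Hz]

print(inst_frequency[:5], inst_amplitude[:5])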
Planning and Execution: The Spirit of Opportunity for Robust Autonomous Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola
2004-01-01
One of the most exciting endeavors pursued by humankind is the search for life in the Solar System and the Universe at large. NASA is leading this effort by designing, deploying and operating robotic systems that will reach planets, planet moons, asteroids and comets searching for water, organic building blocks and signs of past or present microbial life. None of these missions will be achievable without substantial advances in the design, implementation and validation of autonomous control agents. These agents must be capable of robustly controlling a robotic explorer in a hostile environment with very limited or no communication with Earth. The talk focuses on work pursued at the NASA Ames Research Center, ranging from basic research on algorithms to deployed mission support systems. We will start by discussing how planning and scheduling technology derived from the Remote Agent experiment is being used daily in the operations of the Spirit and Opportunity rovers. Planning and scheduling is also used as the fundamental paradigm at the core of our research in real-time autonomous agents. In particular, we will describe our efforts in the Intelligent Distributed Execution Architecture (IDEA), a multi-agent real-time architecture that exploits artificial intelligence planning as the core reasoning engine of an autonomous agent. We will also describe how the issue of plan robustness at execution can be addressed by novel constraint propagation algorithms capable of giving the tightest exact bounds on resource consumption for all possible executions of a flexible plan.
Control of nonlinear flexible space structures
NASA Astrophysics Data System (ADS)
Shi, Jianjun
With the advances made in computer technology and the efficiency of numerical algorithms over the last decade, MPC strategies have become quite popular within the control community. However, application of MPC or GPC to flexible space structure control has not been explored adequately in the literature. The work presented in this thesis primarily focuses on application of GPC to control of nonlinear flexible space structures. This thesis is particularly devoted to the development of various approximate dynamic models, design and assessment of candidate controllers, and extensive numerical simulations for a realistic multibody flexible spacecraft, namely, Jupiter Icy Moons Orbiter (JIMO)---a Prometheus class of spacecraft proposed by NASA for deep space exploratory missions. A stable GPC algorithm is developed for Multi-Input-Multi-Output (MIMO) systems. An end-point weighting (penalty) is used in the GPC cost function to guarantee the nominal stability of the closed-loop system. A method is given to compute the desired end-point state from the desired output trajectory. Methodologies based on the Fake Algebraic Riccati Equation (FARE) and constrained nonlinear optimization are developed for synthesis of the state weighting matrix. This makes the formulation more practical. A stable reconfigurable GPC architecture is presented and its effectiveness is demonstrated on both aircraft and spacecraft models. A representative in-orbit maneuver is used for assessing the performance of various control strategies using various design models. Different approximate dynamic models used for analysis include linear single body flexible structure, nonlinear single body flexible structure, and nonlinear multibody flexible structure. The control laws evaluated include traditional GPC, feedback linearization-based GPC (FLGPC), reconfigurable GPC, and nonlinear dissipative control. These various control schemes are evaluated for robust stability and robust performance in the presence of parametric uncertainties and input disturbances. Finally, conclusions are drawn with regard to the efficacy of these controllers and potential directions for future research.
Unsymmetric Lanczos model reduction and linear state function observer for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1991-01-01
This report summarizes part of the research work accomplished during the second year of a two-year grant. The research, entitled 'Application of Lanczos Vectors to Control Design of Flexible Structures' concerns various ways to use Lanczos vectors and Krylov vectors to obtain reduced-order mathematical models for use in the dynamic response analyses and in control design studies. This report presents a one-sided, unsymmetric block Lanczos algorithm for model reduction of structural dynamics systems with unsymmetric damping matrix, and a control design procedure based on the theory of linear state function observers to design low-order controllers for flexible structures.
Attitude control system testing on SCOLE
NASA Technical Reports Server (NTRS)
Shenhar, J.; Sparks, D., Jr.; Williams, J. P.; Montgomery, R. C.
1988-01-01
This paper presents the implementation of two control policies on SCOLE (Space Control Laboratory Experiment), a laboratory apparatus representing an offset-feed antenna attached to the Space Shuttle by a flexible mast. In the first case, the flexible mast was restrained by cables, permitting modeling of SCOLE as a rigid body. Starting from an arbitrary state, SCOLE was maneuvered to a specified terminal state using a rigid-body minimum-time control law. In the second case, the so-called single-step optimal control (SSOC) theory is applied to suppress vibrations of the flexible mast mounted as a cantilever beam. Based on the SSOC theory, two parameter optimization algorithms were developed.
NASA Technical Reports Server (NTRS)
Lee, Soo Han
1988-01-01
The efficiency and positional accuracy of a lightweight flexible manipulator are limited by its flexural vibrations, which last after a gross motion is completed. The vibration delays subsequent operations. In the proposed work, the vibration is suppressed by inertial force of a small arm in addition to the joint actuators and passive damping treatment. The proposed approach is: (1) Dynamic modeling of a combined system, a large flexible manipulator and a small arm, (2) Determination of optimal sensor location and controller algorithm, and (3) Verification of the fitness of model and the performance of controller.
High speed, precision motion strategies for lightweight structures
NASA Technical Reports Server (NTRS)
Book, Wayne J.
1989-01-01
Research on space telerobotics is summarized. Adaptive control experiments on the Robotic Arm, Large and Flexible (RALF) were performed and are documented, along with a joint controller design for the Small Articulated Manipulator (SAM), which is mounted on the RALF. A control algorithm is described as a robust decentralized adaptive control based on a bounded uncertainty approach. Dynamic interactions between SAM and RALF are examined. Instability of the manipulator is studied from the perspective that the inertial forces generated could actually be used to more rapidly damp out the flexible manipulator's vibration. Currently being studied is the modeling of the constrained dynamics of flexible arms.
Carbon Nanotube Flexible and Stretchable Electronics
NASA Astrophysics Data System (ADS)
Cai, Le; Wang, Chuan
2015-08-01
The low-cost and large-area manufacturing of flexible and stretchable electronics using printing processes could radically change people's perspectives on electronics and substantially expand the spectrum of potential applications. Examples range from personalized wearable electronics to large-area smart wallpapers and from interactive bio-inspired robots to implantable health/medical apparatus. Owing to its one-dimensional structure and superior electrical property, carbon nanotube is one of the most promising material platforms for flexible and stretchable electronics. Here in this paper, we review the recent progress in this field. Applications of single-wall carbon nanotube networks as channel semiconductor in flexible thin-film transistors and integrated circuits, as stretchable conductors in various sensors, and as channel material in stretchable transistors will be discussed. Lastly, state-of-the-art advancement on printing process, which is ideal for large-scale fabrication of flexible and stretchable electronics, will also be reviewed in detail.
Carbon Nanotube Flexible and Stretchable Electronics.
Cai, Le; Wang, Chuan
2015-12-01
The low-cost and large-area manufacturing of flexible and stretchable electronics using printing processes could radically change people's perspectives on electronics and substantially expand the spectrum of potential applications. Examples range from personalized wearable electronics to large-area smart wallpapers and from interactive bio-inspired robots to implantable health/medical apparatus. Owing to its one-dimensional structure and superior electrical property, carbon nanotube is one of the most promising material platforms for flexible and stretchable electronics. Here in this paper, we review the recent progress in this field. Applications of single-wall carbon nanotube networks as channel semiconductor in flexible thin-film transistors and integrated circuits, as stretchable conductors in various sensors, and as channel material in stretchable transistors will be discussed. Lastly, state-of-the-art advancement on printing process, which is ideal for large-scale fabrication of flexible and stretchable electronics, will also be reviewed in detail.
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
An improved semi-implicit method for structural dynamics analysis
NASA Technical Reports Server (NTRS)
Park, K. C.
1982-01-01
A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
Spiral trajectory design: a flexible numerical algorithm and base analytical equations.
Pipe, James G; Zwart, Nicholas R
2014-01-01
Spiral-based trajectories for magnetic resonance imaging can be advantageous, but are often cumbersome to design or create. This work presents a flexible numerical algorithm for designing trajectories based on explicit definition of radial undersampling, and also gives several analytical expressions for characterizing the base (critically sampled) class of these trajectories. Expressions for the gradient waveform, based on slew and amplitude limits, are developed such that a desired pitch in the spiral k-space trajectory is followed. The source code for this algorithm, written in C, is publicly available. Analytical expressions approximating the spiral trajectory (ignoring the radial component) are given to characterize measurement time, gradient heating, maximum gradient amplitude, and off-resonance phase for slew-limited and gradient amplitude-limited cases. Several numerically calculated trajectories are illustrated, and base Archimedean spirals are compared with analytically obtained results. Several different waveforms illustrate that the desired slew and amplitude limits are reached, as are the desired undersampling patterns, using the numerical method. For base Archimedean spirals, the results of the numerical and analytical approaches are in good agreement. A versatile numerical algorithm was developed, and was written in publicly available code. Approximate analytical formulas are given that help characterize spiral trajectories. Copyright © 2013 Wiley Periodicals, Inc.
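A rough numerical illustration of the constrained-stepping idea (assumed parameter values and a simple greedy step search, not the paper's published C code): an Archimedean spiral k(θ) = λθ·exp(iθ) is advanced one raster time at a time, each time taking the largest angular increment whose implied gradient stays under the amplitude limit and whose gradient change stays under the slew limit.

import numpy as np

gamma = 42.58e6                        # gyromagnetic ratio [Hz/T]
fov, n_leaves, res = 0.24, 16, 0.001   # field of view [m], interleaves, resolution [m] (assumed)
lam = n_leaves / (2 * np.pi * fov)     # Archimedean pitch: |k|(theta) = lam * theta
kmax = 1.0 / (2 * res)                 # edge of k-space [1/m]
gmax, smax, dt = 40e-3, 150.0, 4e-6    # gradient [T/m], slew [T/m/s], raster time [s] (assumed)

def k_of(theta):
    return lam * theta * np.exp(1j * theta)

theta = 1e-3
ks = [k_of(0.0), k_of(theta)]          # tiny seed step defines the first gradient
while abs(ks[-1]) < kmax:
    g_prev = (ks[-1] - ks[-2]) / (gamma * dt)
    dtheta = 1.0                       # generous upper bound on the angular step [rad]
    while dtheta > 1e-9:               # shrink until both limits are satisfied
        g_try = (k_of(theta + dtheta) - ks[-1]) / (gamma * dt)
        if abs(g_try) <= gmax and abs(g_try - g_prev) / dt <= smax:
            break
        dtheta *= 0.95
    theta += dtheta
    ks.append(k_of(theta))

print(f"{len(ks)} samples, readout {(len(ks) - 1) * dt * 1e3:.2f} ms")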
Flexible low-power RF nanoelectronics in the GHz regime using CVD MoS2
NASA Astrophysics Data System (ADS)
Yogeesh, Maruthi
Two-dimensional (2D) materials have attracted substantial interest for flexible nanoelectronics due to the overall device mechanical flexibility and thickness scalability for high mechanical performance and low operating power. In this work, we demonstrate the first MoS2 RF transistors on flexible substrates based on CVD-grown monolayers, featuring record GHz cutoff frequency (5.6 GHz) and saturation velocity (~1.8×10⁶ cm/s), which is significantly superior to contemporary organic and metal oxide thin-film transistors. Furthermore, multicycle three-point bending results demonstrated the electrical robustness of our flexible MoS2 transistors after 10,000 cycles of mechanical bending. Additionally, basic RF communication circuit blocks such as amplifier, mixer and wireless AM receiver have been demonstrated. These collective results indicate that MoS2 is an ideal advanced semiconducting material for low-power, RF devices for large-area flexible nanoelectronics and smart nanosystems owing to its unique combination of large bandgap, high saturation velocity and high mechanical strength.
320-nm Flexible Solution-Processed 2,7-dioctyl[1] benzothieno[3,2-b]benzothiophene Transistors.
Ren, Hang; Tang, Qingxin; Tong, Yanhong; Liu, Yichun
2017-08-09
Flexible organic thin-film transistors (OTFTs) have received extensive attention due to their outstanding advantages such as light weight, low cost, flexibility, large-area fabrication, and compatibility with solution-processed techniques. However, compared with a rigid substrate, it still remains a challenge to obtain good device performance by directly depositing solution-processed organic semiconductors onto an ultrathin plastic substrate. In this work, ultrathin flexible OTFTs are successfully fabricated based on spin-coated 2,7-dioctyl[1]benzothieno[3,2-b]benzothiophene (C8-BTBT) films. The resulting device thickness is only ~320 nm, so the device has the ability to adhere well to a three-dimensional curved surface. The ultrathin C8-BTBT OTFTs exhibit a mobility as high as 4.36 cm² V⁻¹ s⁻¹ and an on/off current ratio of over 10⁶. These results indicate the substantial promise of our ultrathin flexible C8-BTBT OTFTs for next-generation flexible and conformal electronic devices.
320-nm Flexible Solution-Processed 2,7-dioctyl[1] benzothieno[3,2-b]benzothiophene Transistors
Ren, Hang; Tang, Qingxin; Tong, Yanhong; Liu, Yichun
2017-01-01
Flexible organic thin-film transistors (OTFTs) have received extensive attention due to their outstanding advantages such as light weight, low cost, flexibility, large-area fabrication, and compatibility with solution-processed techniques. However, compared with a rigid substrate, it still remains a challenge to obtain good device performance by directly depositing solution-processed organic semiconductors onto an ultrathin plastic substrate. In this work, ultrathin flexible OTFTs are successfully fabricated based on spin-coated 2,7-dioctyl[1]benzothieno[3,2-b]benzothiophene (C8-BTBT) films. The resulting device thickness is only ~320 nm, so the device has the ability to adhere well to a three-dimensional curved surface. The ultrathin C8-BTBT OTFTs exhibit a mobility as high as 4.36 cm² V⁻¹ s⁻¹ and an on/off current ratio of over 10⁶. These results indicate the substantial promise of our ultrathin flexible C8-BTBT OTFTs for next-generation flexible and conformal electronic devices. PMID:28792438
Optimum Aggregation and Control of Spatially Distributed Flexible Resources in Smart Grid
Bhattarai, Bishnu; Mendaza, Iker Diaz de Cerio; Myers, Kurt S.; ...
2017-03-24
This paper presents an algorithm to optimally aggregate spatially distributed flexible resources at strategic microgrid/smart-grid locations. The aggregation reduces a distribution network having thousands of nodes to an equivalent network with a few aggregated nodes, thereby enabling distribution system operators (DSOs) to make faster operational decisions. Moreover, the aggregation enables flexibility from small distributed flexible resources to be traded to different power and energy markets. A hierarchical control architecture comprising a combination of centralized and decentralized control approaches is proposed to practically deploy the aggregated flexibility. The proposed method serves as a great operational tool for DSOs to decide the exact amount of required flexibilities from different network section(s) for solving grid constraint violations. The effectiveness of the proposed method is demonstrated through simulation of three operational scenarios in a real low voltage distribution system having high penetrations of electric vehicles and heat pumps. Finally, the simulation results demonstrated that the aggregation helps DSOs not only in taking faster operational decisions, but also in effectively utilizing the available flexibility.
Optimum Aggregation and Control of Spatially Distributed Flexible Resources in Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattarai, Bishnu; Mendaza, Iker Diaz de Cerio; Myers, Kurt S.
This paper presents an algorithm to optimally aggregate spatially distributed flexible resources at strategic microgrid/smart-grid locations. The aggregation reduces a distribution network having thousands of nodes to an equivalent network with a few aggregated nodes, thereby enabling distribution system operators (DSOs) to make faster operational decisions. Moreover, the aggregation enables flexibility from small distributed flexible resources to be traded to different power and energy markets. A hierarchical control architecture comprising a combination of centralized and decentralized control approaches is proposed to practically deploy the aggregated flexibility. The proposed method serves as a great operational tool for DSOs to decide the exact amount of required flexibilities from different network section(s) for solving grid constraint violations. The effectiveness of the proposed method is demonstrated through simulation of three operational scenarios in a real low voltage distribution system having high penetrations of electric vehicles and heat pumps. Finally, the simulation results demonstrated that the aggregation helps DSOs not only in taking faster operational decisions, but also in effectively utilizing the available flexibility.
A Software Architecture for Adaptive Modular Sensing Systems
Lyle, Andrew C.; Naish, Michael D.
2010-01-01
By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614
A software architecture for adaptive modular sensing systems.
Lyle, Andrew C; Naish, Michael D
2010-01-01
By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.
A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.
Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin
2014-01-01
This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces the error while solving an ill-conditioned system. Computation of the determinant of a matrix has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, thereby saving unnecessary calculations. These algorithms are highly classroom-oriented, easy to use, and easily implemented by students. By taking advantage of the flexibility in pivot selection, one may easily avoid the development of fractions. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
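As a rough illustration of determinant evaluation with a freely chosen pivot and order reduction at each iteration (a sketch in the spirit of the paper, not its exact procedure):

# Determinant by repeated elimination about a freely chosen pivot,
# shrinking the matrix by one order each iteration. The pivot rule used
# here (largest entry) is just one possible choice; any nonzero entry works.
import numpy as np

def det_flexible_pivot(A, choose_pivot=None):
    A = np.array(A, dtype=float)
    sign, det = 1.0, 1.0
    while A.shape[0] > 1:
        if choose_pivot is None:
            p, q = np.unravel_index(np.argmax(np.abs(A)), A.shape)
        else:
            p, q = choose_pivot(A)
        piv = A[p, q]
        if piv == 0.0:
            return 0.0                      # under the largest-entry rule, a zero pivot means a zero matrix
        for i in range(A.shape[0]):         # eliminate column q elsewhere (determinant unchanged)
            if i != p:
                A[i, :] -= (A[i, q] / piv) * A[p, :]
        sign *= (-1.0) ** (p + q)           # cofactor sign of the pivot position
        det *= piv
        A = np.delete(np.delete(A, p, axis=0), q, axis=1)   # drop pivot row and column
    return sign * det * A[0, 0]

M = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 2.0]]
print(det_flexible_pivot(M), np.linalg.det(M))   # the two values should agree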
High Rate Digital Demodulator ASIC
NASA Technical Reports Server (NTRS)
Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew
1998-01-01
The architecture of a High Rate (600 Mega-bits per second) Digital Demodulator (HRDD) ASIC capable of demodulating BPSK and QPSK modulated data is presented in this paper. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require high processing rates, necessitating a hardware implementation in a technology other than CMOS, such as Gallium Arsenide (GaAs), which has high cost and power requirements. It is more desirable to use CMOS technology with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms to process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver has the capability to demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 Mega-bits per second (Mbps) per channel. This paper will provide an overview of the parallel architecture and features of the HRDD ASIC. In addition, this paper will provide an overview of the implementation of the hardware architectures used to create flexibility over conventional high rate analog or hybrid receivers. This flexibility includes a wide range of data rates, modulation schemes, and operating environments. In conclusion, it will be shown how this high rate digital demodulator can be used with an off-the-shelf A/D and a flexible analog front end, both of which are numerically computer controlled, to produce a very flexible, low-cost, high-rate digital receiver.
Control of flexible robots with prismatic joints and hydraulic drives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, L.J.; Kress, R.L.; Jansen, J.F.
1997-03-01
The design and control of long-reach, flexible manipulators has been an active research topic for over 20 years. Most of the research to date has focused on single-link, fixed-length, single-plane-of-vibration test beds. In addition, actuation has been predominantly based upon electromagnetic motors. Ironically, these elements are rarely found in the existing industrial long-reach systems. One example is the Modified Light Duty Utility Arm (MLDUA) designed and built by Spar Aerospace for Oak Ridge National Laboratory (ORNL). This arm operates in large, underground waste storage tanks located at ORNL. The size and nature of the tanks require that the robot have a reach of approximately 15 ft and a payload capacity of 250 lb. In order to achieve these criteria, each joint is hydraulically actuated. Furthermore, the robot has a prismatic degree-of-freedom to ease deployment. When fully extended, the robot's first natural frequency is 1.76 Hz. Many of the projected tasks, coupled with the robot's flexibility, present an interesting problem. How will many of the existing flexure control algorithms perform on a hydraulic, long-reach manipulator with prismatic links? To minimize cost and risk of testing these algorithms on the MLDUA, the authors have designed a new test bed that contains many of the same elements. This manuscript describes a new hydraulically actuated, long-reach manipulator with a flexible prismatic link at ORNL. Focus is directed toward both modeling and control of hydraulic actuators as well as flexible links that have variable natural frequencies.
Modelling the Shuttle Remote Manipulator System: Another flexible model
NASA Technical Reports Server (NTRS)
Barhorst, Alan A.
1993-01-01
High fidelity elastic system modeling algorithms are discussed. The particular system studied is the Space Shuttle Remote Manipulator System (RMS) undergoing full articulated motion. The model incorporates flexibility via a methodology the author has been developing. The technique is based in variational principles, so rigorous boundary condition generation and weak formulations for the associated partial differential equations are realized, yet the analyst need not integrate by parts. The methodology is formulated using vector-dyad notation with minimal use of tensor notation, therefore the technique is believed to be affable to practicing engineers. The objectives of this work are as follows: (1) determine the efficacy of the modeling method; and (2) determine if the method affords an analyst advantages in the overall modeling and simulation task. Generated out of necessity were Mathematica algorithms that quasi-automate the modeling procedure and simulation development. The project was divided into sections as follows: (1) model development of a simplified manipulator; (2) model development of the full-freedom RMS including a flexible movable base on a six degree of freedom orbiter (a rigid-body is attached to the manipulator end-effector); (3) simulation development for item 2; and (4) comparison to the currently used model of the flexible RMS in the Structures and Mechanics Division of NASA JSC. At the time of the writing of this report, items 3 and 4 above were not complete.
Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.
Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji
2015-12-01
We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.
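The autocorrelation analysis behind such a z estimate can be sketched as follows: estimate the integrated autocorrelation time τ of an observable time series with a self-consistent window, repeat for several system sizes L, and fit τ ∼ L^z. The AR(1) series below is a synthetic stand-in, not Heisenberg-model data, and the windowing rule is one common choice rather than the authors' exact procedure.

import numpy as np

def integrated_autocorr_time(x, c=6.0):
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lag >= 0
    acf /= acf[0]
    tau = 0.5
    for t in range(1, len(acf)):
        tau += acf[t]
        if t >= c * tau:          # self-consistent truncation window (Sokal-style)
            break
    return tau

rng = np.random.default_rng(0)
series = np.zeros(100_000)
for i in range(1, len(series)):   # AR(1); its exact tau is (1 + phi) / (2 * (1 - phi)) = 19.5
    series[i] = 0.95 * series[i - 1] + rng.normal()
print(integrated_autocorr_time(series))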
Why don’t you use Evolutionary Algorithms in Big Data?
NASA Astrophysics Data System (ADS)
Stanovov, Vladimir; Brester, Christina; Kolehmainen, Mikko; Semenkina, Olga
2017-02-01
In this paper we raise the question of using evolutionary algorithms in the area of Big Data processing. We show that evolutionary algorithms provide evident advantages due to their high scalability and flexibility, their ability to solve global optimization problems and optimize several criteria at the same time for feature selection, instance selection and other data reduction problems. In particular, we consider the usage of evolutionary algorithms with all kinds of machine learning tools, such as neural networks and fuzzy systems. All our examples prove that Evolutionary Machine Learning is becoming more and more important in data analysis and we expect to see the further development of this field especially in respect to Big Data.
77 FR 7663 - Introduction to the Unified Agenda of Federal Regulatory and Deregulatory Actions
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-13
...The Regulatory Flexibility Act requires that agencies publish semiannual regulatory agendas in the Federal Register describing regulatory actions they are developing that may have a significant economic impact on a substantial number of small entities (5 U.S.C. 602). Executive Order 12866 ``Regulatory Planning and Review,'' signed September 30, 1993 (58 FR 51735), and Office of Management and Budget memoranda implementing section 4 of that Order establish minimum standards for agencies' agendas, including specific types of information for each entry. The Unified Agenda of Federal Regulatory and Deregulatory Actions (Unified Agenda) helps agencies fulfill these requirements. All Federal regulatory agencies have chosen to publish their regulatory agendas as part of the Unified Agenda. Editions of the Unified Agenda prior to fall 2007 were printed in their entirety in the Federal Register. Beginning with the fall 2007 edition, the Internet is the basic means for conveying regulatory agenda information to the maximum extent legally permissible. The complete Unified Agenda for fall 2011, which contains the regulatory agendas for 59 Federal agencies, is available to the public at http:// reginfo.gov. The fall 2011 Unified Agenda publication appearing in the Federal Register consists of agency regulatory flexibility agendas, in accordance with the publication requirements of the Regulatory Flexibility Act. Agency regulatory flexibility agendas contain only those Agenda entries for rules that are likely to have a significant economic impact on a substantial number of small entities and entries that have been selected for periodic review under section 610 of the Regulatory Flexibility Act.
Sinn, Chi-Ling Joanna; Jones, Aaron; McMullan, Janet Legge; Ackerman, Nancy; Curtin-Telegdi, Nancy; Eckel, Leslie; Hirdes, John P
2017-11-25
Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and interRAI Community Health Assessment that are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support calculated based on the record of billed services. The Personal Support Algorithm classified need for personal support into six groups with a 32-fold difference in average billed hours of personal support services between the highest and lowest group. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.
Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.
Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
Flexible manufacturing system (FMS) enhances the firm’s flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs’ battery charge. Assessment of the numerical examples’ scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software. PMID:28263994
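A minimal genetic-algorithm sketch of the task-assignment core (task durations, fleet size, and GA settings are illustrative assumptions; the paper's full model with battery charge and the AGV-count objective is not reproduced):

import random

durations = [7, 3, 9, 4, 6, 8, 2, 5, 5, 4]    # one entry per transport task (assumed)
n_agvs, pop_size, generations = 3, 40, 200

def makespan(assign):                          # assign[i] = index of the AGV handling task i
    loads = [0] * n_agvs
    for task, agv in enumerate(assign):
        loads[agv] += durations[task]
    return max(loads)

def tournament(pop):
    return min(random.sample(pop, 3), key=makespan)

random.seed(1)
pop = [[random.randrange(n_agvs) for _ in durations] for _ in range(pop_size)]
for _ in range(generations):
    new_pop = []
    while len(new_pop) < pop_size:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, len(durations))             # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:                             # mutation: reassign one task
            child[random.randrange(len(child))] = random.randrange(n_agvs)
        new_pop.append(child)
    pop = new_pop

best = min(pop, key=makespan)
print("assignment:", best, "makespan:", makespan(best))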
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A
2011-01-01
Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms to provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good amount of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.
A Novel Fast and Secure Approach for Voice Encryption Based on DNA Computing
NASA Astrophysics Data System (ADS)
Kakaei Kate, Hamidreza; Razmara, Jafar; Isazadeh, Ayaz
2018-06-01
Today, in the world of information communication, voice information has a particular importance. One way to preserve voice data from attacks is voice encryption. Encryption algorithms use various techniques such as hashing, chaotic maps, mixing, and many others. In this paper, an algorithm is proposed for voice encryption based on three different schemes to increase the flexibility and strength of the algorithm. The proposed algorithm uses an innovative encoding scheme, the DNA encryption technique and a permutation function to provide a secure and fast solution for voice encryption. The algorithm is evaluated based on various measures including signal-to-noise ratio, peak signal-to-noise ratio, correlation coefficient, signal similarity and signal frequency content. The results demonstrate the applicability of the proposed method to secure and fast encryption of voice files.
NASA Astrophysics Data System (ADS)
Zou, Zhen-Zhen; Yu, Xu-Tao; Zhang, Zai-Chen
2018-04-01
First, the entanglement-source deployment problem, which has a significant influence on quantum connectivity, is studied in a quantum multi-hop network. Two optimization algorithms for a limited number of entanglement sources are introduced in this paper. A deployment algorithm based on node position (DNP) improves connectivity by guaranteeing that all overlapping areas of the distribution ranges of the entanglement sources contain nodes. In addition, a deployment algorithm based on an improved genetic algorithm (DIGA) is implemented by dividing the region into grids. From the simulation results, DNP and DIGA improve quantum connectivity by 213.73% and 248.83% compared to random deployment, respectively, and the latter performs better in terms of connectivity. However, DNP is more flexible and adaptive to change, as it stops running when all nodes are covered.
Personnel occupied woven envelope robot power
NASA Technical Reports Server (NTRS)
Wessling, F. C.
1988-01-01
The Personnel Occupied Woven Envelope Robot (POWER) concept has evolved over the course of the study. The goal of the project was the development of methods and algorithms for solid modeling for the flexible robot arm.
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves
Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithms. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results with image quality in the ROI comparable to that of the corresponding global CT reconstruction. PMID:23165018
Tractable Goal Selection with Oversubscribed Resources
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2009-01-01
We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
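A hedged sketch of the flavor of such fast, incremental goal selection: goals with fixed resource costs oversubscribe a budget, and a greedy pass ordered by value per unit cost picks a near-optimal subset that can simply be recomputed whenever requests change at the last minute. The goal list and budget below are illustrative, not the deployed system's data or its exact algorithm.

def select_goals(goals, budget):
    # goals: list of (name, value, cost); returns the chosen goal names
    chosen, remaining = [], budget
    for name, value, cost in sorted(goals, key=lambda g: g[1] / g[2], reverse=True):
        if cost <= remaining:           # greedy by value density within the resource budget
            chosen.append(name)
            remaining -= cost
    return chosen

requests = [("image_A", 10, 4), ("image_B", 7, 3), ("downlink", 9, 5), ("calib", 3, 2)]
print(select_goals(requests, budget=8))  # re-run whenever requests are added or changed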
Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design
NASA Technical Reports Server (NTRS)
Whorton, Mark; Buschek, Harald; Calise, Anthony J.
1996-01-01
Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.
Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis
NASA Technical Reports Server (NTRS)
Whorton, M.; Buschek, H.; Calise, A. J.
1994-01-01
A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.
Practical advantages of evolutionary computation
NASA Astrophysics Data System (ADS)
Fogel, David B.
1997-10-01
Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.
NASA Technical Reports Server (NTRS)
Chen, Shu-Po
1999-01-01
This paper presents software for solving non-conforming fluid-structure interfaces in aeroelastic simulation. It reviews the interpolation and integration algorithms and highlights the flexibility and user-friendly features that allow the user to select existing structure and fluid packages, such as NASTRAN and CFL3D, to perform the simulation. The presented software is validated by computing the High Speed Civil Transport model.
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
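The nonlinear gain idea can be illustrated with a small sketch: a third-order polynomial maps commanded acceleration so that small inputs pass with near-constant gain while large inputs saturate smoothly at the platform's limit. The limits and the choice of fixing the polynomial by its value and slope at the limit are assumptions for illustration, not the tuned coefficients of the report.

import numpy as np

a_max_cmd = 10.0    # largest aircraft input expected [m/s^2] (assumed)
a_max_sim = 4.0     # largest cue the motion platform can honor [m/s^2] (assumed)

# p(a) = c1*a + c3*a^3 with p(a_max_cmd) = a_max_sim and p'(a_max_cmd) = 0,
# which keeps the mapping monotone and lets it saturate smoothly at the limit.
c1 = 1.5 * a_max_sim / a_max_cmd
c3 = -c1 / (3.0 * a_max_cmd ** 2)

def nonlinear_gain(a_cmd):
    a = np.clip(a_cmd, -a_max_cmd, a_max_cmd)
    return c1 * a + c3 * a ** 3

for a in (0.5, 2.0, 5.0, 10.0):
    print(a, "->", round(float(nonlinear_gain(a)), 3))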
Fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, R.
1986-01-01
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithms are their conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure as to the accuracy of the estimator.
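One way to make the idea of recursive frequency refinement concrete (not the paper's algorithm) is a scalar recursive least-squares fit of the identity x[n] = 2cos(ω)x[n-1] − x[n-2] that any sinusoid satisfies, updating the single parameter a = 2cos(ω) sample by sample without storing past observations.

import numpy as np

fs, f_true = 1000.0, 73.0
w = 2 * np.pi * f_true / fs
n = np.arange(400)
rng = np.random.default_rng(1)
x = np.cos(w * n + 0.3) + 0.01 * rng.normal(size=n.size)    # mildly noisy tone (illustrative)

a_hat, P = 0.0, 1e6                       # parameter estimate and its "covariance"
for k in range(2, len(x)):
    phi_k = x[k - 1]                      # regressor
    y_k = x[k] + x[k - 2]                 # model: y_k = a * phi_k with a = 2*cos(w)
    gain = P * phi_k / (1.0 + phi_k * P * phi_k)
    a_hat += gain * (y_k - phi_k * a_hat)
    P -= gain * phi_k * P
    f_est = fs * np.arccos(np.clip(a_hat / 2.0, -1.0, 1.0)) / (2 * np.pi)

print(f"estimated {f_est:.2f} Hz vs true {f_true} Hz")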
Optimal Superpositioning of Flexible Molecule Ensembles
Gapsys, Vytautas; de Groot, Bert L.
2013-01-01
Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks for minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and distance between nearest-neighbor(s) minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
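The baseline formulation the paper starts from, variance minimization over an ensemble, can be sketched as an iterative "align everything to the current mean" loop built on the standard Kabsch rotation; the toy coordinates below are synthetic, and the paper's additional terms for resolving ambiguous rotations are not reproduced here.

import numpy as np

def kabsch(P, Q):
    # optimal proper rotation mapping centered coordinates P onto Q
    H = P.T @ Q
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def random_rotation(rng):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q * np.sign(np.linalg.det(Q))         # force a proper rotation

def superpose_ensemble(frames, n_iter=20):
    frames = [f - f.mean(axis=0) for f in frames]            # remove translation
    ref = frames[0]
    for _ in range(n_iter):
        frames = [(kabsch(f, ref) @ f.T).T for f in frames]  # align all frames to the reference
        ref = np.mean(frames, axis=0)                        # update the mean structure
    return frames

rng = np.random.default_rng(2)
base = rng.normal(size=(30, 3))                              # toy 30-atom structure
ensemble = [(random_rotation(rng) @ (base + 0.05 * rng.normal(size=base.shape)).T).T
            for _ in range(10)]
aligned = superpose_ensemble(ensemble)
mean_structure = np.mean(aligned, axis=0)
print(np.mean([np.mean((f - mean_structure) ** 2) for f in aligned]))   # residual variance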
A flexible approach to distributed data anonymization.
Kohlmayer, Florian; Prasser, Fabian; Eckert, Claudia; Kuhn, Klaus A
2014-08-01
Sensitive biomedical data is often collected from distributed sources, involving different information systems and different organizational units. Local autonomy and legal reasons lead to the need of privacy preserving integration concepts. In this article, we focus on anonymization, which plays an important role for the re-use of clinical data and for the sharing of research data. We present a flexible solution for anonymizing distributed data in the semi-honest model. Prior to the anonymization procedure, an encrypted global view of the dataset is constructed by means of a secure multi-party computing (SMC) protocol. This global representation can then be anonymized. Our approach is not limited to specific anonymization algorithms but provides pre- and postprocessing for a broad spectrum of algorithms and many privacy criteria. We present an extensive analytical and experimental evaluation and discuss which types of methods and criteria are supported. Our prototype demonstrates the approach by implementing k-anonymity, ℓ-diversity, t-closeness and δ-presence with a globally optimal de-identification method in horizontally and vertically distributed setups. The experiments show that our method provides highly competitive performance and offers a practical and flexible solution for anonymizing distributed biomedical datasets. Copyright © 2013 Elsevier Inc. All rights reserved.
Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking
Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng
2017-01-01
Compared with the fixed fusion structure, the flexible fusion structure with mixed fusion methods has better adjustment performance for the complex air task network systems, and it can effectively help the system to achieve the goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative target, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure including sensors and fusion methods in a given adjustment period. Aiming at this, this paper studies the design of a flexible fusion algorithm by using an optimization learning technology. The purpose is to dynamically determine the sensors’ numbers and the associated sensors to take part in the centralized and distributed fusion processes, respectively, herein termed sensor subsets selection. Firstly, two system performance indexes are introduced. Especially, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243
Innovative hyperchaotic encryption algorithm for compressed video
NASA Astrophysics Data System (ADS)
Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang
2002-12-01
It is accepted that a stream cryptosystem can achieve good real-time performance and flexibility by encrypting only selected parts of the block data and header information of the compressed video stream. A chaotic random number generator, for example the Logistic Map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt the compressed video, which integrates the Logistic Map with a linear congruential algorithm over the field Z(2³² − 1) to strengthen the security of the mono-chaotic cryptography, while the real-time performance and flexibility of the chaotic sequence cryptography are maintained. It also integrates asymmetric public-key cryptography and implements encryption and identity authentication of the control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In the innovative hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. The innovative hyperchaotic cryptography proves robustly secure under cryptanalysis, and shows good real-time performance and flexible implementation capability in arithmetic evaluation and testing.
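A toy sketch of the kind of combined keystream described, with illustrative constants: a logistic-map iterate is mixed with a linear congruential generator over Z(2³² − 1) and XORed only onto selected bytes of a stand-in stream (a real scheme would key this to headers and selected coefficients of the compressed video, as the abstract describes).

# Toy combined keystream; the LCG multiplier/increment, byte-selection rule,
# and test data are assumptions for illustration only.
M = 2 ** 32 - 1

def keystream(x0, seed, n):
    x, s, out = x0, seed, []
    for _ in range(n):
        x = 3.9999 * x * (1.0 - x)                 # logistic map in its chaotic regime
        s = (1103515245 * s + 12345) % M           # linear congruential step over Z(2^32 - 1)
        out.append((int(x * 256) ^ (s & 0xFF)) & 0xFF)
    return out

def encrypt_selected(data, key_x0, key_seed, step=4):
    # XOR-encrypt every `step`-th byte (stand-in for header/selected-coefficient bytes)
    data = bytearray(data)
    ks = keystream(key_x0, key_seed, len(data[::step]))
    for i, k in zip(range(0, len(data), step), ks):
        data[i] ^= k
    return bytes(data)

frame = bytes(range(64))                           # stand-in for a compressed video chunk
ct = encrypt_selected(frame, 0.612, 20180101)
pt = encrypt_selected(ct, 0.612, 20180101)         # XOR keystream: the same call decrypts
print(pt == frame)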
OpenSim: A Flexible Distributed Neural Network Simulator with Automatic Interactive Graphics.
Jarosch, Andreas; Leber, Jean Francois
1997-06-01
An object-oriented simulator called OpenSim is presented that achieves a high degree of flexibility by relying on a small set of building blocks. The state variables and algorithms put in this framework can easily be accessed through a command shell. This allows one to distribute a large-scale simulation over several workstations and to generate the interactive graphics automatically. OpenSim opens new possibilities for cooperation among Neural Network researchers. Copyright 1997 Elsevier Science Ltd.
Experiments In Characterizing Vibrations Of A Structure
NASA Technical Reports Server (NTRS)
Yam, Yeung; Hadaegh, Fred Y.; Bayard, David S.
1993-01-01
Report discusses experiments conducted to test methods of identification of vibrational and coupled rotational/vibrational modes of flexible structure. Report one in series that chronicles development of integrated system of methods, sensors, actuators, analog and digital signal-processing equipment, and algorithms to suppress vibrations in large, flexible structure even when dynamics of structure partly unknown and/or changing. Two prior articles describing aspects of research are "Autonomous Frequency-Domain Identification" (NPO-18099) and "Automated Characterization Of Vibrations Of A Structure" (NPO-18141).
Optimal output fast feedback in two-time scale control of flexible arms
NASA Technical Reports Server (NTRS)
Siciliano, B.; Calise, A. J.; Jonnalagadda, V. R. P.
1986-01-01
Control of lightweight flexible arms moving along predefined paths can be successfully synthesized on the basis of a two-time scale approach. A model following control can be designed for the reduced order slow subsystem. The fast subsystem is a linear system in which the slow variables act as parameters. The flexible fast variables which model the deflections of the arm along the trajectory can be sensed through strain gage measurements. For full state feedback design the derivatives of the deflections need to be estimated. The main contribution of this work is the design of an output feedback controller which includes a fixed order dynamic compensator, based on a recent convergent numerical algorithm for calculating LQ optimal gains. The design procedure is tested by means of simulation results for the one link flexible arm prototype in the laboratory.
A method of operation scheduling based on video transcoding for cluster equipment
NASA Astrophysics Data System (ADS)
Zhou, Haojie; Yan, Chun
2018-04-01
With the application of cluster technology to real-time video transcoding equipment, such systems face massive growth in the number of video jobs and great diversity in resolution and bit rate, which places new demands on the task scheduling algorithm. This paper analyzes the characteristics of current mainstream clusters for real-time video transcoding and, combining them with the characteristics of the cluster equipment, proposes a task delay scheduling algorithm. The algorithm enables the cluster to achieve better performance in generating the job queue and in the lower part of the job queue when receiving operation instructions. Finally, a small real-time video transcoding cluster is constructed to analyze the computing capability, running time, resource occupation and other aspects of the various scheduling algorithms. The experimental results show that, compared with the traditional cluster task scheduling algorithm, the task delay scheduling algorithm is more flexible and efficient.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for developing logic designs together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and algorithms and heuristics for minimizing the computation of tests; and (2) a method of logic design for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
Swarm Intelligence in Text Document Clustering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xiaohui; Potok, Thomas E
2008-01-01
Social animals or insects in nature often exhibit a form of emergent collective behavior. The research field that attempts to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insect colonies is called Swarm Intelligence. Compared to traditional algorithms, swarm algorithms are usually flexible, robust, decentralized and self-organized. These characteristics make swarm algorithms suitable for solving complex problems, such as document collection clustering. The major challenge of today's information society is that users are overwhelmed with information on any topic they search for. Fast and high-quality document clustering algorithms play an important role in helping users to effectively navigate, summarize, and organize this overwhelming information. In this chapter, we introduce three nature-inspired swarm intelligence clustering approaches for document clustering analysis. These clustering algorithms use stochastic and heuristic principles discovered from observing bird flocks, fish schools and ant food foraging.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
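The two abstracts above refer to extracting Pareto optimal solutions from a population; the binning selection, gene-space transformation and masking features are specific to that work and are not reproduced here. As a generic, minimal Python illustration of the Pareto-dominance filter itself (assuming all objectives are minimized), one might write:

    def dominates(a, b):
        """True if objective vector a dominates b (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(population):
        """Return the non-dominated members of a list of objective vectors."""
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

    # Hypothetical two-objective values for candidate shapes, e.g. (drag, weight).
    pop = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (2.5, 2.5), (3.5, 3.0)]
    print(pareto_front(pop))   # [(1.0, 5.0), (2.0, 3.0), (2.5, 2.5)]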
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; DePascale, Stephen M.; Wing, David J.
2012-01-01
The Autonomous Operations Planner (AOP), developed by NASA, is a flexible and powerful prototype of a flight-deck automation system to support self-separation of aircraft. The AOP incorporates a variety of algorithms to detect and resolve conflicts between the trajectories of its own aircraft and traffic aircraft while meeting route constraints such as required times of arrival and avoiding airspace hazards such as convective weather and restricted airspace. This integrated suite of algorithms provides flight crew support for strategic and tactical conflict resolutions and conflict-free trajectory planning while en route. The AOP has supported an extensive set of experiments covering various conditions and variations on the self-separation concept, yielding insight into the system's design and resolving various challenges encountered in the exploration of the concept. The design of the AOP will enable it to continue to evolve and support experimentation as the self-separation concept is refined.
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
A Generic Guidance and Control Structure for Six-Degree-of-Freedom Conceptual Aircraft Design
NASA Technical Reports Server (NTRS)
Cotting, M. Christopher; Cox, Timothy H.
2005-01-01
A control system framework is presented for both real-time and batch six-degree-of-freedom simulation. This framework allows stabilization and control with multiple command options, from body rate control to waypoint guidance. Also, pilot commands can be used to operate the simulation in a pilot-in-the-loop environment. This control system framework is created by using direct vehicle state feedback with nonlinear dynamic inversion. A direct control allocation scheme is used to command aircraft effectors. Online B-matrix estimation is used in the control allocation algorithm for maximum algorithm flexibility. Primary uses for this framework include conceptual design and early preliminary design of aircraft, where vehicle models change rapidly and a knowledge of vehicle six-degree-of-freedom performance is required. A simulated airbreathing hypersonic vehicle and a simulated high performance fighter are controlled to demonstrate the flexibility and utility of the control system.
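The abstract's combination of dynamic inversion with pseudoinverse control allocation can be illustrated in a few lines; the Python sketch below assumes a simple linear body-rate model and an online-estimated effectiveness matrix, all of which are hypothetical stand-ins rather than the framework's actual models.

    import numpy as np

    def allocate_effectors(B_est, accel_cmd, accel_from_state):
        """Direct allocation via the pseudoinverse of an estimated B matrix:
        solve B u = (commanded acceleration - acceleration due to current state)."""
        return np.linalg.pinv(B_est) @ (accel_cmd - accel_from_state)

    # Hypothetical 3-axis body-rate dynamics with 4 redundant effectors.
    A = np.diag([-0.5, -0.7, -0.3])               # assumed state matrix
    B_est = np.array([[1.0, 0.0, 0.5, -0.5],      # online-estimated effectiveness
                      [0.0, 1.0, 0.5,  0.5],
                      [0.2, 0.2, 1.0,  1.0]])
    omega = np.array([0.1, -0.05, 0.02])           # current body rates (rad/s)
    omega_dot_cmd = np.array([0.3, 0.0, -0.1])     # commanded accelerations

    u = allocate_effectors(B_est, omega_dot_cmd, A @ omega)
    print(u, B_est @ u + A @ omega)                # achieved acceleration ≈ omega_dot_cmd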
Experimental study of adaptive pointing and tracking for large flexible space structures
NASA Technical Reports Server (NTRS)
Boussalis, D.; Bayard, D. S.; Ih, C.; Wang, S. J.; Ahmed, A.
1991-01-01
This paper describes an experimental study of adaptive pointing and tracking control for flexible spacecraft conducted on a complex ground experiment facility. The algorithm used in this study is based on a multivariable direct model reference adaptive control law. Several experimental validation studies were performed earlier using this algorithm for vibration damping and robust regulation, with excellent results. The current work extends previous studies by addressing the pointing and tracking problem. As is consistent with an adaptive control framework, the plant is assumed to be poorly known to the extent that only system level knowledge of its dynamics is available. Explicit bounds on the steady-state pointing error are derived as functions of the adaptive controller design parameters. It is shown that good tracking performance can be achieved in an experimental setting by adjusting adaptive controller design weightings according to the guidelines indicated by the analytical expressions for the error.
NASA Technical Reports Server (NTRS)
Doxley, Charles A.
2016-01-01
In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.
Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd
2014-09-15
The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, Nudged Elastic Band, and umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.
Compact Active Vibration Control System for a Flexible Panel
NASA Technical Reports Server (NTRS)
Schiller, Noah H. (Inventor); Cabell, Randolph H. (Inventor); Perey, Daniel F. (Inventor)
2014-01-01
A diamond-shaped actuator for a flexible panel has an inter-digitated electrode (IDE) and a piezoelectric wafer portion positioned therebetween. The IDE and/or the wafer portion are diamond-shaped. Point sensors are positioned with respect to the actuator and measure vibration. The actuator generates and transmits a cancelling force to the panel in response to an output signal from a controller, which is calculated using a signal describing the vibration. A method for controlling vibration in a flexible panel includes connecting a diamond-shaped actuator to the flexible panel, and then connecting a point sensor to each actuator. Vibration is measured via the point sensor. The controller calculates a proportional output voltage signal from the measured vibration, and transmits the output signal to the actuator to substantially cancel the vibration in proximity to each actuator.
Generation of referring expressions: assessing the Incremental Algorithm.
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard
2012-07-01
A substantial amount of recent work in natural language generation has focused on the generation of ''one-shot'' referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We test this hypothesis by eliciting referring expressions from human subjects and computing the similarity between the expressions elicited and the ones generated by algorithms. It turns out that the success of the IA depends substantially on the ''preference order'' (PO) employed by the IA, particularly in complex domains. While some POs cause the IA to produce referring expressions that are very similar to expressions produced by human subjects, others cause the IA to perform worse than its main competitors; moreover, it turns out to be difficult to predict the success of a PO on the basis of existing psycholinguistic findings or frequencies in corpora. We also examine the computational complexity of the algorithms in question and argue that there are no compelling reasons for preferring the IA over some of its main competitors on these grounds. We conclude that future research on the generation of referring expressions should explore alternatives to the IA, focusing on algorithms, inspired by the Greedy Algorithm, which do not work with a fixed PO. Copyright © 2011 Cognitive Science Society, Inc.
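For readers unfamiliar with the Incremental Algorithm, the following Python sketch renders its usual textbook form: attributes are considered in a fixed preference order and an attribute is included whenever it rules out at least one remaining distractor, stopping once the target is distinguished. The domain objects are hypothetical, and the special treatment of the head noun (type) in the original formulation is omitted.

    def incremental_algorithm(target, distractors, preference_order):
        """Dale & Reiter-style IA: add an attribute whenever it rules out
        at least one remaining distractor; stop when none remain."""
        description, remaining = {}, list(distractors)
        for attr in preference_order:
            value = target[attr]
            if any(d.get(attr) != value for d in remaining):
                description[attr] = value
                remaining = [d for d in remaining if d.get(attr) == value]
            if not remaining:
                return description
        return None   # no distinguishing description found

    target = {"type": "chair", "colour": "red", "size": "large"}
    distractors = [{"type": "chair", "colour": "blue", "size": "large"},
                   {"type": "table", "colour": "red", "size": "small"}]
    print(incremental_algorithm(target, distractors, ["type", "colour", "size"]))
    # {'type': 'chair', 'colour': 'red'}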
NASA Astrophysics Data System (ADS)
Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.
2017-10-01
We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
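The abstract's sample-and-fix idea can be made concrete on a small Ising/QUBO-style problem. The sketch below is a hedged stand-in: a plain simulated-annealing sampler plays the role of "any heuristic solver", the lowest-energy samples vote on which spins to fix, and the fixed spins are folded into the fields of the reduced problem. Thresholds, sweep counts and the agreement rule are illustrative choices, not the authors' settings.

    import numpy as np
    rng = np.random.default_rng(0)

    def energy(s, J, h):
        return s @ J @ s / 2.0 + h @ s             # J symmetric with zero diagonal

    def anneal(J, h, sweeps=200, beta_max=3.0):
        """Very plain simulated annealing; stands in for any low-energy sampler."""
        n = len(h)
        s = rng.choice([-1, 1], size=n)
        for t in range(sweeps):
            beta = beta_max * (t + 1) / sweeps
            for i in rng.permutation(n):
                dE = -2 * s[i] * (J[i] @ s + h[i])  # energy change of flipping spin i
                if dE < 0 or rng.random() < np.exp(-beta * dE):
                    s[i] = -s[i]
        return s

    def fix_and_reduce(J, h, samples, agreement=0.9):
        """Fix spins on which most low-energy samples agree and fold them into
        the fields of the smaller, less connected residual problem."""
        mean = np.mean(samples, axis=0)
        fixed = {i: int(np.sign(mean[i])) for i in range(len(h)) if abs(mean[i]) >= agreement}
        free = [i for i in range(len(h)) if i not in fixed]
        h_red = np.array([h[i] + sum(J[i, j] * v for j, v in fixed.items()) for i in free])
        return fixed, free, J[np.ix_(free, free)], h_red

    n = 30
    J = np.triu(rng.normal(0, 1, (n, n)), 1); J = J + J.T
    h = rng.normal(0, 0.5, n)
    runs = [anneal(J, h) for _ in range(20)]                   # independent multistarts
    best = sorted(runs, key=lambda s: energy(s, J, h))[:10]    # keep the low-energy samples
    fixed, free, J_red, h_red = fix_and_reduce(J, h, best)
    print(len(fixed), "spins fixed;", len(free), "remain for the next round")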
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards where each standard defines multiple modulation schemes, there is a need to have an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on a FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the figured out best low complexity algorithm delivering a high throughput of 166 Msymbols/second for Gray mapped 16-QAM modulation on Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
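As a reference point for the low-complexity architectures discussed above, the exhaustive max-log soft demapper for Gray-mapped 16-QAM can be written directly; hardware-friendly algorithms approximate exactly this computation. The constellation normalization and AWGN noise model in this Python sketch are standard assumptions, not details taken from the paper.

    import numpy as np
    from itertools import product

    def gray_16qam():
        """Gray-mapped 16-QAM: two Gray-coded bits per I axis and two per Q axis."""
        gray = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
        points, labels = [], []
        for b in product([0, 1], repeat=4):
            points.append(complex(gray[b[0], b[1]], gray[b[2], b[3]]))
            labels.append(b)
        return np.array(points) / np.sqrt(10), np.array(labels)   # unit average energy

    def maxlog_llrs(y, sigma2, points, labels):
        """Reference max-log LLRs: for each bit, the difference of the smallest
        scaled squared distances over the bit=1 and bit=0 sub-constellations."""
        d2 = np.abs(y - points) ** 2 / sigma2
        return np.array([d2[labels[:, k] == 1].min() - d2[labels[:, k] == 0].min()
                         for k in range(labels.shape[1])])   # positive favours bit = 0

    points, labels = gray_16qam()
    y = points[5] + 0.05 * (np.random.randn() + 1j * np.random.randn())   # noisy symbol
    print(labels[5], maxlog_llrs(y, sigma2=0.01, points=points, labels=labels))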
NASA Astrophysics Data System (ADS)
Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.
2016-03-01
The dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm because they generate a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms perform well, but they must be pushed to a higher level in order to follow the roadmap speed and requirements; for example: managing defects and CD in a one-step algorithm, customizing algorithms and output features for each R&D team environment, and providing software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reduction, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure that can integrate specific metrology business modules. For example, we will show the integration of a chemistry module dedicated to electronic materials such as Directed Self-Assembly features. We will show a new generation of image analysis algorithms able to manage defect rates, image classification, CD and roughness measurements at the same time, with high-throughput performance compatible with HVM. In a second part, we will assess the reliability, the customizability of the algorithms and the capability of the software platform to meet new semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we will demonstrate how such an environment has allowed a drastic reduction of the data analysis cycle time.
Integrated Controls-Structures Design Methodology for Flexible Spacecraft
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Joshi, S. M.; Price, D. B.
1995-01-01
This paper proposes an approach for the design of flexible spacecraft, wherein the structural design and the control system design are performed simultaneously. The integrated design problem is posed as an optimization problem in which both the structural parameters and the control system parameters constitute the design variables, which are used to optimize a common objective function, thereby resulting in an optimal overall design. The approach is demonstrated by application to the integrated design of a geostationary platform, and to a ground-based flexible structure experiment. The numerical results obtained indicate that the integrated design approach generally yields spacecraft designs that are substantially superior to the conventional approach, wherein the structural design and control design are performed sequentially.
A novel dynamic wavelength bandwidth allocation scheme over OFDMA PONs
NASA Astrophysics Data System (ADS)
Yan, Bo; Guo, Wei; Jin, Yaohui; Hu, Weisheng
2011-12-01
With the rapid growth of Internet applications, supporting differentiated services and enlarging system capacity have become new tasks for next-generation access systems. In recent years, research on OFDMA Passive Optical Networks (PONs) has developed rapidly owing to their large capacity and scheduling flexibility. Although much work has been done to overcome hardware-layer obstacles for OFDMA PON, scheduling algorithms for OFDMA PON systems are still at an early stage of discussion. In order to support QoS on an OFDMA PON system, a novel dynamic wavelength bandwidth allocation (DWBA) algorithm is proposed in this paper. Per-stream QoS is supported in this algorithm. Through simulation, we show that our bandwidth allocation algorithm performs better in bandwidth utilization and differentiated service support.
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)
1992-01-01
High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.
Dynamically Reconfigurable Systolic Array Accelerators
NASA Technical Reports Server (NTRS)
Dasu, Aravind (Inventor); Barnes, Robert C. (Inventor)
2014-01-01
A polymorphic systolic array framework that works in conjunction with an embedded microprocessor on an FPGA, allowing dynamic and complementary scaling of the acceleration levels of two algorithms active concurrently on the FPGA. Use is made of systolic arrays and hardware-software co-design to obtain an efficient multi-application acceleration system. The flexible and simple framework allows hosting of a broader range of algorithms and is extendable to more complex applications in the area of aerospace embedded systems.
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)
1995-01-01
High-speed, analog, fully-parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific-coprocessors for solving real-world problems at extremely high data rates.
NASA Astrophysics Data System (ADS)
Zhang, Junzhi; Li, Yutong; Lv, Chen; Gou, Jinfang; Yuan, Ye
2017-03-01
The flexibility of the electrified powertrain system has a negative effect on the cooperative control of regenerative and hydraulic braking and on the active damping control performance. Meanwhile, the connections among sensors, controllers, and actuators are realized via network communication, i.e., a controller area network (CAN), which introduces time-varying delays and deteriorates the performance of the closed-loop control systems. The goal of this paper is therefore to develop a control algorithm that copes with all these challenges. To this end, models of the stochastic network-induced time-varying delays, based on a real in-vehicle network topology, and of a flexible electrified powertrain were first built. To further enhance the performance of active damping and of the cooperative control of regenerative and hydraulic braking, a time-varying delay compensation algorithm for electrified powertrain active damping during regenerative braking was developed based on a predictive scheme. The augmented system is constructed and its H∞ performance is analyzed; based on this analysis, the control gains are derived by solving a nonlinear minimization problem. Simulations and hardware-in-the-loop (HIL) tests were carried out to validate the effectiveness of the developed algorithm. The test results show that the active damping and cooperative control performances are enhanced significantly.
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics processing unit (GPU) parallel framework, the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.
Emergence of an optimal search strategy from a simple random walk
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2013-01-01
In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445
Emergence of an optimal search strategy from a simple random walk.
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2013-09-06
In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
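The modified walk itself is not specified in enough detail in these abstracts to reproduce, but the baseline it is compared against, a fixed-step simple random walk, and the mean-squared-displacement exponent used to diagnose super-diffusion are standard and easy to sketch in Python (alpha near 1 indicates normal diffusion, above 1 super-diffusion):

    import numpy as np
    rng = np.random.default_rng(1)

    def simple_walk(steps, step_length=1.0):
        """2-D random walk with uniformly random headings and a fixed step length."""
        angles = rng.uniform(0, 2 * np.pi, steps)
        return np.cumsum(np.column_stack([np.cos(angles), np.sin(angles)]) * step_length, axis=0)

    def msd_exponent(xy, lags=(1, 2, 4, 8, 16, 32, 64)):
        """Fit MSD(t) ~ t^alpha from displacements at several time lags."""
        msd = [np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1)) for lag in lags]
        alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
        return alpha

    walks = [simple_walk(5000) for _ in range(50)]
    print(np.mean([msd_exponent(w) for w in walks]))   # close to 1 for the simple walk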
LPV Modeling of a Flexible Wing Aircraft Using Modal Alignment and Adaptive Gridding Methods
NASA Technical Reports Server (NTRS)
Al-Jiboory, Ali Khudhair; Zhu, Guoming; Swei, Sean Shan-Min; Su, Weihua; Nguyen, Nhan T.
2017-01-01
One of the earliest approaches in gain-scheduling control is the gridding-based approach, in which a set of local linear time-invariant models is obtained at various gridded points corresponding to the varying parameters within the flight envelope. In order to ensure smooth and effective Linear Parameter-Varying control, aligning all the flexible modes within each local model and maintaining a small number of representative local models over the gridded parameter space are crucial. In addition, since the flexible structural models tend to have large dimensions, a tractable model reduction process is necessary. In this paper, the notions of the s-shifted H2- and H∞-norms are introduced and used as metrics to measure the model mismatch. A new modal alignment algorithm is developed which utilizes the defined metric for aligning all the local models over the entire gridded parameter space. Furthermore, an Adaptive Grid Step Size Determination algorithm is developed to minimize the number of local models required to represent the gridded parameter space. For model reduction, we propose to utilize the concept of Composite Modal Cost Analysis, through which the collective contribution of each flexible mode is computed and ranked. A reduced-order model is then constructed by retaining only those modes with significant contribution. The NASA Generic Transport Model operating at various flight speeds is studied for verification purposes, and the analysis and simulation results demonstrate the effectiveness of the proposed modeling approach.
Zhou, Wenbin; Fan, Qingxia; Zhang, Qiang; Cai, Le; Li, Kewei; Gu, Xiaogang; Yang, Feng; Zhang, Nan; Wang, Yanchun; Liu, Huaping; Zhou, Weiya; Xie, Sishen
2017-01-01
It is a great challenge to substantially improve the practical performance of flexible thermoelectric modules because of the absence of air-stable n-type thermoelectric materials with high power factor. Here an excellent flexible n-type thermoelectric film is developed, which can be conveniently and rapidly prepared from as-grown carbon nanotube continuous networks with high conductivity. The optimum n-type film exhibits an ultrahigh power factor of ∼1,500 μW m^-1 K^-2 and outstanding stability in air without encapsulation. Inspired by these findings, we design and successfully fabricate compact-configuration flexible thermoelectric modules, which offer great advantages over conventional π-type configuration modules and integrate well the superior thermoelectric properties of p-type and n-type carbon nanotube films, resulting in a markedly high performance. Moreover, the results are highly scalable and open opportunities for the large-scale production of flexible thermoelectric modules. PMID:28337987
Modeling, Control, and Estimation of Flexible, Aerodynamic Structures
NASA Astrophysics Data System (ADS)
Ray, Cody W.
Engineers have long been inspired by nature’s flyers. Such animals navigate complex environments gracefully and efficiently by using a variety of evolutionary adaptations for high-performance flight. Biologists have discovered a variety of sensory adaptations that provide flow state feedback and allow flying animals to feel their way through flight. A specialized skeletal wing structure and plethora of robust, adaptable sensory systems together allow nature’s flyers to adapt to myriad flight conditions and regimes. In this work, motivated by biology and the successes of bio-inspired, engineered aerial vehicles, linear quadratic control of a flexible, morphing wing design is investigated, helping to pave the way for truly autonomous, mission-adaptive craft. The proposed control algorithm is demonstrated to morph a wing into desired positions. Furthermore, motivated specifically by the sensory adaptations organisms possess, this work transitions to an investigation of aircraft wing load identification using structural response as measured by distributed sensors. A novel, recursive estimation algorithm is utilized to recursively solve the inverse problem of load identification, providing both wing structural and aerodynamic states for use in a feedback control, mission-adaptive framework. The recursive load identification algorithm is demonstrated to provide accurate load estimate in both simulation and experiment.
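The dissertation's recursive load-identification scheme is not reproduced here; as a hedged illustration of the general idea, a standard exponentially weighted recursive least-squares (RLS) update can track load coefficients from streaming sensor regressions. The load basis, sensor model and parameters in this Python sketch are hypothetical.

    import numpy as np

    class RecursiveLeastSquares:
        """Standard exponentially weighted RLS for y_k ≈ phi_k^T theta."""
        def __init__(self, n, forgetting=0.99):
            self.theta = np.zeros(n)
            self.P = np.eye(n) * 1e3
            self.lam = forgetting

        def update(self, phi, y):
            phi = np.asarray(phi, dtype=float)
            Pphi = self.P @ phi
            gain = Pphi / (self.lam + phi @ Pphi)
            self.theta = self.theta + gain * (y - phi @ self.theta)
            self.P = (self.P - np.outer(gain, Pphi)) / self.lam
            return self.theta

    # Hypothetical: two load basis coefficients observed through a strain sensor.
    rng = np.random.default_rng(2)
    true_loads = np.array([4.0, -1.5])
    rls = RecursiveLeastSquares(2)
    for _ in range(500):
        phi = rng.normal(size=2)                     # sensitivity of this sample to each load
        y = phi @ true_loads + 0.01 * rng.normal()   # noisy strain measurement
        estimate = rls.update(phi, y)
    print(estimate)   # converges to approximately [4.0, -1.5]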
A-VCI: A flexible method to efficiently compute vibrational spectra
NASA Astrophysics Data System (ADS)
Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2017-06-01
The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate up to date computation; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today on such systems.
An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU
Xu, Hailong; Cui, Xiaowei; Lu, Mingquan
2016-01-01
Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed is a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications. PMID:26978363
A-VCI: A flexible method to efficiently compute vibrational spectra.
Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2017-06-07
The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate up to date computation; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today on such systems.
An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.
Xu, Hailong; Cui, Xiaowei; Lu, Mingquan
2016-03-11
Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed is a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications.
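Neither the testbed's STAP/SFAP processing chains nor its GPU batching are reproduced here; as a hedged, minimal Python illustration of one adaptive-array building block, the sketch below forms sample-matrix-inversion MVDR weights for a uniform linear array, placing a null on a strong jammer while keeping unit gain toward an assumed satellite direction. Geometry, powers and directions are invented for the example.

    import numpy as np
    rng = np.random.default_rng(3)

    def steering_vector(n_elems, theta_deg, spacing=0.5):
        """Steering vector of a uniform linear array with half-wavelength spacing."""
        k = 2 * np.pi * spacing * np.arange(n_elems)
        return np.exp(1j * k * np.sin(np.radians(theta_deg)))

    n, snaps = 8, 2000
    jammer = steering_vector(n, 40.0)
    noise = (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps))) / np.sqrt(2)
    jam_wave = (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)) / np.sqrt(2)
    snapshots = 10.0 * jammer[:, None] * jam_wave + noise     # jammer about 20 dB above noise

    R = snapshots @ snapshots.conj().T / snaps                # sample covariance (SMI)
    s = steering_vector(n, 0.0)                               # assumed look direction
    w = np.linalg.solve(R, s)
    w /= s.conj() @ w                                         # MVDR: unit gain toward s

    def gain_db(w, theta):
        return 20 * np.log10(abs(w.conj() @ steering_vector(len(w), theta)) + 1e-12)

    print("gain toward look direction:", gain_db(w, 0.0), "dB")    # about 0 dB
    print("gain toward jammer:        ", gain_db(w, 40.0), "dB")   # deep null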
A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System
NASA Astrophysics Data System (ADS)
Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji
The capabilities and complexity of manufacturing systems are increasing, driving the move toward an integrated manufacturing environment. The availability of alternative process plans is a key factor for the integration of design, process planning and scheduling. This paper describes an algorithm for the generation of alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search starts. The algorithm is therefore applicable to very large process plan networks and can search wide areas of the network based on the user requirements. It can generate alternative process plans and select a suitable one based on the objective functions.
Development of a new time domain-based algorithm for train detection and axle counting
NASA Astrophysics Data System (ADS)
Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.
2015-12-01
This paper presents an innovative train detection algorithm able to perform train localisation and, at the same time, to estimate its speed, its crossing times at a fixed point of the track and the number of axles. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simple and less invasive than standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating condition (fundamental for verifying the algorithm's accuracy and robustness). The railway vehicle chosen as a benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
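The core cross-correlation step lends itself to a short illustration: two signals measured a known distance apart along the track are cross-correlated, and the lag of the correlation peak gives the transit time and hence the speed. The Python sketch below uses synthetic pulse trains and invented parameters; it is not the authors' implementation.

    import numpy as np

    def estimate_speed(sig_a, sig_b, sensor_spacing_m, fs_hz):
        """Speed from the lag of the cross-correlation peak between two sensors."""
        a, b = sig_a - np.mean(sig_a), sig_b - np.mean(sig_b)
        xcorr = np.correlate(b, a, mode="full")
        lag = np.argmax(xcorr) - (len(a) - 1)          # samples by which b trails a
        return sensor_spacing_m * fs_hz / lag

    # Synthetic example: the same two axle pulses arrive 0.2 s later at sensor B.
    fs, spacing, delay = 1000, 10.0, 0.2               # Hz, metres, seconds
    t = np.arange(0, 2, 1 / fs)
    pulse = lambda t0: np.exp(-((t - t0) / 0.02) ** 2)
    sig_a = pulse(0.5) + pulse(0.56) + 0.05 * np.random.randn(len(t))
    sig_b = pulse(0.5 + delay) + pulse(0.56 + delay) + 0.05 * np.random.randn(len(t))
    print(estimate_speed(sig_a, sig_b, spacing, fs))   # approximately 50 m/s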
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.
2011-01-01
Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good amount of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176
Goal Selection for Embedded Systems with Oversubscribed Resources
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2010-01-01
We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
Onboard Run-Time Goal Selection for Autonomous Operations
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2010-01-01
We describe an efficient, online goal selection algorithm for use onboard spacecraft and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
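The two abstracts above describe near-optimal selection of fully specified goal requests under oversubscribed resources; the actual onboard algorithm is not given here, but a first-cut greedy stand-in, accepting goals in priority order only when every resource they need still fits, is easy to sketch in Python with hypothetical goals and capacities:

    def select_goals(goals, capacities):
        """Greedy one-pass selection: accept a goal only if all of its resource
        needs fit within the remaining capacities; higher priority goes first."""
        remaining = dict(capacities)
        selected = []
        for goal in sorted(goals, key=lambda g: g["priority"], reverse=True):
            if all(goal["needs"].get(r, 0) <= remaining[r] for r in remaining):
                selected.append(goal["name"])
                for r, amount in goal["needs"].items():
                    remaining[r] -= amount
        return selected, remaining

    goals = [
        {"name": "image_site_A", "priority": 9, "needs": {"energy": 40, "memory": 300}},
        {"name": "image_site_B", "priority": 7, "needs": {"energy": 35, "memory": 500}},
        {"name": "downlink",     "priority": 8, "needs": {"energy": 20, "memory": 0}},
        {"name": "calibration",  "priority": 3, "needs": {"energy": 15, "memory": 100}},
    ]
    print(select_goals(goals, {"energy": 80, "memory": 600}))
    # (['image_site_A', 'downlink', 'calibration'], {'energy': 5, 'memory': 200})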
Complexity of the Quantum Adiabatic Algorithm
NASA Technical Reports Server (NTRS)
Hen, Itay
2013-01-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.
Automated detection of open magnetic field regions in EUV images
NASA Astrophysics Data System (ADS)
Krista, Larisza Diana; Reinard, Alysha
2016-05-01
Open magnetic regions on the Sun are either long-lived (coronal holes) or transient (dimmings) in nature, but both appear as dark regions in EUV images. For this reason their detection can be done in a similar way. As coronal holes are often large and long-lived in comparison to dimmings, their detection is more straightforward. The Coronal Hole Automated Recognition and Monitoring (CHARM) algorithm detects coronal holes using EUV images and a magnetogram. The EUV images are used to identify dark regions, and the magnetogram allows us to determine if the dark region is unipolar - a characteristic of coronal holes. There is no temporal sensitivity in this process, since coronal hole lifetimes span days to months. Dimming regions, however, emerge and disappear within hours. Hence, the time and location of a dimming emergence need to be known to successfully identify them and distinguish them from regular coronal holes. Currently, the Coronal Dimming Tracker (CoDiT) algorithm is semi-automated - it requires the dimming emergence time and location as an input. With those inputs we can identify the dimming and track it through its lifetime. CoDiT has also been developed to allow the tracking of dimmings that split or merge - a typical feature of dimmings. The advantage of these particular algorithms is their ability to adapt to detecting different types of open field regions. For coronal hole detection, each full-disk solar image is processed individually to determine a threshold for the image; hence, we are not limited to a single pre-determined threshold. For dimming regions we also allow individual thresholds for each dimming, as they can differ substantially. This flexibility is necessary for a subjective analysis of the studied regions. These algorithms were developed with the goal of allowing us to better understand the processes that give rise to eruptive and non-eruptive open field regions. We aim to study how these regions evolve over time and what environmental factors influence their growth and decay over short and long time periods (days to solar cycles).
Evaluation of Roadway Subsurface Drainage on Rural Routes
DOT National Transportation Integrated Search
2017-09-01
Excess moisture has been identified as a cause for stripping, raveling, debonding, and rutting in flexible pavement [ODOT, 2016a]. The Ohio Department of Transportation (ODOT) has been getting substantially less than the expected 15 year service life...
Texas M-E flexible pavement design system: literature review and proposed framework.
DOT National Transportation Integrated Search
2012-04-01
Recent developments over last several decades have offered an opportunity for more rational and rigorous pavement design procedures. Substantial work has already been completed in Texas, nationally, and internationally, in all aspects of modeling, ma...
The role of proline substitutions within flexible regions on thermostability of luciferase.
Yu, Haoran; Zhao, Yang; Guo, Chao; Gan, Yiru; Huang, He
2015-01-01
Improving the stability of firefly luciferase has been a critical issue for its wider industrial application. Studies of hyperthermophile proteins show that flexibility can be an effective indicator for finding weak spots when engineering the thermostability of proteins; however, the relationship among flexibility, activity and stability is unclear in most proteins. Proline is the most rigid residue and can be introduced to rigidify flexible regions and thereby enhance the thermostability of proteins. We first apply three different methods, molecular dynamics (MD) simulation, B-FITTER and the framework rigidity optimized dynamics algorithm (FRODA), to determine the flexible regions of Photinus pyralis luciferase: Fragment 197-207, Fragment 471-481 and Fragment 487-495. Proline substitutions are then introduced to rigidify these flexible regions, yielding two mutants, D476P and H489P, within the most flexible regions. The H489P mutant shows improved thermostability while maintaining its catalytic efficiency compared to that of wild-type luciferase. Flexibility analysis confirms that the overall and local rigidity of the H489P mutant are greatly strengthened. The D476P mutant shows decreased thermostability, and the reason for this is elucidated at the molecular level. The S307P mutation, chosen at random outside the flexible regions, serves as a control; thermostability analysis shows that it has decreased kinetic stability and enhanced thermodynamic stability. Copyright © 2014 Elsevier B.V. All rights reserved.
Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery
Kim, Daeun; Haldar, Justin P.
2016-01-01
This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
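The family of algorithms is not reproduced in detail here; the hedged Python sketch below shows one simultaneous-OMP-style loop with both constraints: the support index whose nonnegative correlation summed across all measurement vectors is largest is added at each step, and the coefficients on the current support are re-fit by nonnegative least squares (SciPy's nnls is assumed available). The selection rule and stopping criterion are simplified choices, not the paper's exact algorithms.

    import numpy as np
    from scipy.optimize import nnls

    def nn_simultaneous_omp(A, Y, sparsity):
        """Greedy recovery of X >= 0 with a shared row support from Y ≈ A X."""
        n, L = A.shape[1], Y.shape[1]
        support, R = [], Y.copy()
        for _ in range(sparsity):
            score = np.maximum(A.T @ R, 0).sum(axis=1)   # keep only nonnegative evidence
            score[support] = -np.inf                     # do not reselect indices
            support.append(int(np.argmax(score)))
            X_sup = np.column_stack([nnls(A[:, support], Y[:, l])[0] for l in range(L)])
            R = Y - A[:, support] @ X_sup
        X = np.zeros((n, L))
        X[support, :] = X_sup
        return X, sorted(support)

    rng = np.random.default_rng(4)
    A = rng.standard_normal((60, 120))
    X_true = np.zeros((120, 3))
    X_true[[10, 45, 90], :] = rng.uniform(1, 3, size=(3, 3))   # shared, nonnegative support
    Y = A @ X_true + 0.01 * rng.standard_normal((60, 3))
    X_hat, supp = nn_simultaneous_omp(A, Y, sparsity=3)
    print(supp)   # typically recovers [10, 45, 90]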
Extracting harmonic signal from a chaotic background with local linear model
NASA Astrophysics Data System (ADS)
Li, Chenlong; Su, Liyun
2017-02-01
In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods based on a local linear (LL) model are put forward. The LL model has been exhaustively researched and successfully applied to fitting and forecasting chaotic signals in many fields, and we substantially enlarge its modeling capacity here. First, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect the frequencies from the fitting error by periodogram; a previously unaddressed property of the fitting error is proposed, and this property ensures that the detected frequencies are similar to those of the harmonic signal. Second, we establish a two-layer LL model to estimate the determinate harmonic signal in a strong chaotic background. To do this simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters that are difficult to search exhaustively. In the method, exploiting the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to estimate the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility to model the determinate harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the developed backfitting algorithm converges within 3-5 iterations.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.
Quantitative analysis of the flexibility effect of cisplatin on circular DNA
NASA Astrophysics Data System (ADS)
Ji, Chao; Zhang, Lingyun; Wang, Peng-Ye
2013-10-01
We study the effects of cisplatin on the circular configuration of DNA using atomic force microscopy (AFM) and observe that the DNA gradually transforms from a circle-like structure to a complex configuration with an intersection and interwound structures. An algorithm is developed to extract the configuration profiles of circular DNA from AFM images, and the radius of gyration is used to describe the flexibility of circular DNA. The quantitative analysis of the circular DNA demonstrates that the radius of gyration gradually decreases, and two distinct processes in the change of flexibility of circular DNA are found as the cisplatin concentration increases. Furthermore, a model is proposed and discussed to explain the mechanism for understanding the complicated interaction between DNA and cisplatin.
Tax Reform Act of 1986: implications and trends.
Harris, R F
1988-10-01
The Tax Reform Act of 1986 contains several changes that substantially reduce economic flexibility for not-for-profit hospitals and healthcare systems. These changes, involving limited partnerships, investment tax credit, depreciation, and income deferral plans, among other items, carry several implications. Tax-motivated joint ventures will no longer be attractive to physician investors, donations to hospitals are expected to decline by up to 15 percent, and flexibility in attracting and retaining high-caliber employees is reduced. Efforts to reduce the federal budget deficit and renewed scrutiny of unrelated business income further jeopardize economic flexibility. Another threat is intensified Internal Revenue Service scrutiny of Form 990, which is filed by all not-for-profit organizations with $25,000 or more in annual gross receipts, and Form 990T, which is used to report unrelated business income. Measures to protect facilities' economic flexibility include careful return preparation, alternative recruitment tactics, objective opinions, refusal of high-risk deals, and outside appraisals.
Fabrication of fully transparent nanowire transistors for transparent and flexible electronics
NASA Astrophysics Data System (ADS)
Ju, Sanghyun; Facchetti, Antonio; Xuan, Yi; Liu, Jun; Ishikawa, Fumiaki; Ye, Peide; Zhou, Chongwu; Marks, Tobin J.; Janes, David B.
2007-06-01
The development of optically transparent and mechanically flexible electronic circuitry is an essential step in the effort to develop next-generation display technologies, including `see-through' and conformable products. Nanowire transistors (NWTs) are of particular interest for future display devices because of their high carrier mobilities compared with bulk or thin-film transistors made from the same materials, the prospect of processing at low temperatures compatible with plastic substrates, as well as their optical transparency and inherent mechanical flexibility. Here we report fully transparent In2O3 and ZnO NWTs fabricated on both glass and flexible plastic substrates, exhibiting high-performance n-type transistor characteristics with ~82% optical transparency. These NWTs should be attractive as pixel-switching and driving transistors in active-matrix organic light-emitting diode (AMOLED) displays. The transparency of the entire pixel area should significantly enhance aperture ratio efficiency in active-matrix arrays and thus substantially decrease power consumption.
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include a vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
Synthetic aperture radar signal data compression using block adaptive quantization
NASA Technical Reports Server (NTRS)
Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian
1994-01-01
This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
Thrust vector control algorithm design for the Cassini spacecraft
NASA Technical Reports Server (NTRS)
Enright, Paul J.
1993-01-01
This paper describes a preliminary design of the thrust vector control algorithm for the interplanetary spacecraft, Cassini. Topics of discussion include flight software architecture, modeling of sensors, actuators, and vehicle dynamics, and controller design and analysis via classical methods. Special attention is paid to potential interactions with structural flexibilities and propellant dynamics. Controller performance is evaluated in a simulation environment built around a multi-body dynamics model, which contains nonlinear models of the relevant hardware and preliminary versions of supporting attitude determination and control functions.
An Overview of the Automated Dispatch Controller Algorithms in the System Advisor Model (SAM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiOrio, Nicholas A
2017-11-22
Three automatic dispatch modes have been added to the battery model within the System Advisor Model. These controllers have been developed to perform peak shaving in an automated fashion, providing users with a way to see the benefit of reduced demand charges without manually programming a complicated dispatch control. A flexible input option allows more advanced interaction with the automated controller. This document will describe the algorithms in detail and present brief results on their use and limitations.
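The core of an automated peak-shaving dispatch can be illustrated in a few lines: discharge the battery whenever the load exceeds a demand target, and recharge when there is headroom. The function below is a toy sketch under assumed names and simplifications (perfect round-trip efficiency, a single fixed target), not SAM's actual controller.

    def peak_shave(load_kw, target_kw, capacity_kwh, power_kw, dt_hr=1.0):
        # Returns the grid demand after dispatching the battery to cap load at target_kw.
        soc, grid = capacity_kwh, []
        for load in load_kw:
            if load > target_kw:                 # discharge to shave the peak
                p = min(load - target_kw, power_kw, soc / dt_hr)
            else:                                # recharge within the demand target
                p = -min(target_kw - load, power_kw, (capacity_kwh - soc) / dt_hr)
            soc -= p * dt_hr
            grid.append(load - p)
        return grid

    # peak_shave([20, 80, 120, 60], target_kw=90, capacity_kwh=50, power_kw=40)
    # -> [20.0, 80.0, 90, 90]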
Sliding GAIT Algorithm for the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE)
NASA Technical Reports Server (NTRS)
Townsend, Julie; Biesiadecki, Jeffrey
2012-01-01
The design of a surface robotic system typically involves a trade between the traverse speed of a wheeled rover and the terrain-negotiating capabilities of a multi-legged walker. The ATHLETE mobility system, with both articulated limbs and wheels, is uniquely capable of both driving and walking, and has the flexibility to employ additional hybrid mobility modes. This paper introduces the Sliding Gait, an intermediate mobility algorithm faster than walking with better terrain-handling capabilities than wheeled mobility.
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes will be contrasted using the NASA Mini-Mast as the focus structure.
NASA Technical Reports Server (NTRS)
Abbott, David; Batten, Adam; Carpenter, David; Dunlop, John; Edwards, Graeme; Farmer, Tony; Gaffney, Bruce; Hedley, Mark; Hoschke, Nigel; Isaacs, Peter;
2008-01-01
This report describes the first phase of the implementation of the Concept Demonstrator. The Concept Demonstrator system is a powerful and flexible experimental test-bed platform for developing sensors, communications systems, and multi-agent based algorithms for an intelligent vehicle health monitoring system for deployment in aerospace vehicles. The Concept Demonstrator contains sensors and processing hardware distributed throughout the structure, and uses multi-agent algorithms to characterize impacts and determine an appropriate response to these impacts.
Reconfigurable Hardware for Compressing Hyperspectral Image Data
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua
2010-01-01
High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of the FPGAs makes it possible to effectively alter the design to some extent to satisfy different requirements without adding hardware. The implementation could be easily propagated to future FPGA generations and/or to custom application-specific integrated circuits.
A machine learning based approach to identify protected health information in Chinese clinical text.
Du, Liting; Xia, Chenxi; Deng, Zhaohua; Lu, Gary; Xia, Shuxu; Ma, Jingdong
2018-08-01
With the increasing application of electronic health records (EHRs) in the world, protecting private information in clinical text has drawn extensive attention from healthcare providers to researchers. De-identification, the process of identifying and removing protected health information (PHI) from clinical text, has been central to the discourse on medical privacy since 2006. While de-identification is becoming the global norm for handling medical records, there is a paucity of studies on its application on Chinese clinical text. Without efficient and effective privacy protection algorithms in place, the use of indispensable clinical information would be confined. We aimed to (i) describe the current process for PHI in China, (ii) propose a machine learning based approach to identify PHI in Chinese clinical text, and (iii) validate the effectiveness of the machine learning algorithm for de-identification in Chinese clinical text. Based on 14,719 discharge summaries from regional health centers in Ya'an City, Sichuan province, China, we built a conditional random fields (CRF) model to identify PHI in clinical text, and then used the regular expressions to optimize the recognition results of the PHI categories with fewer samples. We constructed a Chinese clinical text corpus with PHI tags through substantial manual annotation, wherein the descriptive statistics of PHI manifested its wide range and diverse categories. The evaluation showed with a high F-measure of 0.9878 that our CRF-based model had a good performance for identifying PHI in Chinese clinical text. The rapid adoption of EHR in the health sector has created an urgent need for tools that can parse patient specific information from Chinese clinical text. Our application of CRF algorithms for de-identification has shown the potential to meet this need by offering a highly accurate and flexible solution to analyzing Chinese clinical text. Copyright © 2018 Elsevier B.V. All rights reserved.
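For readers unfamiliar with the CRF-plus-regular-expression pattern described above, a minimal sketch is shown below. It assumes the third-party sklearn_crfsuite package and character-level features (Chinese text has no whitespace word boundaries); the feature set, hyperparameters and the ID-like regular expression are illustrative assumptions, not the authors' configuration.

    import re
    import sklearn_crfsuite   # assumed third-party CRF package

    def char_features(sent, i):
        # Character-level features for position i of a sentence string (illustrative).
        c = sent[i]
        return {"char": c,
                "is_digit": c.isdigit(),
                "prev": sent[i - 1] if i > 0 else "<s>",
                "next": sent[i + 1] if i < len(sent) - 1 else "</s>"}

    def train_crf(sentences, labels):
        # sentences: list of strings; labels: list of per-character PHI tag lists.
        X = [[char_features(s, i) for i in range(len(s))] for s in sentences]
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
        crf.fit(X, labels)
        return crf

    # Regular expressions can then patch categories with few training samples,
    # e.g. flagging long digit runs as candidate ID/telephone PHI (illustrative):
    ID_LIKE = re.compile(r"\d{7,}")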
Cooperative pulses for pseudo-pure state preparation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Daxiu; Chang, Yan; Yang, Xiaodong, E-mail: steffen.glaser@tum.de, E-mail: xiaodong.yang@sibet.ac.cn
2014-06-16
Using an extended version of the optimal-control-based gradient ascent pulse engineering algorithm, cooperative (COOP) pulses are designed for multi-scan experiments to prepare pseudo-pure states in quantum computation. COOP pulses can cancel undesired signal contributions, complementing and generalizing phase cycles. They also provide more flexibility and, in particular, eliminate the need to select specific individual target states, achieving fidelities at the theoretical limit by flexibly choosing an appropriate number of scans and pulse durations. The COOP approach is experimentally demonstrated for three-qubit and four-qubit systems.
NASA Technical Reports Server (NTRS)
Reichard, Karl M.; Lindner, Douglas K.; Claus, Richard O.
1991-01-01
Modal domain optical fiber sensors have recently been employed in the implementation of system identification algorithms and the closed-loop control of vibrations in flexible structures. The mathematical model of the modal domain optical fiber sensor used in these applications, however, only accounted for the effects of strain in the direction of the fiber's longitudinal axis. In this paper, we extend this model to include the effects of arbitrary stress. Using this sensor model, we characterize the sensor's sensitivity and dynamic range.
Post-processing interstitialcy diffusion from molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Bhardwaj, U.; Bukkuru, S.; Warrier, M.
2016-01-01
An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization which obviate the need to input extra domain specific information depending on crystal or temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights for the interstitialcy diffusion mechanism. The algorithm along with the help of supporting visualizations and analysis gives convincing details and a new approach to quantifying diffusion jumps, jump-lengths, time between jumps and to identify interstitials from lattice atoms.
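Once such a post-processor has traced the defect trajectory, the diffusion coefficient follows from the mean squared displacement via the Einstein relation (MSD = 6Dt in three dimensions); the short sketch below shows that final step. The array layout and estimator are illustrative assumptions, not the framework's actual interface.

    import numpy as np

    def diffusion_coefficient(positions, times):
        # positions: (N, 3) traced defect positions; times: (N,) simulation times.
        disp = positions - positions[0]
        msd = (disp ** 2).sum(axis=1)             # squared displacement from the start
        t = times - times[0]
        slope = np.polyfit(t[1:], msd[1:], 1)[0]  # least-squares slope of MSD vs time
        return slope / 6.0                        # Einstein relation in 3-D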
NASA Astrophysics Data System (ADS)
Wang, Fu; Liu, Bo; Zhang, Lijia; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun
2017-07-01
Elastic software-defined optical networks greatly improve the flexibility of the optical switching network but bring challenges to routing and spectrum assignment (RSA). A multilayer virtual topology model is proposed to solve RSA problems. Two RSA algorithms based on the virtual topology are proposed: the ant colony optimization (ACO) algorithm of minimum consecutiveness loss and the ACO algorithm of maximum spectrum consecutiveness. Owing to the computing power of the control layer in the software-defined network, the routing algorithm avoids frequent exchange of link-state information between routers. Based on the effect of the spectrum consecutiveness loss on the pheromone in the ACO, the path and spectrum with the minimal impact on the network are selected for the service request. The proposed algorithms have been compared with other algorithms. The results show that the proposed algorithms can reduce the blocking rate by at least 5% and perform better in spectrum efficiency. Moreover, the proposed algorithms can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness.
Flexible methods for segmentation evaluation: results from CT-based luggage screening.
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2014-01-01
Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our objective was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
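One simple way to make oversegmentation and undersegmentation concrete is to count, for each reference object, how many predicted segments cover a meaningful fraction of it, and vice versa. The sketch below does that for integer label volumes; the overlap threshold and function name are illustrative assumptions and far coarser than the statistical and information-theoretic measures developed in the paper.

    import numpy as np

    def split_counts(src_labels, dst_labels, min_frac=0.1):
        # For each nonzero region of src_labels, count how many nonzero regions of
        # dst_labels cover at least min_frac of it (label 0 assumed background).
        counts = {}
        for r in np.unique(src_labels):
            if r == 0:
                continue
            mask = src_labels == r
            overlap = np.bincount(dst_labels[mask])
            overlap[0] = 0                        # ignore overlap with background
            counts[int(r)] = int((overlap >= min_frac * mask.sum()).sum())
        return counts

    # over  = split_counts(gt, pred)   # counts > 1: ground-truth object was split
    # under = split_counts(pred, gt)   # counts > 1: predicted segment merged objects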
Image analysis of multiple moving wood pieces in real time
NASA Astrophysics Data System (ADS)
Wang, Weixing
2006-02-01
This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for auto-detection of wood piece materials on a moving conveyor belt or a truck. When wood objects are moving, the hard task is to trace the contours of the objects in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, which mainly consists of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.
NPLOT: an Interactive Plotting Program for NASTRAN Finite Element Models
NASA Technical Reports Server (NTRS)
Jones, G. K.; Mcentire, K. J.
1985-01-01
NPLOT (NASTRAN Plot) is an interactive computer graphics program for plotting undeformed and deformed NASTRAN finite element models. Developed at NASA's Goddard Space Flight Center, the program provides flexible element selection and grid point, ASET and SPC degree-of-freedom labelling. It is easy to use and provides a combined menu- and command-driven user interface. NPLOT also provides very fast hidden line and haloed line algorithms. The hidden line algorithm in NPLOT proved to be both very accurate and several times faster than other existing hidden line algorithms. A fast spatial bucket sort and horizon edge computation are used to achieve this high level of performance. The hidden line and the haloed line algorithms are the primary features that make NPLOT different from other plotting programs.
A Scheduling Algorithm for Replicated Real-Time Tasks
NASA Technical Reports Server (NTRS)
Yu, Albert C.; Lin, Kwei-Jay
1991-01-01
We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerant requirement. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, the redundancy and masking technique are more appropriate than the rollback techniques which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task with the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
NASA Astrophysics Data System (ADS)
Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee
2017-07-01
This paper presents the metaheuristic Cuckoo Search Algorithm (CSA) for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be the most efficient algorithm for solving single-objective optimal power flow problems. The CSA performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling bus voltage magnitudes by injecting reactive power into the system. In this paper, the SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost. The SVC is used to improve the voltage profile of the system. The CSA gives better results than the genetic algorithm (GA) both without and with the SVC.
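For orientation, a bare-bones Cuckoo Search loop (Yang and Deb style, with Levy-flight moves and random abandonment of a fraction of nests) is sketched below for a generic bounded cost function such as total generation cost. The parameter values, bounds handling and random seed are illustrative assumptions; they are not the paper's tuned settings, and no power-flow or SVC model is included.

    import math
    import numpy as np

    def cuckoo_search(cost, lb, ub, n_nests=25, pa=0.25, iters=200, alpha=0.01):
        # Minimize cost(x) subject to lb <= x <= ub (arrays of equal length).
        rng = np.random.default_rng(0)
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        nests = rng.uniform(lb, ub, (n_nests, lb.size))
        fit = np.array([cost(x) for x in nests])
        beta = 1.5                                 # Levy exponent (Mantegna's method)
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                 (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        for _ in range(iters):
            best = nests[np.argmin(fit)]
            step = (rng.normal(0, sigma, nests.shape) /
                    np.abs(rng.normal(0, 1, nests.shape)) ** (1 / beta))
            new = np.clip(nests + alpha * step * (nests - best), lb, ub)  # Levy flights
            new_fit = np.array([cost(x) for x in new])
            better = new_fit < fit
            nests[better], fit[better] = new[better], new_fit[better]
            worst = rng.random(n_nests) < pa                              # abandon nests
            nests[worst] = rng.uniform(lb, ub, (int(worst.sum()), lb.size))
            fit[worst] = [cost(x) for x in nests[worst]]
        i = int(np.argmin(fit))
        return nests[i], fit[i]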
Image-algebraic design of multispectral target recognition algorithms
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
1994-06-01
In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.
A joint tracking method for NSCC based on WLS algorithm
NASA Astrophysics Data System (ADS)
Luo, Ruidan; Xu, Ying; Yuan, Hong
2017-12-01
The navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and configurable scheme parameters, which give it significant navigation-augmentation efficiency over legacy navigation signals in terms of spectral efficiency, tracking accuracy, multipath mitigation capability and anti-jamming performance. Meanwhile, the typical scheme characteristics can provide auxiliary information for signal synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the Weighted Least Squares (WLS) algorithm. In this method, the LS algorithm is employed to jointly estimate each sub-carrier frequency shift by exploiting the linear frequency-Doppler relationship and the known sub-carrier frequencies. In addition, the weighting matrix is set adaptively according to the sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results illustrate that the tracking accuracy and sensitivity of this method outperform those of the single-carrier algorithm at lower SNR.
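The weighted least-squares step itself is standard: with observation matrix A, measurement vector y and a diagonal weight matrix W built from the sub-carrier powers, the estimate is x = (A^T W A)^{-1} A^T W y. The sketch below, including the single-Doppler-parameter example in the comments, uses illustrative names and numbers rather than the paper's signal model.

    import numpy as np

    def wls(A, y, w):
        # Weighted least-squares solution of y ~= A @ x with per-observation weights w.
        W = np.diag(w)
        return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

    # Illustrative joint estimate of a single Doppler factor: each sub-carrier's
    # frequency shift is modeled as (known sub-carrier frequency) x (Doppler factor).
    # f_sub  = np.array([1.0, 2.0, 4.0])      # known sub-carrier frequencies (arbitrary units)
    # shifts = np.array([0.11, 0.19, 0.42])   # measured frequency shifts
    # power  = np.array([1.0, 0.5, 2.0])      # weights derived from sub-carrier power
    # doppler = wls(f_sub[:, None], shifts, power)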
Underwater seismic source. [for petroleum exploration
NASA Technical Reports Server (NTRS)
Yang, L. C. (Inventor)
1979-01-01
Apparatus for generating a substantially oscillation-free seismic signal for use in underwater petroleum exploration, including a bag with walls that are flexible but substantially inelastic, and a pressurized gas supply for rapidly expanding the bag to its fully expanded condition, is described. The inelasticity of the bag permits the application of high-pressure gas to rapidly expand it to full size, without requiring a venting mechanism to decrease the pressure as the bag approaches a predetermined size to avoid breaking of the bag.
Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model
NASA Astrophysics Data System (ADS)
Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled
2018-03-01
The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems, the assignment and the scheduling problems. In this paper, we propose to solve the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search in promising regions of the search space and to improve the quality of the NGA final population. The efficiency of our approach is explained by the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and by applying the intensification technique of tabu search, allowing the search to restart from a set of elite solutions and attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark literature instances. New upper bounds are found, showing the effectiveness of the presented approach.
NASA Astrophysics Data System (ADS)
Pan, Bing; Wang, Bo
2017-10-01
Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
Predicting Protein-protein Association Rates using Coarse-grained Simulation and Machine Learning
NASA Astrophysics Data System (ADS)
Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao
2017-04-01
Protein-protein interactions dominate all major biological processes in living cells. We have developed a new Monte Carlo-based simulation algorithm to study the kinetic process of protein association. We tested our method on a previously used large benchmark set of 49 protein complexes. The predicted rate was overestimated in the benchmark test compared to the experimental results for a group of protein complexes. We hypothesized that this resulted from molecular flexibility at the interface regions of the interacting proteins. After applying a machine learning algorithm with input variables that accounted for both the conformational flexibility and the energetic factor of binding, we successfully identified most of the protein complexes with overestimated association rates and improved our final prediction by using a cross-validation test. This method was then applied to a new independent test set and resulted in a similar prediction accuracy to that obtained using the training set. It has been thought that diffusion-limited protein association is dominated by long-range interactions. Our results provide strong evidence that the conformational flexibility also plays an important role in regulating protein association. Our studies provide new insights into the mechanism of protein association and offer a computationally efficient tool for predicting its rate.
Improved Object Detection Using a Robotic Sensing Antenna with Vibration Damping Control
Feliu-Batlle, Vicente; Feliu-Talegon, Daniel; Castillo-Berrio, Claudia Fernanda
2017-01-01
Some insects or mammals use antennae or whiskers to detect by the sense of touch obstacles or recognize objects in environments in which other senses like vision cannot work. Artificial flexible antennae can be used in robotics to mimic this sense of touch in these recognition tasks. We have designed and built a two-degree of freedom (2DOF) flexible antenna sensor device to perform robot navigation tasks. This device is composed of a flexible beam, two servomotors that drive the beam and a load cell sensor that detects the contact of the beam with an object. It is found that the efficiency of such a device strongly depends on the speed and accuracy achieved by the antenna positioning system. These issues are severely impaired by the vibrations that appear in the antenna during its movement. However, these antennae are usually moved without taking care of these undesired vibrations. This article proposes a new closed-loop control schema that cancels vibrations and improves the free movements of the antenna. Moreover, algorithms to estimate the 3D beam position and the instant and point of contact with an object are proposed. Experiments are reported that illustrate the efficiency of these proposed algorithms and the improvements achieved in object detection tasks using a control system that cancels beam vibrations. PMID:28406449
PC-Based Floating Point Imaging Workstation
NASA Astrophysics Data System (ADS)
Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin
1989-07-01
The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. From the designer's standpoint, the challenge of meeting these demands forces a different approach to the problem of imaging. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought-of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful and flexible, floating point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.
NASA Technical Reports Server (NTRS)
Segallis, Greg P.; Wernlund, Jim V.; Corry, Glen
1993-01-01
This report is prepared by Harris Government Communication Systems Division for NASA Lewis Research Center under contract NAS3-25087. It is written in accordance with SOW section 4.0 (d) as detailed in section 2.6. The purpose of this document is to provide a summary of the program, performance results and analysis, and a technical assessment. The purpose of this program was to develop a flexible, high-speed CODEC that provides substantial coding gain while maintaining bandwidth efficiency for use in both continuous and bursted data environments for a variety of applications.
Holden, James Elliott; Perez, Julieta
2001-01-01
A molded, flexible pipe flange cover for temporarily covering a pipe flange and a pipe opening includes a substantially round center portion having a peripheral skirt portion depending from the center portion, the center portion adapted to engage a front side of the pipe flange and to seal the pipe opening. The peripheral skirt portion is formed to include a plurality of circumferentially spaced tabs, wherein free ends of the flexible tabs are formed with respective through passages adapted to receive a drawstring for pulling the tabs together on a back side of the pipe flange.
NASA Technical Reports Server (NTRS)
1981-01-01
Francis M. Rogallo and his wife Gertrude researched flexible, controllable fabric airfoils with a delta (V-shaped) configuration for use on inexpensive private aircraft. They were issued a flex-wing patent and refined their designs. Development of Rogallo wings, used by U.S. Moyes, Inc., substantially broadened the flexible airfoil technology base, which originated from NASA's reentry parachute. The Rogallo technology, particularly the airfoil frame, was incorporated in the design of a kite by John Dickenson. The Dickenson kite served as the prototype for the Australian Moyes line of hang gliders. The company no longer exists.
Hirose, Hitoshi; Sarosiek, Konrad; Cavarocchi, Nicholas C
2014-01-01
Gastrointestinal bleed (GIB) is a known complication in patients receiving nonpulsatile ventricular assist devices (VAD). Previously, we reported a new algorithm for the workup of GIB in VAD patients using deep bowel enteroscopy. In this new algorithm, patients underwent fewer procedures, received fewer transfusions, and took less time to reach a diagnosis than the traditional GIB algorithm group. Concurrently, we reviewed the cost-effectiveness of this new algorithm compared with the traditional workup. The procedure charges for the diagnosis and treatment of each episode of GIB were approximately $2,902 in the new algorithm group versus approximately $9,013 in the traditional algorithm group (p < 0.0001). Following the new algorithm in VAD patients with GIB resulted in fewer transfusions and diagnostic tests while attaining substantial cost savings per episode of bleeding.
NASA Astrophysics Data System (ADS)
Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying
2017-09-01
A systematic dynamic modeling methodology is presented to develop the rigid-flexible coupling dynamic model (RFDM) of an emerging flexible parallel manipulator with multiple actuation modes. By virtue of the assumed mode method, the general dynamic model of an arbitrary flexible body with any number of lumped parameters is derived in an explicit closed form, which possesses a modular characteristic. Then the complete dynamic model of the system is formulated based on flexible multi-body dynamics (FMD) theory and the augmented Lagrangian multipliers method. An approach combining the Udwadia-Kalaba formulation with the hybrid TR-BDF2 numerical algorithm is proposed to address the nonlinear RFDM. Two simulation cases are performed to investigate the dynamic performance of the manipulator with different actuation modes. The results indicate that the redundant actuation modes can effectively attenuate vibration and guarantee higher dynamic performance compared to the traditional non-redundant actuation modes. Finally, a virtual prototype model is developed to demonstrate the validity of the presented RFDM. The systematic methodology proposed in this study can be conveniently extended for the dynamic modeling and controller design of other planar flexible parallel manipulators, especially the emerging ones with multiple actuation modes.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems, multi-mode search spaces with a large number of genes and convoluted Pareto fronts, require a large number of function evaluations for GA convergence, but always converge.
Aerial surveillance based on hierarchical object classification for ground target detection
NASA Astrophysics Data System (ADS)
Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo
2015-03-01
Unmanned aerial vehicles have become important in surveillance applications due to their flexibility and ability to inspect and move across different regions of interest. The instrumentation and autonomy of these vehicles have increased; for example, the camera sensor is now integrated. Mounted cameras allow the flexibility to monitor several regions of interest by displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, the target evidence is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object. The samples are encoded in a hierarchical tree with different sampling densities. The coding space corresponds to a huge binary space, and properties such as independence and associative operators are defined in this space to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate features of the target from general to particular. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, showing that the hierarchical algorithm works efficiently under a variety of conditions.
75 FR 9777 - Magnet Schools Assistance Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-04
... the district-wide average of minority group students. This new flexibility is necessary to permit... elementary and secondary schools'' with substantial proportions of minority students, and ``the development and design of innovative educational methods and practices that promote diversity.'' 20 U.S.C. 7231...
Research of improved banker algorithm
NASA Astrophysics Data System (ADS)
Yuan, Xingde; Xu, Hong; Qiao, Shijiao
2013-03-01
In a multi-process operating system, the resource management strategy of the system is a critical global issue, especially when many processes compete for limited resources, since unreasonable scheduling will cause deadlock. The classical solution to the deadlock problem is the banker's algorithm; however, it has its own deficiencies and can only avoid deadlock to a certain extent. This article aims at reducing unnecessary safety checking and then uses a new allocation strategy to improve the banker's algorithm. Through full analysis and example verification of the new allocation strategy, the results show that the improved banker's algorithm obtains a substantial increase in performance.
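For context, the safety check at the heart of the classical banker's algorithm, which the paper sets out to invoke less often, can be sketched as follows. This is the textbook version with illustrative names, not the paper's improved allocation strategy.

    def is_safe(available, allocation, need):
        # available: free units per resource; allocation, need: one list per process.
        # Returns True if some completion order lets every process finish.
        work = list(available)
        finished = [False] * len(allocation)
        progress = True
        while progress:
            progress = False
            for i, (alloc, nd) in enumerate(zip(allocation, need)):
                if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                    # Pretend process i runs to completion and releases its resources.
                    work = [w + a for w, a in zip(work, alloc)]
                    finished[i] = True
                    progress = True
        return all(finished)

    # A request is granted only if the state left after a trial allocation still
    # passes is_safe(); otherwise the requesting process waits.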
Value of flexible bronchoscopy in the pre-operative work-up of solitary pulmonary nodules.
Schwarz, Carsten; Schönfeld, Nicolas; Bittner, Roland C; Mairinger, Thomas; Rüssmann, Holger; Bauer, Torsten T; Kaiser, Dirk; Loddenkemper, Robert
2013-01-01
The diagnostic value of flexible bronchoscopy in the pre-operative work-up of solitary pulmonary nodules (SPN) is still under debate among pneumologists, radiologists and thoracic surgeons. In a prospective observational manner, flexible bronchoscopy was routinely performed in 225 patients with SPN of unknown origin. Of the 225 patients, 80.5% had lung cancer, 7.6% had metastasis of an extrapulmonary primary tumour and 12% had benign aetiology. Unsuspected endobronchial involvement was found in 4.4% of all 225 patients (or in 5.5% of patients with lung cancer). In addition, flexible bronchoscopy clarified the underlying aetiology in 41% of the cases. The bronchoscopic biopsy results from the SPN were positive in 84 (46.5%) patients with lung cancer. Surgery was cancelled due to the results of flexible bronchoscopy in four cases (involvement of the right main bronchus (impaired pulmonary function did not allow pneumonectomy) n=1, small cell lung cancer n=1, bacterial pneumonia n=2), and the surgical strategy had to be modified to bilobectomy in one patient. Flexible bronchoscopy changed the planned surgical approach in five cases substantially. These results suggest that routine flexible bronchoscopy should be included in the regular pre-operative work-up of patients with SPN.
Development and evaluation of an articulated registration algorithm for human skeleton registration
NASA Astrophysics Data System (ADS)
Yip, Stephen; Perk, Timothy; Jeraj, Robert
2014-03-01
Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting them into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performances of all three registration algorithms were all assessed using the Dice similarity index—DSIobserved, DSIatlas, and DSIregister. The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement with DSIatlas of 90 ± 3%. The articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration, as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, the deformable registration algorithm failed to preserve the local rigidity of bones, as over 60% of the skeletons were deformed. Articulated registration is superior to rigid and deformable registrations by capturing global flexibility while preserving the local rigidity inherent in skeleton registration. Therefore, articulated registration can be employed to accurately register whole-body human skeletons, and it enables the treatment response assessment of various bone diseases.
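Both evaluation metrics used above are straightforward to compute; the sketch below shows a Dice similarity index for a pair of binary bone masks and a symmetric Hausdorff distance between surface point sets using SciPy's directed_hausdorff. Array shapes and names are illustrative assumptions.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(mask_a, mask_b):
        # Dice similarity index between two binary masks (e.g. registered vs. reference bone).
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    def hausdorff(points_a, points_b):
        # Symmetric Hausdorff distance between two (N, 3) sets of surface coordinates.
        return max(directed_hausdorff(points_a, points_b)[0],
                   directed_hausdorff(points_b, points_a)[0])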
SU-F-J-10: Sliding Mode Control of a SMA Actuated Active Flexible Needle for Medical Procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podder, T
Purpose: In medical interventional procedures such as brachytherapy, ablative therapies and biopsy, precise steering and accurate placement of needles are very important for anatomical obstacle avoidance and accurate targeting. This study presents the efficacy of a sliding mode controller for a Shape Memory Alloy (SMA) actuated flexible needle for medical procedures. Methods: Second order system dynamics of the SMA actuated active flexible needle were used for deriving the sliding mode control equations. Both proportional-integral-derivative (PID) and adaptive PID sliding mode control (APIDSMC) algorithms were developed and implemented. The flexible needle was attached at the end of a 6 DOF robotic system. Through the LabView programming environment, the control commands were generated using the PID and APIDSMC algorithms. Experiments with an artificial tissue-mimicking phantom were performed to evaluate the performance of the controller. The actual needle tip position was obtained using an electromagnetic (EM) tracking sensor (Aurora, NDI, Waterloo, Canada) at a sampling period of 1 ms. During the experiments, external disturbances were created by applying force and thermal shock to investigate the robustness of the controllers. Results: The root mean square error (RMSE) values for the APIDSMC and PID controllers were 0.75 mm and 0.92 mm, respectively, for a sinusoidal reference input. In the presence of external disturbances, the APIDSMC controller showed a much smoother response with less overshoot compared to that of the PID controller. Conclusion: Performance of the APIDSMC was superior to the PID controller. The APIDSMC proved to be the more effective controller in compensating for the SMA uncertainties and external disturbances within clinically acceptable thresholds.
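For orientation, the following is a minimal sketch of a discrete PID position loop of the kind used as the baseline controller above; the gains, sampling period and first-order "plant" stand-in are illustrative assumptions rather than the study's values, and the adaptive sliding-mode layer is not reproduced here.

```python
# Minimal discrete PID loop tracking a sinusoidal needle-tip reference.
import math

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.05, dt=0.001):
    """One PID update; `state` carries (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

tip, state = 0.0, (0.0, 0.0)
for k in range(2000):                                   # 2 s at dt = 1 ms
    t = k * 0.001
    reference = 5.0 * math.sin(2 * math.pi * 1.0 * t)   # mm, 1 Hz reference
    u, state = pid_step(reference - tip, state)
    tip += 0.02 * (u - tip)                             # crude first-order stand-in for the SMA/needle
```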
Parallel algorithms for boundary value problems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A general approach to solve boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step, where all the P available processors work in parallel, and the global step, where one processor solves a tridiagonal linear system of order P. The main advantages of this approach are twofold. First, the suggested approach is very flexible, especially in the local step, and thus the algorithm can be used with any number of processors and with any of the SIMD or MIMD machines. Secondly, the communication complexity is very small, so the approach can be used just as easily on shared memory machines. Several examples of using this strategy are discussed.
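The global step reduces to a single tridiagonal solve of order P, for which the standard Thomas algorithm suffices. A minimal sketch, with an assumed example system, is shown below.

```python
# Thomas algorithm for a tridiagonal system (the global step of order P).
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system; a[0] and c[-1] are unused."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: -x_{i-1} + 4 x_i - x_{i+1} = 2 on a 5-point grid.
print(thomas_solve([0, -1, -1, -1, -1], [4] * 5, [-1, -1, -1, -1, 0], [2] * 5))
```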
Medical image segmentation using genetic algorithms.
Maulik, Ujjwal
2009-03-01
Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy, with a multitude of local optima. Not only does the genetic algorithmic framework prove effective in escaping local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Vanrosendale, John
1989-01-01
Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low level programming environment, and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first, and then it is examined how such parallel kernels can be combined to form parallel tensor product algorithms.
Environmental Fluctuations and Acoustic Data Communications
2015-09-30
July 2011 along with subsequent analysis of the experiment data. KAM11 Experiment (2011) A shallow water acoustic communications experiment...packet and packet-to-packet variability. Algorithm Design and Experiment Data Analysis Communication receiver algorithm design for shallow water is...exhibited substantial daily oceanographic variability. Analysis of the KAM11 experiment data this past year has focused on fixed source transmissions
W. Wang; J.J. Qu; X. Hao; Y. Liu
2009-01-01
In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remote-sensed characteristics of these fires have to be...
QPSO-Based Adaptive DNA Computing Algorithm
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This new approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the parameters of population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process, (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the standard DNA computing algorithm. PMID:23935409
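A minimal sketch of the quantum-behaved PSO update used to tune the DNA-computing parameters is shown below; the sphere objective, the contraction-expansion coefficient and the other settings are illustrative assumptions, not the paper's configuration.

```python
# Quantum-behaved PSO (QPSO) on a toy sphere objective.
import numpy as np
rng = np.random.default_rng(0)

def qpso_minimize(f, dim=4, n_particles=20, iters=200, beta=0.75, bounds=(-5, 5)):
    x = rng.uniform(*bounds, size=(n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in pbest])
    for _ in range(iters):
        gbest = pbest[pbest_val.argmin()]
        mbest = pbest.mean(axis=0)                      # mean best position
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1 - phi) * gbest             # local attractor per particle
        u = np.maximum(rng.random((n_particles, dim)), 1e-12)
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        vals = np.array([f(xi) for xi in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    return pbest[pbest_val.argmin()], pbest_val.min()

print(qpso_minimize(lambda v: float(np.sum(v ** 2))))   # toy objective only
```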
Global mapping of DNA conformational flexibility on Saccharomyces cerevisiae.
Menconi, Giulia; Bedini, Andrea; Barale, Roberto; Sbrana, Isabella
2015-04-01
In this study we provide the first comprehensive map of DNA conformational flexibility in the complete genome of Saccharomyces cerevisiae. Flexibility plays a key role in DNA supercoiling and DNA/protein binding, regulating DNA transcription, replication or repair. Specific interest in flexibility analysis concerns its relationship with human genome instability. Enrichment in flexible sequences has been detected in unstable regions of the human genome defined as fragile sites, where genes map and carry frequent deletions and rearrangements in cancer. Flexible sequences have been suggested to be the determinants of fragile gene proneness to breakage; however, their actual role and properties remain elusive. Our in silico analysis, carried out genome-wide via the StabFlex algorithm, shows the conserved presence of highly flexible regions in the budding yeast genome as well as in genomes of other Saccharomyces sensu stricto species. Flexible peaks in S. cerevisiae identify 175 ORFs mapping on their 3'UTR, a region affecting mRNA translation, localization and stability. (TA)n repeats of different extension shape the central structure of peaks and co-localize with polyadenylation efficiency element (EE) signals. ORFs with flexible peaks share common features. Transcripts are characterized by decreased half-life: this is considered peculiar of genes involved in regulatory systems with high turnover; consistently, their function affects biological processes such as cell cycle regulation or stress response. Our findings support the functional importance of flexibility peaks, suggesting that the flexible sequence may derive from an expansion of the canonical TAYRTA polyadenylation efficiency element. The flexible (TA)n repeat amplification could be the outcome of an evolutionary neofunctionalization leading to differential 3'-end processing and expression regulation in genes with a peculiar function. Our study provides new support for the functional role of flexibility in genomes and a strategy for its characterization inside human fragile sites.
Direct model reference adaptive control of a flexible robotic manipulator
NASA Technical Reports Server (NTRS)
Meldrum, D. R.
1985-01-01
Quick, precise control of a flexible manipulator in a space environment is essential for future Space Station repair and satellite servicing. Numerous control algorithms have proven successful in controlling rigid manipulators with colocated sensors and actuators; however, few have been tested on a flexible manipulator with noncolocated sensors and actuators. In this thesis, a model reference adaptive control (MRAC) scheme based on command generator tracker theory is designed for a flexible manipulator. Quicker, more precise tracking results are expected over nonadaptive control laws for this MRAC approach. Equations of motion in modal coordinates are derived for a single-link, flexible manipulator with an actuator at the pinned end and a sensor at the free end. An MRAC is designed with the objective of controlling the torquing actuator so that the tip position follows a trajectory that is prescribed by the reference model. An appealing feature of this direct MRAC law is that it allows the reference model to have fewer states than the plant itself. Direct adaptive control also adjusts the controller parameters directly with knowledge of only the plant output and input signals.
NASA Astrophysics Data System (ADS)
Dubovik, O.; Litvinov, P.; Lapyonok, T.; Ducos, F.; Fuertes, D.; Huang, X.; Torres, B.; Aspetsberger, M.; Federspiel, C.
2014-12-01
The POLDER imager on board the PARASOL micro-satellite is the only satellite polarimeter to have provided an extensive (~9 year) record of detailed polarimetric observations of the Earth's atmosphere from space. POLDER/PARASOL registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. Such observations have very high sensitivity to the variability of the properties of the atmosphere and underlying surface, and cannot be adequately interpreted using the look-up-table retrieval algorithms developed for analyzing the mono-viewing, intensity-only observations traditionally used in atmospheric remote sensing. Therefore, a new enhanced retrieval algorithm, GRASP (Generalized Retrieval of Aerosol and Surface Properties), has been developed and applied for processing of PARASOL data. GRASP relies on highly optimized statistical fitting of observations and derives a large number of unknowns for each observed pixel. The algorithm uses an elaborate model of the atmosphere and fully accounts for all multiple interactions of scattered solar light with aerosol, gases and the underlying surface. All calculations are performed during inversion and no look-up tables are used. The algorithm is very flexible in its utilization of various types of a priori constraints on the retrieved characteristics and in the parameterization of the surface-atmosphere system. It is also optimized for high performance calculations. The results of the PARASOL data processing will be presented with emphasis on the transferability and adaptability of the developed retrieval concept for processing polarimetric observations of other planets. For example, flexibility and possible alternatives in modeling the properties of aerosol polydisperse mixtures, particle composition and shape, reflectance of the surface, etc. will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banks, J.W., E-mail: banksj3@rpi.edu; Henshaw, W.D., E-mail: henshw@rpi.edu; Kapila, A.K., E-mail: kapila@rpi.edu
We describe an added-mass partitioned (AMP) algorithm for solving fluid–structure interaction (FSI) problems involving inviscid compressible fluids interacting with nonlinear solids that undergo large rotations and displacements. The computational approach is a mixed Eulerian–Lagrangian scheme that makes use of deforming composite grids (DCG) to treat large changes in the geometry in an accurate, flexible, and robust manner. The current work extends the AMP algorithm developed in Banks et al. [1] for linear elasticity to the case of nonlinear solids. To ensure stability for the case of light solids, the new AMP algorithm embeds an approximate solution of a nonlinear fluid–solid Riemann (FSR) problem into the interface treatment. The solution to the FSR problem is derived and shown to be of a similar form to that derived for linear solids: the state on the interface is fundamentally an impedance-weighted average of the fluid and solid states. Numerical simulations demonstrate that the AMP algorithm is stable even for light solids when added-mass effects are large. The accuracy and stability of the AMP scheme is verified by comparison to an exact solution using the method of analytical solutions and to a semi-analytical solution that is obtained for a rotating solid disk immersed in a fluid. The scheme is applied to the simulation of a planar shock impacting a light elliptical-shaped solid, and comparisons are made between solutions of the FSI problem for a neo-Hookean solid, a linearly elastic solid, and a rigid solid. The ability of the approach to handle large deformations is demonstrated for a problem of a high-speed flow past a light, thin, and flexible solid beam.
Fast and Flexible Multivariate Time Series Subsequence Search
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Oza, Nikunj C.; Zhu, Qiang; Srivastava, Ashok N.
2010-01-01
Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical monitoring, and financial systems. Domain experts are often interested in searching for interesting multivariate patterns from these MTS databases which often contain several gigabytes of data. Surprisingly, research on MTS search is very limited. Most of the existing work only supports queries with the same length of data, or queries on a fixed set of variables. In this paper, we propose an efficient and flexible subsequence search framework for massive MTS databases, that, for the first time, enables querying on any subset of variables with arbitrary time delays between them. We propose two algorithms to solve this problem (1) a List Based Search (LBS) algorithm which uses sorted lists for indexing, and (2) a R*-tree Based Search (RBS) which uses Minimum Bounding Rectangles (MBR) to organize the subsequences. Both algorithms guarantee that all matching patterns within the specified thresholds will be returned (no false dismissals). The very few false alarms can be removed by a post-processing step. Since our framework is also capable of Univariate Time-Series (UTS) subsequence search, we first demonstrate the efficiency of our algorithms on several UTS datasets previously used in the literature. We follow this up with experiments using two large MTS databases from the aviation domain, each containing several millions of observations. Both these tests show that our algorithms have very high prune rates (>99%) thus needing actual disk access for only less than 1% of the observations. To the best of our knowledge, MTS subsequence search has never been attempted on datasets of the size we have used in this paper.
Linear-time general decoding algorithm for the surface code
NASA Astrophysics Data System (ADS)
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
Merlyn J. Paulson
1979-01-01
This paper outlines a project level process (V.I.S.) which utilizes very accurate and flexible computer algorithms in combination with contemporary site analysis and design techniques for visual evaluation, design and management. The process provides logical direction and connecting bridges through problem identification, information collection and verification, visual...
Using Stan for Item Response Theory Models
ERIC Educational Resources Information Center
Ames, Allison J.; Au, Chi Hang
2018-01-01
Stan is a flexible probabilistic programming language providing full Bayesian inference through Hamiltonian Monte Carlo algorithms. The benefits of Hamiltonian Monte Carlo include improved efficiency and faster inference, when compared to other MCMC software implementations. Users can interface with Stan through a variety of computing…
Xi-cam: Flexible High Throughput Data Processing for GISAXS
NASA Astrophysics Data System (ADS)
Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sarje, Abinav; Krishnan, Hari; Pellouchoud, Lenson; Ren, Fang; Fournier, Amanda; Jiang, Zhang; Tassone, Christopher; Mehta, Apurva; Sethian, James; Hexemer, Alexander
With increasing capabilities and data demand for GISAXS beamlines, supporting software is under development to handle larger data rates, volumes, and processing needs. We aim to provide a flexible and extensible approach to GISAXS data treatment as a solution to these rising needs. Xi-cam is the CAMERA platform for data management, analysis, and visualization. The core of Xi-cam is an extensible plugin-based GUI platform which provides users an interactive interface to processing algorithms. Plugins are available for SAXS/GISAXS data and data series visualization, as well as forward modeling and simulation through HipGISAXS. With Xi-cam's advanced mode, data processing steps are designed as a graph-based workflow, which can be executed locally or remotely. Remote execution utilizes HPC or de-localized resources, allowing for effective reduction of high-throughput data. Xi-cam is open-source and cross-platform. The processing algorithms in Xi-cam include parallel cpu and gpu processing optimizations, also taking advantage of external processing packages such as pyFAI. Xi-cam is available for download online.
Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker
NASA Astrophysics Data System (ADS)
Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong
2017-10-01
Large-scale components are widespread in the advanced manufacturing industry, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced, and a mathematical model is established for the global data fusion. Subsequently, a flexible and robust method and mechanism is introduced for the establishment of the end coordinate system. Based on this method, a virtual robot noumenon is constructed for hand-eye calibration, and then the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment was implemented to verify the proposed algorithms. First, the hand-eye transformation matrix is solved. Then a car body rear is measured 16 times to verify the global data fusion algorithm. The 3D shape of the rear is reconstructed successfully.
NASA Technical Reports Server (NTRS)
Lewis, M. C.
1984-01-01
Validation data from the Transonic Self-Streamlining Wind Tunnel have proved the feasibility of streamlining two dimensional flexible walls at low speeds and up to transonic speeds, the upper limit being the speed where the flexible walls are just supercritical. At this condition, breakdown of the wall setting strategy is evident in that convergence is neither as rapid nor as stable as for lower speeds, and the wall streamlining criteria are not always completely satisfied. The only major step necessary to permit the extension of two dimensional testing to higher transonic speeds is the provision of a rapid algorithm to solve for mixed flow in the imaginary flow fields. The status of two dimensional high transonic testing in the Transonic Self-Streamlining Wind Tunnel is outlined and, in particular, the progress of adapting an algorithm, which solves the Transonic Small Perturbation Equation, for predicting the imaginary flow fields is detailed.
NASA Astrophysics Data System (ADS)
Liu, Zhihui; Wang, Haitao; Dong, Tao; Yin, Jie; Zhang, Tingting; Guo, Hui; Li, Dequan
2018-02-01
In this paper, the cognitive multi-beam satellite system, i.e., two satellite networks coexisting through underlay spectrum sharing, is studied, and a power and spectrum allocation method is employed for interference control and throughput maximization. Specifically, the multi-beam satellite with a flexible payload reuses the authorized spectrum of the primary satellite, adjusting its transmission band as well as the power of each beam to keep its interference on the primary satellite below the prescribed threshold and maximize its own achievable rate. This power and spectrum allocation problem is formulated as a mixed nonconvex program. To solve it effectively, we first introduce the concept of the signal-to-leakage-plus-noise ratio (SLNR) to decouple the multiple transmit power variables in both the objective and the constraint, and then propose a heuristic algorithm to assign spectrum sub-bands. After that, a stepwise plus slice-wise algorithm is proposed to implement the discrete power allocation. Finally, simulation results show that adopting cognitive technology can improve the spectrum efficiency of satellite communication.
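For reference, a common textbook form of the SLNR for beam k is sketched below; the notation (beam power p_k, channel gains g_{k,j}, noise power sigma^2) is an assumption for illustration and not necessarily the paper's exact formulation.

```latex
\mathrm{SLNR}_k \;=\; \frac{p_k\, g_{k,k}}{\sigma^2 + \sum_{j \neq k} p_k\, g_{k,j}}
```

Unlike the SINR at a receiver, which couples all beam powers, the SLNR of beam k depends only on its own power p_k, which is what makes the decoupling of the transmit power variables possible.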
Compound prism design principles, I
Hagen, Nathan; Tkaczyk, Tomasz S.
2011-01-01
Prisms have been needlessly neglected as components used in modern optical design. In optical throughput, stray light, flexibility, and in their ability to be used in direct-view geometry, they excel over gratings. Here we show that even their well-known weak dispersion relative to gratings has been overrated by designing doublet and double Amici direct-vision compound prisms that have 14° and 23° of dispersion across the visible spectrum, equivalent to 800 and 1300 lines/mm gratings. By taking advantage of the multiple degrees of freedom available in a compound prism design, we also show prisms whose angular dispersion shows improved linearity in wavelength. In order to achieve these designs, we exploit the well-behaved nature of prism design space to write customized algorithms that optimize directly in the nonlinear design space. Using these algorithms, we showcase a number of prism designs that illustrate a performance and flexibility that goes beyond what has often been considered possible with prisms. PMID:22423145
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization (EM) algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the EM algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.
Flexible language constructs for large parallel programs
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Schnabel, Robert
1993-01-01
The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and discussion of some of the critical implementation details is given.
Support Vector Data Descriptions and k-Means Clustering: One Class?
Gornitz, Nico; Lima, Luiz Alberto; Muller, Klaus-Robert; Kloft, Marius; Nakajima, Shinichi
2017-09-27
We present ClusterSVDD, a methodology that unifies support vector data descriptions (SVDDs) and k-means clustering into a single formulation. This allows both methods to benefit from one another, i.e., by adding flexibility using multiple spheres for SVDDs and increasing anomaly resistance and flexibility through kernels to k-means. In particular, our approach leads to a new interpretation of k-means as a regularized mode seeking algorithm. The unifying formulation further allows for deriving new algorithms by transferring knowledge from one-class learning settings to clustering settings and vice versa. As a showcase, we derive a clustering method for structured data based on a one-class learning scenario. Additionally, our formulation can be solved via a particularly simple optimization scheme. We evaluate our approach empirically to highlight some of the proposed benefits on artificially generated data, as well as on real-world problems, and provide a Python software package comprising various implementations of primal and dual SVDD as well as our proposed ClusterSVDD.
The wavenumber algorithm for full-matrix imaging using an ultrasonic array.
Hunter, Alan J; Drinkwater, Bruce W; Wilcox, Paul D
2008-11-01
Ultrasonic imaging using full-matrix capture, e.g., via the total focusing method (TFM), has been shown to increase angular inspection coverage and improve sensitivity to small defects in nondestructive evaluation. In this paper, we develop a Fourier-domain approach to full-matrix imaging based on the wavenumber algorithm used in synthetic aperture radar and sonar. The extension to the wavenumber algorithm for full-matrix data is described and the performance of the new algorithm compared with the TFM, which we use as a representative benchmark for the time-domain algorithms. The wavenumber algorithm provides a mathematically rigorous solution to the inverse problem for the assumed forward wave propagation model, whereas the TFM employs heuristic delay-and-sum beamforming. Consequently, the wavenumber algorithm has an improved point-spread function and provides better imagery. However, the major advantage of the wavenumber algorithm is its superior computational performance. For large arrays and images, the wavenumber algorithm is several orders of magnitude faster than the TFM. On the other hand, the key advantage of the TFM is its flexibility. The wavenumber algorithm requires a regularly sampled linear array, while the TFM can handle arbitrary imaging geometries. The TFM and the wavenumber algorithm are compared using simulated and experimental data.
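Since the TFM serves as the time-domain benchmark here, a minimal delay-and-sum TFM sketch is given below; the array geometry, wave speed and the synthetic full-matrix data are illustrative assumptions.

```python
# Delay-and-sum total focusing method (TFM) over full-matrix capture data.
import numpy as np

def tfm_image(fmc, t, elem_x, grid_x, grid_z, c):
    """fmc[tx, rx, sample]: full-matrix capture; returns |image| on the grid."""
    n_el = len(elem_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    dt = t[1] - t[0]
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.sqrt((elem_x - x) ** 2 + z ** 2) / c   # element-to-pixel travel times
            for tx in range(n_el):
                for rx in range(n_el):
                    idx = int(round((d[tx] + d[rx]) / dt))
                    if 0 <= idx < fmc.shape[2]:
                        img[iz, ix] += fmc[tx, rx, idx]
    return np.abs(img)

# Toy example: 8-element array, noise "data", 5 mm x 5 mm image region in steel.
rng = np.random.default_rng(1)
t = np.arange(0, 20e-6, 1e-8)
elem_x = np.arange(8) * 0.6e-3
fmc = rng.standard_normal((8, 8, t.size))
image = tfm_image(fmc, t, elem_x, np.linspace(0, 5e-3, 20),
                  np.linspace(1e-3, 6e-3, 20), c=5900.0)
```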
Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models
NASA Astrophysics Data System (ADS)
Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo
2014-04-01
We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.
Wind Power Ramping Product for Increasing Power System Flexibility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Mingjian; Zhang, Jie; Wu, Hongyu
With increasing penetrations of wind power, system operators are concerned about a potential lack of system flexibility and ramping capacity in real-time dispatch stages. In this paper, a modified dispatch formulation is proposed considering the wind power ramping product (WPRP). A swinging door algorithm (SDA) and dynamic programming are combined and used to detect WPRPs in the next scheduling periods. The detected WPRPs are included in the unit commitment (UC) formulation considering ramping capacity limits, active power limits, and flexible ramping requirements. The modified formulation is solved by mixed integer linear programming. Numerical simulations on a modified PJM 5-bus system show the effectiveness of the model considering WPRP, which reduces the production cost without affecting the generation schedules of thermal units.
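A minimal sketch of the swinging door idea used for ramp detection is shown below: the series is split into approximately linear segments whose endpoints mark ramp starts and ends. The tolerance and the toy wind power series are illustrative assumptions.

```python
# Swinging door segmentation of a time series into near-linear ramps.
def swinging_door(values, eps):
    """Return indices that bound each approximately-linear segment."""
    anchors = [0]
    slope_min, slope_max = float("-inf"), float("inf")
    for i in range(1, len(values)):
        a = anchors[-1]
        dx = i - a
        slope_min = max(slope_min, (values[i] - eps - values[a]) / dx)
        slope_max = min(slope_max, (values[i] + eps - values[a]) / dx)
        if slope_min > slope_max:              # the two "doors" have crossed
            anchors.append(i - 1)              # close the segment at the previous point
            slope_min = values[i] - eps - values[i - 1]   # re-open doors (dx = 1)
            slope_max = values[i] + eps - values[i - 1]
    anchors.append(len(values) - 1)
    return anchors

wind = [10, 12, 15, 30, 52, 75, 74, 73, 50, 31, 12, 11]   # toy MW series
print(swinging_door(wind, eps=2.0))            # segment boundary indices (ramps)
```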
Improving KPCA Online Extraction by Orthonormalization in the Feature Space.
Souza Filho, Joao B O; Diniz, Paulo S R
2018-04-01
Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
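For context, the plain (linear-PCA) form of the generalized Hebbian rule that these kernel variants build on is sketched below; the learning rate, data and component count are illustrative assumptions, and the kernelization and orthonormalization steps of the paper are not reproduced.

```python
# Sanger's generalized Hebbian algorithm (GHA) for online linear PCA.
import numpy as np
rng = np.random.default_rng(0)

def gha(X, n_components=2, lr=0.01, epochs=50):
    """Online extraction of the leading principal directions with Sanger's rule."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_components, n_features)) * 0.1
    for _ in range(epochs):
        for x in X:
            y = W @ x                                   # component outputs
            # GHA update: dW = lr * (y x^T - lower_triangular(y y^T) W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

X = rng.standard_normal((500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
X -= X.mean(axis=0)
components = gha(X)                                     # rows approximate leading PCs
```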
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
New optimization model for routing and spectrum assignment with nodes insecurity
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-04-01
By adopting orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible and variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two problems to investigate the routing and spectrum assignment problem with guaranteed security in elastic optical networks, and establish a new optimization model that minimizes the maximum index of the used frequency slots, which is used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm is first used to sort the connection requests, and then the genetic algorithm is designed to look for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation and local search operators are designed. Moreover, simulation experiments are conducted with three heuristic strategies, and the experimental results indicate the effectiveness of the proposed model and algorithm framework.
A GENERAL ALGORITHM FOR THE CONSTRUCTION OF CONTOUR PLOTS
NASA Technical Reports Server (NTRS)
Johnson, W.
1994-01-01
The graphical presentation of experimentally or theoretically generated data sets frequently involves the construction of contour plots. A general computer algorithm has been developed for the construction of contour plots. The algorithm provides for efficient and accurate contouring with a modular approach which allows flexibility in modifying the algorithm for special applications. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme in which the points in the plane are connected by straight line segments to form a set of triangles. In general, the data are smoothed using a least-squares-error fit of the data to a bivariate polynomial. To construct the contours, interpolation along the edges of the triangles is performed, using the bivariate polynomial if data smoothing was performed. Once the contour points have been located, the contour may be drawn. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 100K of 8-bit bytes. This computer algorithm was developed in 1981.
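The edge-interpolation step is the heart of the contouring scheme; a minimal sketch for one triangle, using plain linear interpolation (i.e., without the smoothing polynomial), is shown below with an assumed example.

```python
# Find where a contour level crosses the edges of one data triangle.
def contour_crossings(vertices, values, level):
    """Return the (x, y) points where `level` crosses the triangle's edges."""
    points = []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        vi, vj = values[i], values[j]
        if (vi - level) * (vj - level) < 0:            # level lies strictly between vi and vj
            t = (level - vi) / (vj - vi)               # linear interpolation factor
            xi, yi = vertices[i]
            xj, yj = vertices[j]
            points.append((xi + t * (xj - xi), yi + t * (yj - yi)))
    return points

# Contour level 0.5 across a triangle with values 0.0, 1.0 and 0.2 at its corners.
print(contour_crossings([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [0.0, 1.0, 0.2], 0.5))
```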
Numerical simulation on a straight-bladed vertical axis wind turbine with auxiliary blade
NASA Astrophysics Data System (ADS)
Li, Y.; Zheng, Y. F.; Feng, F.; He, Q. B.; Wang, N. X.
2016-08-01
To improve the starting performance of the straight-bladed vertical axis wind turbine (SB-VAWT) at low wind speed, and the output characteristics at high wind speed, a flexible, scalable auxiliary vane mechanism was designed and installed in the rotor of the SB-VAWT in this study. This new vertical axis wind turbine is a kind of lift-to-drag combination wind turbine. At low rotational speed the flexible blade is expanded and the driving force of the wind turbine comes mainly from drag. On the other hand, at higher speed the flexible blade is retracted and the driving force comes primarily from lift. To study the effects of the flexible, scalable auxiliary module on the performance of the SB-VAWT and to find its best parameters, computational fluid dynamics (CFD) numerical calculations were carried out. The calculation results show that the flexible, scalable blades can automatically expand and retract with the rotational speed. The moment coefficient at low tip speed ratios increased substantially, and it was also improved over certain ranges of high tip speed ratios.
Joint with application in electrochemical devices
Weil, K Scott [Richland, WA]; Hardy, John S [Richland, WA]
2010-09-14
A joint for use in electrochemical devices, such as solid oxide fuel cells (SOFCs), oxygen separators, and hydrogen separators, that will maintain a hermetic seal at operating temperatures of greater than 600 °C, despite repeated thermal cycling in excess of 600 °C in a hostile operating environment where one side of the joint is continuously exposed to an oxidizing atmosphere and the other side is continuously exposed to a wet reducing gas. The joint is formed of a metal part, a ceramic part, and a flexible gasket. The flexible gasket is metal, but is thinner and more flexible than the metal part. As the joint is heated and cooled, the flexible gasket is configured to flex in response to changes in the relative size of the metal part and the ceramic part brought about by differences in the coefficient of thermal expansion of the metal part and the ceramic part, such that substantially all of the tension created by the differences in the expansion and contraction of the ceramic and metal parts is absorbed and dissipated by the flexing of the flexible gasket.
Stress analysis in a pedicle screw fixation system with flexible rods in the lumbar spine.
Kim, Kyungsoo; Park, Won Man; Kim, Yoon Hyuk; Lee, SuKyoung
2010-01-01
Breakage of screws has been one of the most common complications in spinal fixation systems. However, no studies have examined the breakage risk of pedicle screw fixation systems that use flexible rods, even though flexible rods are currently being used for dynamic stabilization. In this study, the risk of breakage of screws for the rods with various flexibilities in pedicle screw fixation systems is investigated by calculating the von Mises stress as a breakage risk factor using finite element analysis. Three-dimensional finite element models of the lumbar spine with posterior one-level spinal fixations at L4-L5 using four types of rod (a straight rod, a 4 mm spring rod, a 3 mm spring rod, and a 2 mm spring rod) were developed. The von Mises stresses in both the pedicle screws and the rods were analysed under flexion, extension, lateral bending, and torsion moments of 10 Nm with a follower load of 400 N. The maximum von Mises stress, which was concentrated on the neck region of the pedicle screw, decreased as the flexibility of the rod increased. However, the ratio of the maximum stress in the rod to the yield stress increased substantially when a highly flexible rod was used. Thus, the level of rod flexibility should be considered carefully when using flexible rods for dynamic stabilization because the intersegmental motion facilitated by the flexible rod results in rod breakage.
McGhee, Katie E.; Roche, Daniel P.; Bell, Alison M.
2017-01-01
There is increasing evidence that behavioral flexibility is associated with the ability to adaptively respond to environmental change. Flexibility can be advantageous in some contexts, such as exploiting novel resources, but it may come at a cost of accuracy or performance in ecologically relevant tasks, such as foraging. Such trade-offs may, in part, explain why individuals within a species are not equally flexible. Here, we conducted a reversal learning task and predation experiment on a top fish predator, the Northern pike (Esox lucius), to examine individual variation in flexibility and test the hypothesis that an individual's behavioral flexibility is negatively related to its foraging performance. Pike were trained to receive a food reward from either a red or blue cup, and then the color of the rewarded cup was reversed. We found that pike improved over time in how quickly they oriented to the rewarded cup, but there was a bias toward the color red. Moreover, there was substantial variation among individuals in their ability to overcome this red bias and switch from an unrewarded red cup to the rewarded blue cup, which we interpret as consistent variation among individuals in behavioral flexibility. Furthermore, individual differences in behavioral flexibility were negatively associated with foraging performance on ecologically relevant stickleback prey. Our data indicate that individuals cannot be both behaviorally flexible and efficient predators, suggesting a trade-off between these two traits. PMID:29046598
NASA Astrophysics Data System (ADS)
Boyer, M. D.; Andre, R.; Gates, D. A.; Gerhardt, S.; Goumiri, I. R.; Menard, J.
2015-05-01
The high-performance operational goals of NSTX-U will require development of advanced feedback control algorithms, including control of βN and the safety factor profile. In this work, a novel approach to simultaneously controlling βN and the value of the safety factor on the magnetic axis, q0, through manipulation of the plasma boundary shape and total beam power, is proposed. Simulations of the proposed scheme show promising results and motivate future experimental implementation and eventual integration into a more complex current profile control scheme planned to include actuation of individual beam powers, density, and loop voltage. As part of this work, a flexible framework for closed loop simulations within the high-fidelity code TRANSP was developed. The framework, used here to identify control-design-oriented models and to tune and test the proposed controller, exploits many of the predictive capabilities of TRANSP and provides a means for performing control calculations based on user-supplied data (controller matrices, target waveforms, etc.). The flexible framework should enable high-fidelity testing of a variety of control algorithms, thereby reducing the amount of expensive experimental time needed to implement new control algorithms on NSTX-U and other devices.
NASA Astrophysics Data System (ADS)
Bonissone, Stefano R.; Subbu, Raj
2002-12-01
In multi-objective optimization (MOO) problems we need to optimize many, possibly conflicting, objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction, and pass their genetic material to the next generation. Weak solutions are removed from the population. The fitness function evaluates each solution and returns a related score. In MOO problems, this fitness function is vector-valued, i.e. it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (the one yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
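A minimal sketch of the Pareto (non-dominated) filtering that underlies such a frontier search is shown below, assuming minimization of both objectives; the candidate (cost, time) pairs are illustrative.

```python
# Non-dominated (Pareto) filtering for a set of candidate solutions.
def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Candidate (cost, time) pairs for a toy manufacturing-planning problem.
candidates = [(10, 9), (8, 12), (7, 15), (12, 7), (9, 10), (11, 11)]
print(pareto_front(candidates))   # (11, 11) is dominated and therefore dropped
```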
Design of a Novel Flexible Capacitive Sensing Mattress for Monitoring Sleeping Respiratory
Chang, Wen-Ying; Huang, Chien-Chun; Chen, Chi-Chun; Chang, Chih-Cheng; Yang, Chin-Lung
2014-01-01
In this paper, an algorithm to extract respiration signals using a flexible projected capacitive sensing mattress (FPCSM) designed for personal health assessment is proposed. Unlike the interfaces of conventional measurement systems for polysomnography (PSG) and other alternative contemporary systems, the proposed FPCSM uses a projected capacitive sensing capability that is not worn or attached to the body. The FPCSM is composed of a multi-electrode sensor array that can not only observe gestures and motion behaviors, but also enables the FPCSM to function as a respiration monitor during sleep using the proposed approach. To improve long-term monitoring when body movement is possible, the FPCSM enables the selection of data from the sensing array, and the FPCSM methodology selects the electrodes with the optimal signals after the application of a channel reduction algorithm that counts the reversals in the capacitive sensing signals as a quality indicator. The simple algorithm is implemented in the time domain. The FPCSM system was used in experimental tests and simultaneously compared with a commercial PSG system for verification. Multiple synchronous measurements were performed from different locations of body contact, and parallel data sets were collected. The experimental comparison yields a correlation coefficient of 0.88 between the FPCSM and PSG, demonstrating the feasibility of the system design. PMID:25420152
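A minimal sketch of the reversal-counting channel selection described above is given below: channels whose windowed signal shows fewer slope reversals are ranked as cleaner. The synthetic channels and window length are illustrative assumptions.

```python
# Rank capacitive channels by the number of slope reversals in a window.
import numpy as np
rng = np.random.default_rng(2)

def count_reversals(signal):
    """Number of sign changes in the first difference of the signal."""
    diff_sign = np.sign(np.diff(signal))
    diff_sign = diff_sign[diff_sign != 0]            # ignore flat samples
    return int(np.sum(diff_sign[1:] != diff_sign[:-1]))

def best_channels(channels, keep=4):
    """Return indices of the `keep` channels with the fewest reversals."""
    scores = [count_reversals(ch) for ch in channels]
    return sorted(range(len(channels)), key=lambda i: scores[i])[:keep]

t = np.linspace(0, 30, 3000)                         # 30 s window
clean = np.sin(2 * np.pi * 0.25 * t)                 # ~15 breaths per minute
channels = [clean + rng.standard_normal(t.size) * amp for amp in (0.05, 0.3, 0.8, 0.1)]
print(best_channels(channels, keep=2))               # likely the two low-noise channels
```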
node2vec: Scalable Feature Learning for Networks
Grover, Aditya; Leskovec, Jure
2016-01-01
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node’s network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks. PMID:27853626
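A minimal sketch of the biased second-order random walk at the core of node2vec is shown below; the toy graph and the return/in-out parameters p and q are illustrative assumptions.

```python
# node2vec-style biased random walk on an unweighted graph.
import random
random.seed(0)

def biased_walk(adj, start, length, p=1.0, q=0.5):
    """adj: dict node -> list of neighbours; p, q bias steps relative to the previous node."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbours = adj[cur]
        if len(walk) == 1:
            walk.append(random.choice(neighbours))
            continue
        prev = walk[-2]
        weights = []
        for nxt in neighbours:
            if nxt == prev:                   # distance 0 from the previous node
                weights.append(1.0 / p)
            elif nxt in adj[prev]:            # distance 1: stays close
                weights.append(1.0)
            else:                             # distance 2: moves outward
                weights.append(1.0 / q)
        walk.append(random.choices(neighbours, weights=weights, k=1)[0])
    return walk

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
print(biased_walk(graph, start=0, length=10))
```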
Joint estimation of 2D-DOA and frequency based on space-time matrix and conformal array.
Wan, Liang-Tian; Liu, Lu-Tao; Si, Wei-Jian; Tian, Zuo-Xi
2013-01-01
Each element in a conformal array has a different pattern, which leads to performance deterioration in conventional high resolution direction-of-arrival (DOA) algorithms. In this paper, a joint frequency and two-dimensional DOA (2D-DOA) estimation algorithm for a conformal array is proposed. The delay correlation function is used to suppress noise. Both spatial and time sampling are utilized to construct the spatial-time matrix. The frequency and 2D-DOA estimation are accomplished based on parallel factor (PARAFAC) analysis, without spectral peak searching or parameter pairing. The proposed algorithm needs only four guiding elements with precise positions to estimate frequency and 2D-DOA. Other instrumental elements can be arranged flexibly on the surface of the carrier. Simulation results demonstrate the effectiveness of the proposed algorithm.
Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.
Dash, Tirtharaj; Sahu, Prabhat K
2015-05-30
The adaptation of novel techniques developed in the field of computational chemistry to solve the concerned problems for large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with the Gradient Tabu Search and with other algorithms such as Gravitational Search, Cuckoo Search, and Backtracking Search for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems for finding the minimum potential energy value of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
Computing the Envelope for Stepwise Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2001-01-01
Estimating tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
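The core computation is a maximum flow on the derived network; a minimal sketch using networkx on an assumed toy network of producer and consumer events is shown below (the staging and envelope reconstruction of the paper are not reproduced).

```python
# Maximum flow on a small producer/consumer event network (toy illustration).
import networkx as nx

G = nx.DiGraph()
# Source feeds producer events with capacity equal to the amount produced;
# consumer events drain into the sink by the amount consumed; edges between
# events stand in for required precedences with effectively unlimited capacity.
G.add_edge("source", "produce_A", capacity=3)
G.add_edge("source", "produce_B", capacity=2)
G.add_edge("produce_A", "consume_C", capacity=float("inf"))
G.add_edge("produce_B", "consume_C", capacity=float("inf"))
G.add_edge("consume_C", "sink", capacity=4)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
print(flow_value)   # 4: the production that can be matched against consumption
```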
Accelerating artificial intelligence with reconfigurable computing
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw
Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing the computationally intense portions of an algorithm into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. One field in which there are many different algorithms that can be accelerated is artificial intelligence. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.
Merging Sounder and Imager Data for Improved Cloud Depiction on SNPP and JPSS.
NASA Astrophysics Data System (ADS)
Heidinger, A. K.; Holz, R.; Li, Y.; Platnick, S. E.; Wanzong, S.
2017-12-01
Under the NOAA GOES-R Algorithm Working Group (AWG) Program, NOAA supports the development of an Infrared (IR) Optimal Estimation (OE) Cloud Height Algorithm (ACHA). ACHA is an enterprise solution that supports many geostationary and polar orbiting imager sensors. ACHA is operational at NOAA on SNPP VIIRS and has been adopted as the cloud height algorithm for the NASA NPP Atmospheric Suite of products. Being an OE algorithm, ACHA is flexible and capable of using additional observations and constraints. We have modified ACHA to use sounder (CrIS) observations to improve the cloud detection, typing and height estimation. Specifically, these improvements include retrievals in multi-layer scenarios and improved performance in polar regions. This presentation will describe the process for merging VIIRS and CrIS and demonstrate the improvements.
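For context, optimal-estimation retrievals of this kind minimize a cost function of the standard form below; the notation (observation vector y, forward model F, state x, a priori x_a, covariances S_y and S_a) follows textbook usage and is an assumption rather than a quotation of the ACHA documentation.

```latex
J(\mathbf{x}) = [\mathbf{y} - F(\mathbf{x})]^{\mathsf{T}} \mathbf{S}_y^{-1} [\mathbf{y} - F(\mathbf{x})]
              + (\mathbf{x} - \mathbf{x}_a)^{\mathsf{T}} \mathbf{S}_a^{-1} (\mathbf{x} - \mathbf{x}_a)
```

Additional observations or constraints, such as those from CrIS, enter by extending y and S_y, which is the flexibility the abstract refers to.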
How anaesthesiologists understand difficult airway guidelines-an interview study.
Knudsen, Kati; Pöder, Ulrika; Nilsson, Ulrica; Högman, Marieann; Larsson, Anders; Larsson, Jan
2017-11-01
In the practice of anaesthesia, clinical guidelines that aim to improve the safety of airway procedures have been developed. The aim of this study was to explore how anaesthesiologists understand or conceive of difficult airway management algorithms. A qualitative phenomenographic design was chosen to explore anaesthesiologists' views on airway algorithms. Anaesthesiologists working in three hospitals were included. Individual face-to-face interviews were conducted. Four different ways of understanding were identified, describing airway algorithms as: (A) a law-like rule for how to act in difficult airway situations; (B) a cognitive aid, an action plan for difficult airway situations; (C) a basis for developing flexible, personal action plans for the difficult airway; and (D) the experts' consensus, a set of scientifically based guidelines for handling the difficult airway. The interviewed anaesthesiologists understood difficult airway management guidelines/algorithms very differently.
Active mask segmentation of fluorescence microscope images.
Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena
2009-08-01
We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms seeded watershed, the algorithm widely used in fluorescence microscopy, both qualitatively and quantitatively.
Ghosh, A
1988-08-01
Lanczos and conjugate gradient algorithms are important in computational linear algebra. In this paper, a parallel pipelined realization of these algorithms on a ring of optical linear algebra processors is described. The flow of data is designed to minimize the idle times of the optical multiprocessor and the redundancy of computations. The effects of optical round-off errors on the solutions obtained by the optical Lanczos and conjugate gradient algorithms are analyzed, and it is shown that optical preconditioning can improve the accuracy of these algorithms substantially. Algorithms for optical preconditioning and results of numerical experiments on solving linear systems of equations arising from partial differential equations are discussed. Since the Lanczos algorithm is used mostly with sparse matrices, a folded storage scheme to represent sparse matrices on spatial light modulators is also described.
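As a point of reference for the conjugate gradient iteration discussed above, a minimal sketch is given below. The small dense symmetric positive definite system is an illustrative assumption; the optical implementation, round-off model, and preconditioning step of the paper are not reproduced.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A (plain CG, no preconditioner)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Illustrative example: a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))
```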
NASA Technical Reports Server (NTRS)
Ha, Kong Q.; Femiano, Michael D.; Mosier, Gary E.
2004-01-01
This viewgraph presentation presents an algorithm for trajectory control of a spacecraft that minimizes the time to perform slews, including settling, by avoiding reaction wheel torque and momentum limits that would excite flexible structural modes. This algorithm was validated by simulation during the design of the NGST 'Yardstick' (precursor to JWST). Performance verification of a reduced form for single-axis slews was carried out using the MIT Origins Testbed. It is currently baselined for use by TPF-Coronagraph.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth; Sewell, Christopher; Usher, William
Here, one of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.
Reliability analysis of laminated CMC components through shell subelement techniques
NASA Technical Reports Server (NTRS)
Starlinger, A.; Duffy, S. F.; Gyekenyesi, J. P.
1992-01-01
An updated version of the integrated design program C/CARES (composite ceramic analysis and reliability evaluation of structures) was developed for the reliability evaluation of CMC laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The new interface program from the finite-element code MARC also includes the option of using hybrid laminates and allows for variations in temperature fields throughout the component.
Adaptive control for eye-gaze input system
NASA Astrophysics Data System (ADS)
Zhao, Qijie; Tu, Dawei; Yin, Hairong
2004-01-01
The characteristics of a vision-based human-computer interaction system are analyzed, together with its present practical applications and limiting factors, and information-processing methods are put forward. In order to make the communication flexible and spontaneous, algorithms for adaptive control of the user's head movement have been designed, and event-based methods and an object-oriented programming language are used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms can meet the needs of the HCI.
Optimal placement of excitations and sensors for verification of large dynamical systems
NASA Technical Reports Server (NTRS)
Salama, M.; Rose, T.; Garba, J.
1987-01-01
The computationally difficult problem of the optimal placement of excitations and sensors to maximize the observed measurements is studied within the framework of combinatorial optimization, and is solved numerically using a variation of the simulated annealing heuristic algorithm. Results of numerical experiments including a square plate and a 960 degrees-of-freedom Control of Flexible Structure (COFS) truss structure, are presented. Though the algorithm produces suboptimal solutions, its generality and simplicity allow the treatment of complex dynamical systems which would otherwise be difficult to handle.
A multistage time-stepping scheme for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1985-01-01
A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
Urbanowicz, Ryan J; Kiralis, Jeff; Sinnott-Armstrong, Nicholas A; Heberling, Tamra; Fisher, Jonathan M; Moore, Jason H
2012-10-01
Geneticists who look beyond single locus disease associations require additional strategies for the detection of complex multi-locus effects. Epistasis, a multi-locus masking effect, presents a particular challenge, and has been the target of bioinformatic development. Thorough evaluation of new algorithms calls for simulation studies in which known disease models are sought. To date, the best methods for generating simulated multi-locus epistatic models rely on genetic algorithms. However, such methods are computationally expensive, difficult to adapt to multiple objectives, and unlikely to yield models with a precise form of epistasis which we refer to as pure and strict. Purely and strictly epistatic models constitute the worst-case in terms of detecting disease associations, since such associations may only be observed if all n-loci are included in the disease model. This makes them an attractive gold standard for simulation studies considering complex multi-locus effects. We introduce GAMETES, a user-friendly software package and algorithm which generates complex biallelic single nucleotide polymorphism (SNP) disease models for simulation studies. GAMETES rapidly and precisely generates random, pure, strict n-locus models with specified genetic constraints. These constraints include heritability, minor allele frequencies of the SNPs, and population prevalence. GAMETES also includes a simple dataset simulation strategy which may be utilized to rapidly generate an archive of simulated datasets for given genetic models. We highlight the utility and limitations of GAMETES with an example simulation study using MDR, an algorithm designed to detect epistasis. GAMETES is a fast, flexible, and precise tool for generating complex n-locus models with random architectures. While GAMETES has a limited ability to generate models with higher heritabilities, it is proficient at generating the lower heritability models typically used in simulation studies evaluating new algorithms. In addition, the GAMETES modeling strategy may be flexibly combined with any dataset simulation strategy. Beyond dataset simulation, GAMETES could be employed to pursue theoretical characterization of genetic models and epistasis.
Deriving novel relationships from the scientific literature is an important adjunct to datamining activities for complex datasets in genomics and high-throughput screening activities. Automated text-mining algorithms can be used to extract relevant content from the literature and...
NASA Technical Reports Server (NTRS)
Bown, R. L.; Christofferson, A.; Lardas, M.; Flanders, H.
1980-01-01
A lambda matrix solution technique is being developed to perform an open loop frequency analysis of a high order dynamic system. The procedure evaluates the right and left latent vectors corresponding to the respective latent roots. The latent vectors are used to evaluate the partial fraction expansion formulation required to compute the flexible body open loop feedback gains for the Space Shuttle Digital Ascent Flight Control System. The algorithm is in the final stages of development and will be used to insure that the feedback gains meet the design specification.
Improved Evolutionary Hybrids for Flexible Ligand Docking in Autodock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belew, R.K.; Hart, W.E.; Morris, G.M.
1999-01-27
In this paper we evaluate the design of the hybrid evolutionary algorithms (EAs) that are currently used to perform flexible ligand binding in the Autodock docking software. Hybrid EAs incorporate specialized operators that exploit domain-specific features to accelerate an EA's search. We consider hybrid EAs that use an integrated local search operator to refine individuals within each iteration of the search. We evaluate several factors that impact the efficacy of a hybrid EA, and we propose new hybrid EAs that provide more robust convergence to low-energy docking configurations than the methods currently available in Autodock.
Direct model reference adaptive control with application to flexible robots
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory W.
1992-01-01
A modification to a direct command generator tracker-based model reference adaptive control (MRAC) system is suggested in this paper. This modification incorporates a feedforward into the reference model's output as well as the plant's output. Its purpose is to eliminate the bounded model following error present in steady state when previous MRAC systems were used. The algorithm was evaluated using the dynamics for a single-link flexible-joint arm. The results of these simulations show a response with zero steady state model following error. These results encourage further use of MRAC for various types of nonlinear plants.
Chan, An-Wen; Fung, Kinwah; Tran, Jennifer M; Kitchen, Jessica; Austin, Peter C; Weinstock, Martin A; Rochon, Paula A
2016-10-01
Keratinocyte carcinoma (nonmelanoma skin cancer) accounts for substantial burden in terms of high incidence and health care costs but is excluded by most cancer registries in North America. Administrative health insurance claims databases offer an opportunity to identify these cancers using diagnosis and procedural codes submitted for reimbursement purposes. To apply recursive partitioning to derive and validate a claims-based algorithm for identifying keratinocyte carcinoma with high sensitivity and specificity. Retrospective study using population-based administrative databases linked to 602 371 pathology episodes from a community laboratory for adults residing in Ontario, Canada, from January 1, 1992, to December 31, 2009. The final analysis was completed in January 2016. We used recursive partitioning (classification trees) to derive an algorithm based on health insurance claims. The performance of the derived algorithm was compared with 5 prespecified algorithms and validated using an independent academic hospital clinic data set of 2082 patients seen in May and June 2011. Sensitivity, specificity, positive predictive value, and negative predictive value using the histopathological diagnosis as the criterion standard. We aimed to achieve maximal specificity, while maintaining greater than 80% sensitivity. Among 602 371 pathology episodes, 131 562 (21.8%) had a diagnosis of keratinocyte carcinoma. Our final derived algorithm outperformed the 5 simple prespecified algorithms and performed well in both community and hospital data sets in terms of sensitivity (82.6% and 84.9%, respectively), specificity (93.0% and 99.0%, respectively), positive predictive value (76.7% and 69.2%, respectively), and negative predictive value (95.0% and 99.6%, respectively). Algorithm performance did not vary substantially during the 18-year period. This algorithm offers a reliable mechanism for ascertaining keratinocyte carcinoma for epidemiological research in the absence of cancer registry data. Our findings also demonstrate the value of recursive partitioning in deriving valid claims-based algorithms.
Ultrafast adiabatic quantum algorithm for the NP-complete exact cover problem
Wang, Hefeng; Wu, Lian-Ao
2016-01-01
An adiabatic quantum algorithm may lose quantumness such as quantum coherence entirely in its long runtime, and consequently the expected quantum speedup of the algorithm does not show up. Here we present a general ultrafast adiabatic quantum algorithm. We show that by applying a sequence of fast random or regular signals during evolution, the runtime can be reduced substantially, whereas advantages of the adiabatic algorithm remain intact. We also propose a randomized Trotter formula and show that the driving Hamiltonian and the proposed sequence of fast signals can be implemented simultaneously. We illustrate the algorithm by solving the NP-complete 3-bit exact cover problem (EC3), where NP stands for nondeterministic polynomial time, and put forward an approach to implementing the problem with trapped ions. PMID:26923834
Stochastic reaction-diffusion algorithms for macromolecular crowding
NASA Astrophysics Data System (ADS)
Sturrock, Marc
2016-06-01
Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
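For readers unfamiliar with the stochastic simulation algorithm referenced above, a minimal single-compartment (well-mixed) Gillespie step is sketched below; the reaction network, rate constants, and function names are illustrative assumptions, and the spatial lattices, obstacles, and Spatiocyte-specific machinery of the study are omitted.

```python
import numpy as np

def gillespie_ssa(x0, rates, stoich, propensity, t_end, rng=None):
    """Single-compartment Gillespie SSA; spatial and crowding effects are not modeled here."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x, rates)          # propensities of each reaction channel
        a0 = a.sum()
        if a0 <= 0:                       # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)  # which reaction fires
        x += stoich[j]                    # apply the stoichiometric change
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Illustrative reversible dimerization: 2A -> B and B -> 2A
stoich = np.array([[-2, 1], [2, -1]])
prop = lambda x, k: np.array([k[0] * x[0] * (x[0] - 1) / 2.0, k[1] * x[1]])
t, s = gillespie_ssa([100, 0], [0.01, 0.1], stoich, prop, t_end=10.0)
```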
The Electrooculogram and a New Blink Detection Algorithm
2015-10-30
... applications, and physiological monitoring has proven quite helpful with this assessment. One such physiological signal, the electrooculogram (EOG), can provide blink rate and blink duration measures. ... such variability substantiates the need for blink detection algorithms, using the EOG signal, that are robust to noise, artifacts, and intra- and ...
NASA Astrophysics Data System (ADS)
Kozynchenko, Alexander I.; Kozynchenko, Sergey A.
2017-03-01
In this paper, the problem of improving the efficiency of the particle-particle-particle-mesh (P3M) algorithm for computing inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified so that the space field equation is solved by direct summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" containing pre-calculated potential values is introduced to describe the spatial field distribution of a single point charge. This approach reduces the arithmetic performed in the innermost of the nested loops to an addition and an assignment and therefore decreases the running time substantially. A simulation model developed in C++ substantiates this view, showing accuracy acceptable for particle beam calculations together with improved speed performance.
Fast and accurate grid representations for atom-based docking with partner flexibility.
de Vries, Sjoerd J; Zacharias, Martin
2017-06-30
Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
Coevolving memetic algorithms: a review and progress report.
Smith, Jim E
2007-02-01
Coevolving memetic algorithms are a family of metaheuristic search algorithms in which a rule-based representation of local search (LS) is coadapted alongside candidate solutions within a hybrid evolutionary system. Simple versions of these systems have been shown to outperform other nonadaptive memetic and evolutionary algorithms on a range of problems. This paper presents a rationale for such systems and places them in the context of other recent work on adaptive memetic algorithms. It then proposes a general structure within which a population of LS algorithms can be evolved in tandem with the solutions to which they are applied. Previous research started with a simple self-adaptive system before moving on to more complex models. Results showed that the algorithm was able to discover and exploit certain forms of structure and regularities within the problems. This "metalearning" of problem features provided a means of creating highly scalable algorithms. This work is briefly reviewed to highlight some of the important findings and behaviors exhibited. Based on this analysis, new results are then presented from systems with more flexible representations, which, again, show significant improvements. Finally, the current state of, and future directions for, research in this area is discussed.
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
NASA Astrophysics Data System (ADS)
Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir
2018-03-01
The specific nature of high-rise investment projects, which entail long-term construction, high risks, and similar factors, implies a need to improve the standard algorithm of cost-benefit analysis; an improved algorithm is described in this article. To develop the improved cost-benefit algorithm for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, and sensitivity analysis of critical ratios. This comprehensive approach helped adapt the original algorithm to feasibility objectives in high-rise construction. The authors assembled the cost-benefit algorithm for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard cost-benefit algorithm for investment projects, namely: the "Project analysis scenario" flowchart, which improves the quality and reliability of forecasting reports; the main stages of cash flow adjustment based on risk mapping, for better cost-benefit analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, which improves flexibility in the implementation of high-rise projects.
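As a hedged reminder of the discounting machinery underlying the dynamic cost-benefit analysis mentioned above (the authors' specific ratios and risk adjustments are not reproduced here), the weighted average cost of capital and the resulting net present value take the standard form

\[
\mathrm{WACC} = \frac{E}{E+D}\, r_E + \frac{D}{E+D}\, r_D (1-\tau),
\qquad
\mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+\mathrm{WACC})^{t}},
\]

where E and D are the equity and debt financing, r_E and r_D their required returns, \tau the tax rate, and CF_t the project cash flow in period t.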
Functional flexible and wearable supercapacitors
NASA Astrophysics Data System (ADS)
Huang, Yan; Zhi, Chunyi
2017-07-01
Substantial effort has been devoted to endowing flexible and wearable supercapacitors with desirable functions and solving urgent concerns regarding their practical application, particularly materials selection, air permeability, self-healability, shape memory, integration, and modularization. This gives rise to challenges with regard to both suitable materials and device fabrication. This review highlights the current state-of-the-art of these supercapacitors pertinent to materials, fabrication strategies, and performance. Challenges and solutions are also discussed to further improve their practicality. The aim of this review is to make a timely summary of this emerging field and discuss future opportunities and challenges.
Cost and Performance Model for Photovoltaic Systems
NASA Technical Reports Server (NTRS)
Borden, C. S.; Smith, J. H.; Davisson, M. C.; Reiter, L. J.
1986-01-01
The lifetime cost and performance (LCP) model assists in the assessment of design options for photovoltaic systems. LCP is a simulation of the performance, cost, and revenue streams associated with photovoltaic power systems connected to the electric-utility grid. LCP provides the user with substantial flexibility in specifying the technical and economic environment of the application.
A Computer System for Making Quick and Economical Color Slides.
ERIC Educational Resources Information Center
Pryor, Harold George
1986-01-01
A computer-based method for producing 35mm color slides has been used in Ohio State University's College of Dentistry. The method can produce both text and slides in less than two hours, providing substantial flexibility in planning and revising visual presentations. (Author/MLW)
NASA Technical Reports Server (NTRS)
Simpson, J. G. (Inventor)
1979-01-01
An improved solar concentrator is characterized by a number of elongated supporting members arranged in substantial horizontal parallelism with the axis and intersecting a common curve. A tensioned sheet of flexible reflective material is disposed in engaging relation with the supporting members in order to impart to the tensioned sheet a catenary configuration.
NASA Technical Reports Server (NTRS)
Vykukal, H. C. (Inventor)
1978-01-01
Joints for use in interconnecting adjacent segments of an hermetically sealed spacesuit which have low torques, low leakage and a high degree of reliability are described. Each of the joints is a special purpose joint characterized by substantially constant volume and low torque characteristics. Linkages which restrain the joint from longitudinal distension and a flexible, substantially impermeable diaphragm of tubular configuration spanning the distance between pivotally supported annuli are featured. The diaphragms of selected joints include rolling convolutions for balancing the joints, while various joints include wedge-shaped sections which enhance the range of motion for the joints.
Active vibration suppression of self-excited structures using an adaptive LMS algorithm
NASA Astrophysics Data System (ADS)
Danda Roy, Indranil
The purpose of this investigation is to study the feasibility of an adaptive feedforward controller for active flutter suppression in representative linear wing models. The ability of the controller to suppress limit-cycle oscillations in wing models having root springs with freeplay nonlinearities has also been studied. For the purposes of numerical simulation, mathematical models of a rigid and a flexible wing structure have been developed. The rigid wing model is represented by a simple three-degree-of-freedom airfoil while the flexible wing is modelled by a multi-degree-of-freedom finite element representation with beam elements for bending and rod elements for torsion. Control action is provided by one or more flaps attached to the trailing edge and extending along the entire wing span for the rigid model and a fraction of the wing span for the flexible model. Both two-dimensional quasi-steady aerodynamics and time-domain unsteady aerodynamics have been used to generate the airforces in the wing models. An adaptive feedforward controller has been designed based on the filtered-X Least Mean Squares (LMS) algorithm. The control configuration for the rigid wing model is single-input single-output (SISO) while both SISO and multi-input multi-output (MIMO) configurations have been applied on the flexible wing model. The controller includes an on-line adaptive system identification scheme which provides the LMS controller with a reasonably accurate model of the plant. This enables the adaptive controller to track time-varying parameters in the plant and provide effective control. The wing models in closed-loop exhibit highly damped responses at airspeeds where the open-loop responses are destructive. Simulations with the rigid and the flexible wing models in a time-varying airstream show a 63% and 53% increase, respectively, over their corresponding open-loop flutter airspeeds. The ability of the LMS controller to suppress wing store flutter in the two models has also been investigated. With 10% measurement noise introduced in the flexible wing model, the controller demonstrated good robustness to the extraneous disturbances. In the examples studied it is found that adaptation is rapid enough to successfully control flutter at accelerations in the airstream of up to 15 ft/sec2 for the rigid wing model and 9 ft/sec2 for the flexible wing model.
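A minimal single-channel filtered-X LMS update in the spirit of the controller described above is sketched here; the secondary-path model, filter length, step size, and signals are illustrative assumptions, and the aeroelastic plant, on-line system identification, and MIMO extensions are omitted.

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=1e-3):
    """Single-channel filtered-X LMS sketch.

    x     -- reference signal (correlated with the disturbance)
    d     -- disturbance measured at the error sensor
    s_hat -- FIR estimate of the secondary (actuator-to-sensor) path
    """
    w = np.zeros(n_taps)            # adaptive controller (FIR) weights
    xb = np.zeros(n_taps)           # buffer of recent reference samples
    sb = np.zeros(len(s_hat))       # reference buffer for secondary-path filtering
    yb = np.zeros(len(s_hat))       # buffer of recent controller outputs
    fxb = np.zeros(n_taps)          # buffer of filtered-reference samples
    e_hist = np.zeros(len(x))
    for n in range(len(x)):
        xb = np.roll(xb, 1); xb[0] = x[n]
        sb = np.roll(sb, 1); sb[0] = x[n]
        y = w @ xb                                  # controller output
        yb = np.roll(yb, 1); yb[0] = y
        e = d[n] + s_hat @ yb                       # residual after the secondary path
        fxb = np.roll(fxb, 1); fxb[0] = s_hat @ sb  # filtered-reference sample
        w -= mu * e * fxb                           # LMS gradient-descent update
        e_hist[n] = e
    return w, e_hist

# Illustrative use: cancel a tonal disturbance seen through a short secondary path
n = np.arange(4000)
x = np.sin(2 * np.pi * 0.01 * n)                  # reference tone
s = np.array([0.0, 0.6, 0.3])                     # assumed secondary-path FIR model
d = 0.8 * np.sin(2 * np.pi * 0.01 * n + 0.5)      # correlated disturbance
w, e = fxlms(x, d, s, n_taps=16, mu=0.01)
```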
Zhao, Quan-Liang; He, Guang-Ping; Di, Jie-Jian; Song, Wei-Li; Hou, Zhi-Ling; Tan, Pei-Pei; Wang, Da-Wei; Cao, Mao-Sheng
2017-07-26
A flexible semitransparent energy harvester is assembled based on laterally aligned Pb(Zr0.52Ti0.48)O3 (PZT) single-crystal nanowires (NWs). Such a harvester presents the highest open-circuit voltage and a stable area power density of up to 10 V and 0.27 μW/cm², respectively. A high pressure sensitivity of 0.14 V/kPa is obtained in dynamic pressure sensing, much larger than the values reported in other energy harvesters based on piezoelectric single-crystal NWs. Furthermore, theoretical and finite element analyses also confirm that the piezoelectric voltage constant g33 of PZT NWs is competitive with lead-based bulk single crystals and ceramics, and the enhanced pressure sensitivity and power density are substantially linked to the flexible structure with laterally aligned PZT NWs. The energy harvester in this work holds great potential in flexible and transparent sensing and self-powered systems.
A flexible ligand-based wavy layered metal-organic framework for lithium-ion storage.
An, Tiance; Wang, Yuhang; Tang, Jing; Wang, Yang; Zhang, Lijuan; Zheng, Gengfeng
2015-05-01
A substantial challenge for direct utilization of metal-organic frameworks (MOFs) as lithium-ion battery anodes is to maintain the rigid MOF structure during lithiation/delithiation cycles. In this work, we developed a flexible, wavy layered nickel-based MOF (C20H24Cl2N8Ni, designated as Ni-Me4bpz) by a solvothermal approach of 3,3',5,5'-tetramethyl-4,4'-bipyrazole (H2Me4bpz) with nickel(II) chloride hexahydrate. The obtained MOF materials (Ni-Me4bpz) with metal azolate coordination mode provide 2-dimensional layered structure for Li(+) intercalation/extraction, and the H2Me4bpz ligands allow for flexible rotation feature and structural stability. Lithium-ion battery anodes made of the Ni-Me4bpz material demonstrate excellent specific capacity and cycling performance, and the crystal structure is well preserved after the electrochemical tests, suggesting the potential of developing flexible layered MOFs for efficient and stable electrochemical storage. Copyright © 2015 Elsevier Inc. All rights reserved.
Ultra-precise micro-motion stage for optical scanning test
NASA Astrophysics Data System (ADS)
Chen, Wen; Zhang, Jianhuan; Jiang, Nan
2009-05-01
This study addresses the application of optical sensing technology to a 2D flexible hinge test stage. The optical fiber sensor exploits the unique properties of optical fiber, such as good electrical insulation, immunity to electromagnetic disturbance, spark-free operation and usability in flammable and explosive environments, and offers high accuracy, wide dynamic range and good repeatability; it is applied to a 2D flexible hinge stage driven by PZT. Several micro-bending structures are designed utilizing the characteristics of the flexible hinge stage, and through experiments the optimal micro-bending tooth structure and the displacement sensor travel range under this structure are derived. The experiments demonstrate that applying the optical fiber displacement sensor to the PZT-driven 2D flexible hinge stage substantially broadens the dynamic testing range and improves the sensitivity of the apparatus. Driving accuracy and positioning stability are enhanced as well.
NASA Astrophysics Data System (ADS)
Albertani, Roberto
The concept of micro aerial vehicles (MAVs) is for a small, inexpensive and sometimes expendable platform, flying by remote pilot, in the field or autonomously. Because of the requirement to be flown either by almost inexperienced pilots or by autonomous control, they need to have very reliable and benevolent flying characteristics drive the design guidelines. A class of vehicles designed by the University of Florida adopts a flexible-wing concept, featuring a carbon fiber skeleton and a thin extensible latex membrane skin. Another typical feature of MAVs is a wingspan to propeller diameter ratio of two or less, generating a substantial influence on the vehicle aerodynamics. The main objectives of this research are to elucidate and document the static elastic flow-structure interactions in terms of measurements of the aerodynamic coefficients and wings' deformation as well as to substantiate the proposed inferences regarding the influence of the wings' structural flexibility on their performance; furthermore the research will provide experimental data to support the validation of CFD and FEA numerical models. A unique facility was developed at the University of Florida to implement a combination of a low speed wind tunnel and a visual image correlation system. The models tested in the wind tunnel were fabricated at the University MAV lab and consisted of a series of ten models with an identical geometry but differing in levels of structural flexibility and deformation characteristics. Results in terms of full-field displacements and aerodynamic coefficients from wind tunnel tests for various wind velocities and angles of attack are presented to demonstrate the deformation of the wing under steady aerodynamic load. The steady state effects of the propeller slipstream on the flexible wing's shape and its performance are also investigated. Analytical models of the aerodynamic and propulsion characteristics are proposed based on a multi dimensional linear regression analysis of non-linear functions. Conclusions are presented regarding the effects of the wing flexibility on some of the aerodynamic characteristics, including the effects of the propeller on the vehicle characteristics. Recommendations for future work will conclude this work.
Variational optimization algorithms for uniform matrix product states
NASA Astrophysics Data System (ADS)
Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.
2018-01-01
We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
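In hedged notation (symbols assumed here rather than copied from the paper), the contrast drawn above can be summarized as follows: frequency-independent waveform SOR relaxes each unknown waveform with a scalar parameter, whereas convolution SOR replaces that scalar with a convolution kernel,

\[
x_i^{k+1}(t) = x_i^{k}(t) + \omega\left(\hat{x}_i^{k+1}(t) - x_i^{k}(t)\right)
\qquad\text{versus}\qquad
x_i^{k+1}(t) = x_i^{k}(t) + \left(\omega * \bigl(\hat{x}_i^{k+1} - x_i^{k}\bigr)\right)(t),
\]

where \hat{x}_i^{k+1} denotes the Gauss-Seidel waveform update of component i and \omega(t) is the kernel whose transform can be tuned frequency by frequency.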
Algorithm for calculating turbine cooling flow and the resulting decrease in turbine efficiency
NASA Technical Reports Server (NTRS)
Gauntner, J. W.
1980-01-01
An algorithm is presented for calculating both the quantity of compressor bleed flow required to cool the turbine and the decrease in turbine efficiency caused by the injection of cooling air into the gas stream. The algorithm, which is intended for an axial-flow, air-cooled turbine, is suitable for use as a subroutine in a properly written thermodynamic cycle code. Ten different cooling configurations are available for each row of cooled airfoils in the turbine. Results from the algorithm are substantiated by comparison with flows predicted by major engine manufacturers for given bulk metal temperatures and given cooling configurations. A list of definitions for the terms in the subroutine is presented.
NASA Astrophysics Data System (ADS)
Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying
2018-03-01
This paper addresses the problem of rigid-flexible coupling dynamic modeling and active control of a novel flexible parallel manipulator (PM) with multiple actuation modes. Firstly, based on the flexible multi-body dynamics theory, the rigid-flexible coupling dynamic model (RFDM) of system is developed by virtue of the augmented Lagrangian multipliers approach. For completeness, the mathematical models of permanent magnet synchronous motor (PMSM) and piezoelectric transducer (PZT) are further established and integrated with the RFDM of mechanical system to formulate the electromechanical coupling dynamic model (ECDM). To achieve the trajectory tracking and vibration suppression, a hierarchical compound control strategy is presented. Within this control strategy, the proportional-differential (PD) feedback controller is employed to realize the trajectory tracking of end-effector, while the strain and strain rate feedback (SSRF) controller is developed to restrain the vibration of the flexible links using PZT. Furthermore, the stability of the control algorithm is demonstrated based on the Lyapunov stability theory. Finally, two simulation case studies are performed to illustrate the effectiveness of the proposed approach. The results indicate that, under the redundant actuation mode, the hierarchical compound control strategy can guarantee the flexible PM achieves singularity-free motion and vibration attenuation within task workspace simultaneously. The systematic methodology proposed in this study can be conveniently extended for the dynamic modeling and efficient controller design of other flexible PMs, especially the emerging ones with multiple actuation modes.
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Nurse scheduling has long been a notable problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave, and undesirable work schedules are partly responsible for that working condition. Fundamentally, the head nurse's responsibilities and the nurses' needs are poorly reconciled. In particular, because nurse preferences weigh heavily, the central challenge of nurse scheduling is the failure to foster tolerance between both parties during shift assignment in real working scenarios. Flexibility in shift assignment is therefore hard to achieve while satisfying diverse nurse requests and upholding mandatory ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The limitations of the EA are discussed, and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. The paper considers three types of constraints, namely hard, semi-hard and soft constraints, which are handled by the EA with an enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to efficient constraint handling and fitness computation as well as flexibility in the search, corresponding to the principles of exploration and exploitation.
Pre-operative prediction of surgical morbidity in children: comparison of five statistical models.
Cooper, Jennifer N; Wei, Lai; Fernandez, Soledad A; Minneci, Peter C; Deans, Katherine J
2015-02-01
The accurate prediction of surgical risk is important to patients and physicians. Logistic regression (LR) models are typically used to estimate these risks. However, in the fields of data mining and machine-learning, many alternative classification and prediction algorithms have been developed. This study aimed to compare the performance of LR to several data mining algorithms for predicting 30-day surgical morbidity in children. We used the 2012 National Surgical Quality Improvement Program-Pediatric dataset to compare the performance of (1) a LR model that assumed linearity and additivity (simple LR model) (2) a LR model incorporating restricted cubic splines and interactions (flexible LR model) (3) a support vector machine, (4) a random forest and (5) boosted classification trees for predicting surgical morbidity. The ensemble-based methods showed significantly higher accuracy, sensitivity, specificity, PPV, and NPV than the simple LR model. However, none of the models performed better than the flexible LR model in terms of the aforementioned measures or in model calibration or discrimination. Support vector machines, random forests, and boosted classification trees do not show better performance than LR for predicting pediatric surgical morbidity. After further validation, the flexible LR model derived in this study could be used to assist with clinical decision-making based on patient-specific surgical risks. Copyright © 2014 Elsevier Ltd. All rights reserved.
Methods for the identification of material parameters in distributed models for flexible structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Crowley, J. M.; Rosen, I. G.
1986-01-01
Theoretical and numerical results are presented for inverse problems involving estimation of spatially varying parameters such as stiffness and damping in distributed models for elastic structures such as Euler-Bernoulli beams. An outline of algorithms used and a summary of computational experiences are presented.
How to Teach Procedures, Problem Solving, and Concepts in Microbial Genetics
ERIC Educational Resources Information Center
Bainbridge, Brian W.
1977-01-01
Flow-diagrams, algorithms, decision logic tables, and concept maps are presented in detail as methods for teaching practical procedures, problem solving, and basic concepts in microbial genetics. It is suggested that the flexible use of these methods should lead to an improved understanding of microbial genetics. (Author/MA)
Distributed Coordination of Energy Storage with Distributed Generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Stoorvogel, Antonie A.
2016-07-18
With a growing emphasis on energy efficiency and system flexibility, a great effort has been made recently in developing distributed energy resources (DER), including distributed generators and energy storage systems. This paper first formulates an optimal coordination problem considering constraints at both system and device levels, including power balance constraint, generator output limits, storage energy and power capacity and charging/discharging efficiencies. An algorithm is then proposed to dynamically and automatically coordinate DERs in a distributed manner. With the proposed algorithm, the agent at each DER only maintains a local incremental cost and updates it through information exchange with a few neighbors, without relying on any central decision maker. Simulation results are used to illustrate and validate the proposed algorithm.
Modeling and Bayesian parameter estimation for shape memory alloy bending actuators
NASA Astrophysics Data System (ADS)
Crews, John H.; Smith, Ralph C.
2012-04-01
In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of a SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fit model parameters is then quantified using Markov Chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
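A minimal random walk Metropolis sampler of the kind referenced above is sketched below; the Gaussian likelihood, proposal scale, and toy model are illustrative assumptions and do not reproduce the homogenized energy model or the actuator parameters of the paper.

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, step, n_samples, rng=None):
    """Random walk Metropolis: symmetric Gaussian proposals around the current state."""
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)  # propose a move
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                # accept or reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Illustrative use: posterior of a slope/offset model with Gaussian measurement error
t = np.linspace(0, 1, 50)
y = 2.0 * t + 0.5 + 0.05 * np.random.default_rng(0).standard_normal(t.size)
log_post = lambda p: -0.5 * np.sum((y - (p[0] * t + p[1])) ** 2) / 0.05 ** 2
chain = random_walk_metropolis(log_post, theta0=[1.0, 0.0], step=0.02, n_samples=5000)
```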
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spotz, William F.
PyTrilinos is a set of Python interfaces to compiled Trilinos packages. This collection supports serial and parallel dense linear algebra, serial and parallel sparse linear algebra, direct and iterative linear solution techniques, algebraic and multilevel preconditioners, nonlinear solvers and continuation algorithms, eigensolvers and partitioning algorithms. Also included are a variety of related utility functions and classes, including distributed I/O, coloring algorithms and matrix generation. PyTrilinos vector objects are compatible with the popular NumPy Python package. As a Python front end to compiled libraries, PyTrilinos takes advantage of the flexibility and ease of use of Python, and the efficiency of the underlying C++, C and Fortran numerical kernels. This paper covers recent, previously unpublished advances in the PyTrilinos package.
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
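A minimal real-coded genetic algorithm on a multimodal "hills" landscape, in the spirit of the model problem above, is sketched below; the fitness surface, population size, and operator rates are illustrative assumptions rather than the paper's actual test specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Illustrative multimodal landscape on [0, 1]^n with many equal-height hills
    return np.sum(np.sin(5 * np.pi * x) ** 2, axis=-1)

def genetic_algorithm(n_genes=4, pop_size=40, n_gen=100, p_mut=0.1):
    pop = rng.random((pop_size, n_genes))
    for _ in range(n_gen):
        fit = fitness(pop)
        # binary tournament selection
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # uniform crossover between consecutive parents
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the search box
        mutate = rng.random(pop.shape) < p_mut
        children = np.clip(children + mutate * rng.normal(0, 0.1, pop.shape), 0, 1)
        # elitism: carry the best individual forward unchanged
        children[0] = pop[np.argmax(fit)]
        pop = children
    return pop[np.argmax(fitness(pop))]

print(genetic_algorithm())
```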
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
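In hedged notation (the authors' exact weighting function and constraint parameters are not reproduced here), the reweighting step at the heart of such schemes can be summarized as

\[
u^{(k+1)} = \arg\min_{u \ge 0} \sum_{i} w_i^{(k)} \, \bigl|(\nabla u)_i\bigr|
\quad \text{s.t.} \quad \|P u - p\|_2 \le \varepsilon,
\qquad
w_i^{(k+1)} = \frac{1}{\bigl|(\nabla u^{(k+1)})_i\bigr| + \delta},
\]

where P is the forward projection operator, p the measured projection data, \varepsilon the data-fidelity tolerance, and \delta a small constant; one common choice of weight is shown, with the single parameter \delta controlling how strongly sparsity of the image gradient is promoted.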
A Programmable Five Qubit Quantum Computer Using Trapped Atomic Ions
NASA Astrophysics Data System (ADS)
Debnath, Shantanu
Quantum computers can solve certain problems more efficiently compared to conventional classical methods. In the endeavor to build a quantum computer, several competing platforms have emerged that can implement certain quantum algorithms using a few qubits. However, the demonstrations so far have usually been done by tailoring the hardware to meet the requirements of a particular algorithm implemented for a limited number of instances. Although such proof-of-principle implementations are important to verify the working of algorithms on a physical system, they further need to have the potential to serve as a general purpose quantum computer allowing the flexibility required for running multiple algorithms and be scaled up to host more qubits. Here we demonstrate a small programmable quantum computer based on five trapped atomic ions each of which serves as a qubit. By optically resolving each ion we can individually address them in order to perform a complete set of single-qubit and fully connected two-qubit quantum gates and also perform efficient individual qubit measurements. We implement a computation architecture that accepts an algorithm from a user interface in the form of a standard logic gate sequence and decomposes it into fundamental quantum operations that are native to the hardware using a set of compilation instructions that are defined within the software. These operations are then effected through a pattern of laser pulses that perform coherent rotations on targeted qubits in the chain. The architecture implemented in the experiment therefore gives us unprecedented flexibility in the programming of any quantum algorithm while staying blind to the underlying hardware. As a demonstration we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms on the five-qubit processor and achieve average success rates of 95 and 90 percent, respectively. We also implement a five-qubit coherent quantum Fourier transform and examine its performance in the period finding and phase estimation protocol. We find fidelities of 84 and 62 percent, respectively. While maintaining the same computation architecture the system can be scaled to more ions using resources that scale favorably (O(N^2)) with the number of qubits N.
NASA Astrophysics Data System (ADS)
Zarchi, Milad; Attaran, Behrooz
2017-11-01
This study develops a mathematical model to investigate the behaviour of adaptable shock absorber dynamics for the six-degree-of-freedom aircraft model in the taxiing phase. The purpose of this research is to design a proportional-integral-derivative technique for control of an active vibration absorber system using a hydraulic nonlinear actuator based on the bees algorithm. This optimization algorithm is inspired by the natural intelligent foraging behaviour of honey bees. The neighbourhood search strategy is used to find better solutions around the previous one. The parameters of the controller are adjusted by minimizing the aircraft's acceleration and impact force as the multi-objective function. The major advantages of this algorithm over other optimization algorithms are its simplicity, flexibility and robustness. The results of the numerical simulation indicate that the active suspension increases the comfort of the ride for passengers and the fatigue life of the structure. This is achieved by decreasing the impact force, displacement and acceleration significantly.
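For reference, the control law being tuned above is the standard PID form; the cost below is written in hedged notation and only illustrates the structure the bees algorithm searches over, with the weights \alpha and \beta assumed rather than taken from the paper:

\[
u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{de(t)}{dt},
\qquad
J(K_p, K_i, K_d) = \alpha \int_0^T \ddot{z}^2(t)\, dt + \beta \int_0^T F_{\mathrm{impact}}^2(t)\, dt,
\]

where e(t) is the suspension tracking error, \ddot{z}(t) the vertical acceleration of the airframe, and F_{\mathrm{impact}}(t) the strut impact force; the bees algorithm searches the (K_p, K_i, K_d) space and refines promising gain sets through neighborhood search.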
HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.
NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: It performs well for both dense and sparse matrices, and allows the user to choose any one of the multiple algorithms for solving the updates to the low rank factors W and H within the alternating iterations.
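In standard notation (the factor symbols here are generic, not the paper's macros), the alternating nonnegative least squares iteration referenced above solves

\[
W \leftarrow \arg\min_{W \ge 0} \| A - W H \|_F^2,
\qquad
H \leftarrow \arg\min_{H \ge 0} \| A - W H \|_F^2,
\]

with the two subproblems solved in turn until the approximation A ≈ WH stops improving; in the distributed setting described above, A, W, and H are partitioned across processors and MPI is used to assemble the quantities each local update needs.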
Ring system-based chemical graph generation for de novo molecular design
NASA Astrophysics Data System (ADS)
Miyao, Tomoyuki; Kaneko, Hiromasa; Funatsu, Kimito
2016-05-01
Generating chemical graphs in silico by combining building blocks is important and fundamental in virtual combinatorial chemistry. A premise in this area is that generated structures should be irredundant as well as exhaustive. In this study, we develop structure generation algorithms regarding combining ring systems as well as atom fragments. The proposed algorithms consist of three parts. First, chemical structures are generated through a canonical construction path. During structure generation, ring systems can be treated as reduced graphs having fewer vertices than those in the original ones. Second, diversified structures are generated by a simple rule-based generation algorithm. Third, the number of structures to be generated can be estimated with adequate accuracy without actual exhaustive generation. The proposed algorithms were implemented in structure generator Molgilla. As a practical application, Molgilla generated chemical structures mimicking rosiglitazone in terms of a two dimensional pharmacophore pattern. The strength of the algorithms lies in simplicity and flexibility. Therefore, they may be applied to various computer programs regarding structure generation by combining building blocks.
Computational complexities and storage requirements of some Riccati equation solvers
NASA Technical Reports Server (NTRS)
Utku, Senol; Garba, John A.; Ramesh, A. V.
1989-01-01
The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.
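For concreteness, the equation whose solvers are being compared is, in standard LQR notation (with the weighting matrices assumed rather than taken from the paper),

\[
-\dot{P}(t) = A^{\mathsf T} P(t) + P(t) A - P(t) B R^{-1} B^{\mathsf T} P(t) + Q,
\qquad P(t_f) = P_f,
\]

with optimal feedback u(t) = -R^{-1} B^{\mathsf T} P(t)\, x(t). For an nth-order system P(t) is an n x n symmetric matrix, which is why the storage and operation counts grow rapidly with the number of state variables and dominate the cost for large flexible-structure models.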
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back up out of local minima that may be encountered by inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes only very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
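A generic simulated-annealing skeleton along these lines is sketched below; the neighbor, wirelength, and overlap functions are hypothetical placeholders standing in for the router's transformations and cost terms, and the penalty weight is illustrative.

```python
# Generic simulated-annealing skeleton in the spirit described above: transformations
# may create overlaps, which are discouraged by a penalty term in the cost rather than
# forbidden outright. All callables are hypothetical placeholders, not the channel router.
import math, random

def anneal(initial, neighbor, wirelength, overlap,
           T0=10.0, alpha=0.95, steps_per_T=200, T_min=1e-3):
    def cost(s):
        return wirelength(s) + 10.0 * overlap(s)   # penalty weight is illustrative
    state = initial
    c = cost(state)
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            cand = neighbor(state)                  # random transformation
            dc = cost(cand) - c
            if dc <= 0 or random.random() < math.exp(-dc / T):
                state, c = cand, c + dc             # accept (possibly uphill) move
        T *= alpha                                  # cool
    return state, c

# Toy usage: minimize a sum of squares with no overlap term.
random.seed(0)
best, c = anneal(
    [5.0, -3.0, 2.0],
    neighbor=lambda s: [v + random.uniform(-0.5, 0.5) for v in s],
    wirelength=lambda s: sum(v * v for v in s),
    overlap=lambda s: 0.0,
)
print([round(v, 2) for v in best], round(c, 3))
```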
General advancing front packing algorithm for the discrete element method
NASA Astrophysics Data System (ADS)
Morfa, Carlos A. Recarey; Pérez Morales, Irvin Pablo; de Farias, Márcio Muniz; de Navarra, Eugenio Oñate Ibañez; Valera, Roberto Roselló; Casañas, Harold Díaz-Guzmán
2018-01-01
A generic formulation of a new method for packing particles is presented. It is based on a constructive advancing front method, and uses Monte Carlo techniques for the generation of particle dimensions. The method can be used to obtain virtual dense packings of particles with several geometrical shapes. It employs continuous, discrete, and empirical statistical distributions in order to generate the dimensions of particles. The packing algorithm is very flexible and allows alternatives for: 1—the direction of the advancing front (inwards or outwards), 2—the selection of the local advancing front, 3—the method for placing a mobile particle in contact with others, and 4—the overlap checks. The algorithm also allows obtaining highly porous media when it is slightly modified. The use of the algorithm to generate real particle packings from grain size distribution curves, in order to carry out engineering applications, is illustrated. Finally, basic applications of the algorithm, which prove its effectiveness in the generation of a large number of particles, are carried out.
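One ingredient of such a constructive packing, Monte Carlo generation of particle sizes from an empirical distribution, can be sketched as follows; the grain-size curve values are made up for illustration.

```python
# Monte Carlo generation of particle diameters from an empirical grain-size
# distribution curve (cumulative fraction passing vs. diameter), one ingredient of
# the advancing-front packing described above. The curve values are illustrative.
import numpy as np

diam_mm = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])       # sieve sizes
passing = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 1.0])  # cumulative fraction

def sample_diameters(n, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random(n)                       # uniform samples in [0, 1]
    # Invert the empirical CDF by linear interpolation: passing -> diameter.
    return np.interp(u, passing, diam_mm)

print(sample_diameters(5))
```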
Sensibility study in a flexible job shop scheduling problem
NASA Astrophysics Data System (ADS)
Curralo, Ana; Pereira, Ana I.; Barbosa, José; Leitão, Paulo
2013-10-01
This paper assesses the impact of the job order on the optimal operation times in a Flexible Job Shop Scheduling Problem. In this work a real assembly cell was studied: the AIP-PRIMECA cell at the Université de Valenciennes et du Hainaut-Cambrésis, in France, which is treated as a Flexible Job Shop problem. The problem consists in finding the schedule of operations on the machines, taking into account the precedence constraints. The main objective is to minimize the batch makespan, i.e., the finish time of the last operation completed in the schedule. In short, the present study evaluates whether the job order affects the optimal time of the operations schedule. A genetic algorithm was used to solve the optimization problem. It is concluded that the job order influences the optimal time.
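The makespan objective being minimized can be evaluated for a fixed job order with a simple greedy decoding, as in the sketch below; the job data are hypothetical, and this is not the paper's genetic-algorithm encoding.

```python
# Makespan evaluation for a fixed job order, a sketch of the objective minimized by
# the genetic algorithm. Jobs are sequences of (machine, duration) operations; each
# operation starts when both its job predecessor and its machine are free. Scheduling
# the jobs one after another makes the result depend on the job order.
from collections import defaultdict

def makespan(jobs):
    machine_free = defaultdict(float)   # earliest free time of each machine
    job_free = defaultdict(float)       # completion time of each job's last operation
    finish = 0.0
    for j, operations in enumerate(jobs):
        for machine, duration in operations:
            start = max(machine_free[machine], job_free[j])
            end = start + duration
            machine_free[machine], job_free[j] = end, end
            finish = max(finish, end)
    return finish

# Two jobs, three machines (illustrative data only); reversing the list changes the result.
jobs = [[("M1", 3), ("M2", 2)], [("M2", 4), ("M3", 1)]]
print(makespan(jobs), makespan(list(reversed(jobs))))
```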
Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R. Todd; Papademetris, Xenophon
2013-01-01
Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project (www.bioimagesuite.org). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences. PMID:23319241
76 FR 20568 - HHS Plan for Retrospective Review Under Executive Order 13563
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
... impact on a substantial number of small businesses as required by the Regulatory Flexibility Act; annual... technologies to facilitate greater participation in the rulemaking process, particularly social media and... actual impact? Public Participation--HHS solicits comments on ways to further engage and increase public...
Using Response-to-Intervention to Enhance Outcomes for Children
ERIC Educational Resources Information Center
VanDerHeyden, Amanda M.; Jimerson, Shane R.
2005-01-01
Response to Intervention (RTI) models have substantial promise for screening, intervention service delivery, and to serve as catalysts for system change to enhance the educational outcomes of children. RTI represents a more flexible service delivery model; however, it is essential to articulate how RTI can be effectively implemented and…
Khan, Arshad; Lee, Sangeon; Jang, Taehee; Xiong, Ze; Zhang, Cuiping; Tang, Jinyao; Guo, L Jay; Li, Wen-Di
2016-06-01
A new structure of flexible transparent electrodes is reported, featuring a metal mesh fully embedded and mechanically anchored in a flexible substrate, and a cost-effective solution-based fabrication strategy for this new transparent electrode. The embedded nature of the metal-mesh electrodes provides a series of advantages, including surface smoothness that is crucial for device fabrication, mechanical stability under high bending stress, strong adhesion to the substrate with excellent flexibility, and favorable resistance against moisture, oxygen, and chemicals. The novel fabrication process replaces vacuum-based metal deposition with an electrodeposition process and is potentially suitable for high-throughput, large-volume, and low-cost production. In particular, this strategy enables fabrication of a high-aspect-ratio (thickness to linewidth) metal mesh, substantially improving conductivity without considerably sacrificing transparency. Various prototype flexible transparent electrodes are demonstrated with transmittance higher than 90% and sheet resistance below 1 Ω sq⁻¹, as well as extremely high figures of merit up to 1.5 × 10⁴, which are among the highest reported values in recent studies. Finally, using our embedded metal-mesh electrode, a flexible transparent thin-film heater is demonstrated with a low power density requirement, rapid response time, and a low operating voltage. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sun, Xiang-Yu; Zhao, Ping; Jin, Shu-Fang; Liu, Min-Chao; Wang, Xia-Hong; Huang, Yu-Min; Cheng, Zhen-Feng; Yan, Si-Qi; Li, Yan-Yu; Chen, Ya-Qing; Zhong, Yan-Mei
2017-08-01
DNA polymorphism has long fascinated a large scientific community. Without crystallographic structural data, clarification of the binding modes between G-quadruplex (G4) and ligand (complex) is a challenging task. In the present work, three porphyrin compounds with different flexible carbon chains (arms) were designed, synthesized and characterized. Their abilities to bind, fold and stabilize human telomeric G4 DNA structures were comparatively studied. Positive charges at the end of the flexible carbon chains seem to be favorable for the DNA-porphyrin interactions, which were evidenced by the spectral results and further confirmed by the molecular docking calculations. Biological function analysis demonstrated that these porphyrins show no substantial inhibition of HeLa, A549 and BEL 7402 cancer cell lines in the dark, while exhibiting broad inhibition under visible light. This significantly enhanced photocytotoxicity relative to the dark control is an essential property of photochemotherapeutic agents. The features of the flexible arms emerge as critical influencing factors in the cell photocytotoxicity. Moreover, an ROS-mediated mitochondrial dysfunction pathway was suggested for the cell apoptosis induced by these flexible-armed porphyrins. It is found that the porphyrins with positive charges located at the end of the flexible arms represent an exciting opportunity for photochemotherapeutic anti-cancer drug design. Copyright © 2017. Published by Elsevier B.V.
Backfilling with guarantees granted upon job submission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, Vitus Joseph; Bunde, David P.; Lindsay, Alexander M.
2011-01-01
In this paper, we present scheduling algorithms that simultaneously support guaranteed starting times and favor jobs with system desired traits. To achieve the first of these goals, our algorithms keep a profile with potential starting times for every unfinished job and never move these starting times later, just as in Conservative Backfilling. To achieve the second, they exploit previously unrecognized flexibility in the handling of holes opened in this profile when jobs finish early. We find that, with one choice of job selection function, our algorithms can consistently yield a lower average waiting time than Conservative Backfilling while still providing a guaranteed start time to each job as it arrives. In fact, in most cases, the algorithms give a lower average waiting time than the more aggressive EASY backfilling algorithm, which does not provide guaranteed start times. Alternately, with a different choice of job selection function, our algorithms can focus the benefit on the widest submitted jobs, the reason for the existence of parallel systems. In this case, these jobs experience significantly lower waiting time than Conservative Backfilling with minimal impact on other jobs.
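A simplified sketch of the reservation step underlying such profile-based scheduling is shown below: a new job receives the earliest start time at which enough processors remain free for its whole runtime. The paper's specific hole-handling and job-selection functions are not modeled.

```python
# Simplified reservation step in the spirit of Conservative Backfilling: a new job
# gets the earliest start time at which enough processors stay free for its whole
# runtime, given the reservations already in the profile. The refinements for holes
# left by early-finishing jobs described above are not modeled here.
def earliest_start(reservations, need, runtime, total_procs):
    """reservations: list of (start, end, procs) already placed in the profile."""
    def usage_at(t):
        return sum(p for s, e, p in reservations if s <= t < e)

    def max_usage(t0, t1):
        # Usage is piecewise constant and can only rise at reservation starts,
        # so sampling t0 and the starts inside (t0, t1) captures the maximum.
        sample = {t0} | {s for s, _, _ in reservations if t0 < s < t1}
        return max(usage_at(t) for t in sample)

    # Candidate start times: time zero and the end of every existing reservation.
    for t in sorted({0.0} | {e for _, e, _ in reservations}):
        if max_usage(t, t + runtime) + need <= total_procs:
            return t

reservations = [(0.0, 10.0, 4), (0.0, 5.0, 3)]
print(earliest_start(reservations, need=2, runtime=6.0, total_procs=8))  # 5.0
```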
A new algorithm for microwave delay estimation from water vapor radiometer data
NASA Technical Reports Server (NTRS)
Robinson, S. E.
1986-01-01
A new algorithm has been developed for the estimation of tropospheric microwave path delays from water vapor radiometer (WVR) data, which does not require site and weather dependent empirical parameters to produce high accuracy. Instead of taking the conventional linear approach, the new algorithm first uses the observables with an emission model to determine an approximate form of the vertical water vapor distribution, which is then explicitly integrated to estimate wet path delays in a second step. The intrinsic accuracy of this algorithm has been examined for two-channel WVR data using path delays and simulated observables computed from archived radiosonde data. It is found that annual RMS errors for a wide range of sites are in the range from 1.3 mm to 2.3 mm, in the absence of clouds. This is comparable to the best overall accuracy obtainable from conventional linear algorithms, which must be tailored to site and weather conditions using large radiosonde databases. The new algorithm's accuracy and flexibility are indications that it may be a good candidate for almost all WVR data interpretation.
EAGLE: EAGLE 'Is an' Algorithmic Graph Library for Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-01-16
The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. Today there are no tools to conduct "graph mining" on RDF standard data sets. We address that need through implementation of popular iterative Graph Mining algorithms (Triangle count, Connected component analysis, degree distribution, diversity degree, PageRank, etc.). We implement these algorithms as SPARQL queries, wrapped within Python scripts, and call our software tool EAGLE. In RDF style, EAGLE stands for "EAGLE 'Is an' algorithmic graph library for exploration." EAGLE is like 'MATLAB' for 'Linked Data.'
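In the style described, a graph-mining primitive can be written as a SPARQL query wrapped in a Python script, as sketched below; the endpoint URL is hypothetical and this is not EAGLE's actual code.

```python
# A graph-mining primitive (out-degree distribution) expressed as a SPARQL query
# wrapped in Python, in the style described above. The endpoint URL is hypothetical
# and this is not EAGLE's actual implementation.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:3030/dataset/sparql"   # hypothetical SPARQL endpoint

DEGREE_QUERY = """
SELECT ?node (COUNT(?neighbor) AS ?degree)
WHERE { ?node ?predicate ?neighbor . }
GROUP BY ?node
ORDER BY DESC(?degree)
LIMIT 10
"""

def out_degree_distribution():
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(DEGREE_QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [(b["node"]["value"], int(b["degree"]["value"]))
            for b in results["results"]["bindings"]]

if __name__ == "__main__":
    for node, degree in out_degree_distribution():
        print(node, degree)
```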
NASA Astrophysics Data System (ADS)
Wang, Fu; Liu, Bo; Zhang, Lijia; Jin, Feifei; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun
2017-03-01
The wavelength-division multiplexing passive optical network (WDM-PON) is a potential technology to carry multiple services in an optical access network. However, it has the disadvantages of high cost and immature technology for users. A software-defined WDM/time-division multiplexing PON was proposed to meet the requirements of high bandwidth, high performance, and multiple services. A reasonable and effective uplink dynamic bandwidth allocation algorithm was proposed. A controller with dynamic wavelength and slot assignment was introduced, and a different optical dynamic bandwidth management strategy was formulated flexibly for services of different priorities according to the network loading. The simulation compares the proposed algorithm with the interleaved polling with adaptive cycle time algorithm. The algorithm shows better performance in average delay, throughput, and bandwidth utilization. The results show that the delay is reduced to 62% and the throughput is improved by 35%.
A sonification algorithm for developing off-road models for driving simulators
NASA Astrophysics Data System (ADS)
Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia
2018-01-01
In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying the heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, which is suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements, leading to inhomogeneities of the cross-sectional curves of off-roads before and after deformation, unstable contact between the tire and the road, and an unrealistic distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems and results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual reality environment with increased degrees of realism for driving simulators and higher user flexibility.
Robust tuning of robot control systems
NASA Technical Reports Server (NTRS)
Minis, I.; Uebel, M.
1992-01-01
The computed torque control problem is examined for a robot arm with flexible, geared, joint drive systems which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach to computed torque control combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel torque control system based on model-following techniques. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in the evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
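For reference, the standard rigid-joint computed-torque law that the schemes above modify can be sketched as follows; M, C, and g are placeholder callables for the manipulator's inertia, Coriolis/centrifugal, and gravity terms.

```python
# Standard computed-torque law for a rigid-joint arm, shown only as the baseline that
# the modified schemes above build on. M, C and g are placeholder callables for the
# manipulator's inertia, Coriolis/centrifugal and gravity terms.
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
    e = q_des - q                   # position tracking error
    ed = qd_des - qd                # velocity tracking error
    v = qdd_des + Kd @ ed + Kp @ e  # PD-corrected reference acceleration
    return M(q) @ v + C(q, qd) @ qd + g(q)

# Toy single-link example with unit inertia, no Coriolis term, gravity torque.
M = lambda q: np.array([[1.0]])
C = lambda q, qd: np.array([[0.0]])
g = lambda q: np.array([9.81 * np.sin(q[0])])
tau = computed_torque(np.array([0.1]), np.array([0.0]),
                      np.array([0.5]), np.array([0.0]), np.array([0.0]),
                      M, C, g, Kp=np.diag([25.0]), Kd=np.diag([10.0]))
print(tau)
```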
NASA Astrophysics Data System (ADS)
Bu, Yanlong; Zhang, Qiang; Ding, Chibiao; Tang, Geshi; Wang, Hang; Qiu, Rujin; Liang, Libo; Yin, Hejun
2017-02-01
This paper presents an interplanetary optical navigation algorithm based on two spherical celestial bodies. The remarkable characteristic of the method is that key navigation parameters can be estimated using only the known sizes and ephemerides of the two celestial bodies; in particular, positioning is realized through a single image and no longer relies on traditional terrestrial radio tracking. Actual Earth-Moon group photos captured by China's Chang'e-5T1 probe were used to verify the effectiveness of the algorithm. From 430,000 km away from the Earth, the camera pointing accuracy reaches 0.01° (one sigma) and the inertial positioning error is less than 200 km; meanwhile, the costs of ground control and human resources are greatly reduced. The algorithm is flexible, easy to implement, and can provide a reference for interplanetary autonomous navigation in the solar system.
Knowledge-based vision for space station object motion detection, recognition, and tracking
NASA Technical Reports Server (NTRS)
Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III
1987-01-01
Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.
How anaesthesiologists understand difficult airway guidelines—an interview study
Knudsen, Kati; Nilsson, Ulrica; Larsson, Anders; Larsson, Jan
2017-01-01
Background In the practice of anaesthesia, clinical guidelines that aim to improve the safety of airway procedures have been developed. The aim of this study was to explore how anaesthesiologists understand or conceive of difficult airway management algorithms. Methods A qualitative phenomenographic design was chosen to explore anaesthesiologists’ views on airway algorithms. Anaesthesiologists working in three hospitals were included. Individual face-to-face interviews were conducted. Results Four different ways of understanding were identified, describing airway algorithms as: (A) a law-like rule for how to act in difficult airway situations; (B) a cognitive aid, an action plan for difficult airway situations; (C) a basis for developing flexible, personal action plans for the difficult airway; and (D) the experts’ consensus, a set of scientifically based guidelines for handling the difficult airway. Conclusions The interviewed anaesthesiologists understood difficult airway management guidelines/algorithms very differently. PMID:29299973
Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.
Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo
2016-08-26
Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems, by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy by the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.
Scheduling nursing personnel on a microcomputer.
Liao, C J; Kao, C Y
1997-01-01
Suggests that with the shortage of nursing personnel, hospital administrators have to pay more attention to the needs of nurses to retain and recruit them. Also asserts that improving nurses' schedules is one of the most economic ways for the hospital administration to create a better working environment for nurses. Develops an algorithm for scheduling nursing personnel. Contrary to the current hospital approach, which schedules nurses on a person-by-person basis, the proposed algorithm constructs schedules on a day-by-day basis. The algorithm has inherent flexibility in handling a variety of possible constraints and goals, similar to other non-cyclical approaches. But, unlike most other non-cyclical approaches, it can also generate a quality schedule in a short time on a microcomputer. The algorithm was coded in C language and run on a microcomputer. The developed software is currently implemented at a leading hospital in Taiwan. The response to the initial implementation is quite promising.
Analysis of modal behavior at frequency cross-over
NASA Astrophysics Data System (ADS)
Costa, Robert N., Jr.
1994-11-01
The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.
A Linear Bicharacteristic FDTD Method
NASA Technical Reports Server (NTRS)
Beggs, John H.
2001-01-01
The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics [1]-[7]. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility by the ability to adopt a characteristic based method. The use of characteristic variables allows the LBS to treat the outer computational boundaries naturally using the exact compatibility equations. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much easier to nonuniform grids. It has previously been applied to two and three-dimensional freespace electromagnetic propagation and scattering problems [3], [6], [7]. This paper extends the LBS to model lossy dielectric and magnetic materials. Results are presented for several one-dimensional model problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
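For comparison, a standard 1-D Yee leapfrog update of the kind used as a reference above is sketched below in normalized units; this is not the LBS itself, and the grid parameters and source are illustrative.

```python
# Standard 1-D Yee leapfrog FDTD update, shown only as the reference algorithm the
# LBS is compared against (this is not the LBS itself). Free space, normalized units,
# simple hard source; grid parameters are illustrative.
import numpy as np

nz, nt = 400, 600
c, dz = 1.0, 1.0
dt = 0.5 * dz / c                      # Courant number 0.5
Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)

for n in range(nt):
    # Update H from the spatial difference of E (staggered half step).
    Hy += (dt / dz) * (Ex[1:] - Ex[:-1])
    # Update interior E from the spatial difference of H.
    Ex[1:-1] += (dt / dz) * (Hy[1:] - Hy[:-1])
    # Gaussian hard source in the middle of the grid.
    Ex[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)

print(np.max(np.abs(Ex)))
```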
Hydraulic containment: analytical and semi-analytical models for capture zone curve delineation
NASA Astrophysics Data System (ADS)
Christ, John A.; Goltz, Mark N.
2002-05-01
We present an efficient semi-analytical algorithm that uses complex potential theory and superposition to delineate the capture zone curves of extraction wells. This algorithm is more flexible than previously published techniques and allows the user to determine the capture zone for a number of arbitrarily positioned extraction wells pumping at different rates. The algorithm is applied to determine the capture zones and optimal well spacing of two wells pumping at different flow rates and positioned at various orientations to the direction of regional groundwater flow. The algorithm is also applied to determine capture zones for non-colinear three-well configurations as well as to determine optimal well spacing for up to six wells pumping at the same rate. We show that the optimal well spacing is found by minimizing the difference in the stream function evaluated at the stagnation points.
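The core construction, superposing a uniform regional flow with logarithmic well terms and reading the stream function off the imaginary part of the complex potential, can be sketched as follows; the aquifer parameters are illustrative, and the full delineation procedure of tracing the bounding streamline through the stagnation points is not shown.

```python
# Complex potential for uniform regional flow plus superposed extraction wells, the
# basic superposition behind capture-zone delineation. Parameters are illustrative;
# tracing the bounding streamline through the stagnation points is not shown here.
import numpy as np

U = 1.0                                          # regional discharge per unit width
wells = [(0.0 + 0.0j, 3.0), (0.0 + 2.0j, 1.5)]   # (well location z_w, pumping rate Q)

def omega(z):
    """Complex potential phi + i*psi: uniform flow plus logarithmic well terms."""
    w = -U * z
    for zw, Q in wells:
        w += (Q / (2 * np.pi)) * np.log(z - zw)
    return w

def domega(z):
    """Complex velocity d(omega)/dz; stagnation points are its zeros."""
    w = -U + 0j
    for zw, Q in wells:
        w += (Q / (2 * np.pi)) / (z - zw)
    return w

# Stream function on a grid; capture-zone boundaries are level curves of psi
# passing through the stagnation points (where |domega| vanishes).
x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
psi = omega(x + 1j * y).imag
print(psi.shape, abs(domega(1.0 + 1.0j)))
```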
Gain in computational efficiency by vectorization in the dynamic simulation of multi-body systems
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Shareef, N. H.
1991-01-01
An improved technique for the identification and extraction of the exact quantities associated with the degrees of freedom at the element as well as the flexible body level is presented. It is implemented in the dynamic equations of motion based on the recursive formulation of Kane et al. (1987) and presented in a matrix form, integrating the concepts of strain energy, the finite-element approach, modal analysis, and reduction of equations. This technique eliminates the CPU-intensive matrix multiplication operations in the code's hot spots for the dynamic simulation of the interconnected rigid and flexible bodies. A study of a simple robot with flexible links is presented by comparing the execution times on a scalar machine and a vector-processor with and without vector options. Performance figures demonstrating the substantial gains achieved by the technique are plotted.
Fabrication of cellulose/graphene paper as a stable-cycling anode material without a collector.
Zhang, Chunliang; Cha, Ruitao; Yang, Luming; Mou, Kaiwen; Jiang, Xingyu
2018-03-15
Flexible and foldable devices attract substantial attention in low-cost electronics. Among the flexible substrate materials, paper has several attractive advantages. In our study, we fabricate cellulose/graphene paper by wet end formation (papermaking). The cationic polyacrylamide remarkably improves the retention ratio of graphene in the cellulose/graphene slurry. Besides, the cellulose/graphene paper exhibits good mechanical properties such as flexibility and folding endurance. We replace the copper foil collector with cellulose/graphene paper in collector-free lithium-ion batteries and investigate its electrochemical properties. The obtained results show that cellulose/graphene paper presents excellent charge-discharge stability after 1600 cycles as the anode of lithium-ion batteries. These advantages highlight the potential applications of cellulose/graphene paper as an anode material for lithium-ion batteries. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. 
Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in section IV followed by Section V presenting integration, test status, and state analysis. Finally, section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.
A fast D.F.T. algorithm using complex integer transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
Winograd (1976) has developed a new class of algorithms which depend heavily on the computation of a cyclic convolution for computing the conventional DFT (discrete Fourier transform); this new algorithm, for a few hundred transform points, requires substantially fewer multiplications than the conventional FFT algorithm. Reed and Truong have defined a special class of finite Fourier-like transforms over GF(q^2), where q = 2^p - 1 is a Mersenne prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 61. In the present paper it is shown that Winograd's algorithm can be combined with the aforementioned Fourier-like transform to yield a new algorithm for computing the DFT. A fast method for accurately computing the DFT of a sequence of complex numbers of very long transform-lengths is thus obtained.
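Only to illustrate the modular arithmetic underlying such transforms, the sketch below computes a naive cyclic convolution with residues modulo a Mersenne prime; it is neither Winograd's algorithm nor the GF(q^2) transform.

```python
# Naive cyclic convolution with arithmetic modulo a Mersenne prime q = 2^p - 1
# (here p = 13, q = 8191). This illustrates only the modular arithmetic underlying
# the Fourier-like transforms mentioned above; it is neither Winograd's algorithm
# nor the GF(q^2) transform itself.
p = 13
q = (1 << p) - 1          # 8191, a Mersenne prime

def cyclic_convolution_mod(a, b, modulus=q):
    n = len(a)
    assert len(b) == n
    out = [0] * n
    for i in range(n):
        for j in range(n):
            out[(i + j) % n] = (out[(i + j) % n] + a[i] * b[j]) % modulus
    return out

print(cyclic_convolution_mod([1, 2, 3, 4], [5, 6, 7, 8]))
```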
Gobin, Oliver C; Schüth, Ferdi
2008-01-01
Genetic algorithms are widely used to solve and optimize combinatorial problems and are more often applied for library design in combinatorial chemistry. Because of their flexibility, however, their implementation can be challenging. In this study, the influence of the representation of solid catalysts on the performance of genetic algorithms was systematically investigated on the basis of a new, constrained, multiobjective, combinatorial test problem with properties common to problems in combinatorial materials science. Constraints were satisfied by penalty functions, repair algorithms, or special representations. The tests were performed using three state-of-the-art evolutionary multiobjective algorithms by performing 100 optimization runs for each algorithm and test case. Experimental data obtained during the optimization of a noble metal-free solid catalyst system active in the selective catalytic reduction of nitric oxide with propene was used to build up a predictive model to validate the results of the theoretical test problem. A significant influence of the representation on the optimization performance was observed. Binary encodings were found to be the preferred encoding in most of the cases, and depending on the experimental test unit, repair algorithms or penalty functions performed best.
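The simplest of the constraint-handling strategies compared above, a penalty function added to the fitness, can be sketched as follows; the objective, constraints, and weight are hypothetical placeholders.

```python
# Penalty-function constraint handling, the simplest of the three strategies compared
# above (penalties, repair algorithms, special representations). The objective and
# constraint functions below are hypothetical placeholders.
def penalized_fitness(x, objective, constraints, weight=100.0):
    violation = sum(max(0.0, g(x)) for g in constraints)   # g(x) <= 0 means feasible
    return objective(x) + weight * violation                # to be minimized

# Example: minimize a sum of squares subject to the composition total not exceeding 1.
objective = lambda x: sum(xi ** 2 for xi in x)
constraints = [lambda x: sum(x) - 1.0]
print(penalized_fitness([0.3, 0.4, 0.5], objective, constraints))
```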
Flexible methods for segmentation evaluation: Results from CT-based luggage screening
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2017-01-01
BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346
Hyperspectral Imaging Using Flexible Endoscopy for Laryngeal Cancer Detection
Regeling, Bianca; Thies, Boris; Gerstner, Andreas O. H.; Westermann, Stephan; Müller, Nina A.; Bendix, Jörg; Laffers, Wiebke
2016-01-01
Hyperspectral imaging (HSI) is increasingly gaining acceptance in the medical field. Up until now, HSI has been used in conjunction with rigid endoscopy to detect cancer in vivo. The logical next step is to pair HSI with flexible endoscopy, since it improves access to hard-to-reach areas. While the flexible endoscope’s fiber optic cables provide the advantage of flexibility, they also introduce an interfering honeycomb-like pattern onto images. Due to the substantial impact this pattern has on locating cancerous tissue, it must be removed before the HS data can be further processed. Thereby, the loss of information is to be minimized, avoiding the suppression of small-area variations in pixel values. We have developed a system that uses flexible endoscopy to record HS cubes of the larynx and designed a special filtering technique to remove the honeycomb-like pattern with minimal loss of information. We have confirmed its feasibility by comparing it to conventional filtering techniques using an objective metric and by applying unsupervised and supervised classifications to raw and pre-processed HS cubes. Compared to conventional techniques, our method successfully removes the honeycomb-like pattern and considerably improves classification performance, while preserving image details. PMID:27529255
Hansen, Jens; Meretzky, David; Woldesenbet, Simeneh; Stolovitzky, Gustavo; Iyengar, Ravi
2017-12-18
Whole cell responses arise from coordinated interactions between diverse human gene products functioning within various pathways underlying sub-cellular processes (SCP). Lower level SCPs interact to form higher level SCPs, often in a context specific manner to give rise to whole cell function. We sought to determine if capturing such relationships enables us to describe the emergence of whole cell functions from interacting SCPs. We developed the Molecular Biology of the Cell Ontology based on standard cell biology and biochemistry textbooks and review articles. Currently, our ontology contains 5,384 genes, 753 SCPs and 19,180 expertly curated gene-SCP associations. Our algorithm to populate the SCPs with genes enables extension of the ontology on demand and the adaptation of the ontology to the continuously growing cell biological knowledge. Since whole cell responses most often arise from the coordinated activity of multiple SCPs, we developed a dynamic enrichment algorithm that flexibly predicts SCP-SCP relationships beyond the current taxonomy. This algorithm enables us to identify interactions between SCPs as a basis for higher order function in a context dependent manner, allowing us to provide a detailed description of how SCPs together can give rise to whole cell functions. We conclude that this ontology can, from omics data sets, enable the development of detailed SCP networks for predictive modeling of emergent whole cell functions.
Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.
McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark
2018-07-01
To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
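A generic stand-in for such a sparsity-promoting MAP estimate is sketched below using proximal gradient iterations with a non-negativity projection; the dictionary and mixed signal are synthetic, and this is not the paper's actual algorithm.

```python
# Generic sparsity-promoting MAP estimate of dictionary weights by proximal gradient
# (ISTA with a non-negativity projection). This is a stand-in for the iterative
# algorithm described above, not its actual implementation; the dictionary and mixed
# signal below are synthetic.
import numpy as np

def sparse_nonneg_map(D, y, lam=0.05, iters=500):
    # minimize 0.5*||D x - y||^2 + lam*||x||_1  subject to  x >= 0
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ x - y)
        # Gradient step, soft threshold, and projection onto the non-negative orthant.
        x = np.maximum(0.0, x - (grad + lam) / L)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((200, 50))
D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary entries
true_x = np.zeros(50); true_x[[3, 17]] = [0.7, 0.3]   # two-component "voxel"
y = D @ true_x
x_hat = sparse_nonneg_map(D, y)
print(np.argsort(x_hat)[-2:], np.round(np.sort(x_hat)[-2:], 2))
```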